[bootlin/training-materials updates] master: debugging: labs: add ftrace, perf, uprobe and kernelshark section (3634e4bf)

Clément Léger clement.leger at bootlin.com
Thu Nov 17 16:19:40 CET 2022


Repository : https://github.com/bootlin/training-materials
On branch  : master
Link       : https://github.com/bootlin/training-materials/commit/3634e4bfcba380804f2eaf1e46fd0b381516ec5b

>---------------------------------------------------------------

commit 3634e4bfcba380804f2eaf1e46fd0b381516ec5b
Author: Clément Léger <clement.leger at bootlin.com>
Date:   Thu Nov 17 16:11:14 2022 +0100

    debugging: labs: add ftrace, perf, uprobe and kernelshark section
    
    Signed-off-by: Clément Léger <clement.leger at bootlin.com>


>---------------------------------------------------------------

3634e4bfcba380804f2eaf1e46fd0b381516ec5b
 .../debugging-system-wide-profiling.tex            | 59 ++++++++++++++++++++--
 1 file changed, 55 insertions(+), 4 deletions(-)

diff --git a/labs/debugging-system-wide-profiling/debugging-system-wide-profiling.tex b/labs/debugging-system-wide-profiling/debugging-system-wide-profiling.tex
index 0a72b0ad..c509e27c 100644
--- a/labs/debugging-system-wide-profiling/debugging-system-wide-profiling.tex
+++ b/labs/debugging-system-wide-profiling/debugging-system-wide-profiling.tex
@@ -27,20 +27,71 @@ using:
 $ trace-cmd report
 \end{bashinput}
 
-This will display a trace of the longest chain of calls while itnerrupts were
+This will display a trace of the longest chain of calls while interrupts were
 disabled. Based on this report, can you find the code that generates this
 latency?
 
-\section{LTTng}
+\section{ftrace \& uprobes}
+
+First of all, we will start a small program using the following command:
+
+\begin{bashinput}
+$ mystery_program 1000 200 2 &
+\end{bashinput}
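+
+You can check that it is indeed running in the background with, for instance,
+\code{pgrep} (if available on the target):
+
+\begin{bashinput}
+$ pgrep -l mystery_program
+\end{bashinput}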
+
+In order to trace the whole system, we can use ftrace. However, if we want to
+trace userspace code, we'll need to add new tracepoints using uprobes. This can
+be done manually through the tracefs interface or using \code{perf probe}.
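+
+For reference, here is what the manual method could look like through the
+tracefs \code{uprobe_events} file (the probe name and the offset below are
+purely illustrative, a real instruction offset inside the binary would be
+needed, and tracefs may also be mounted under \code{/sys/kernel/debug/tracing}
+depending on the kernel):
+
+\begin{bashinput}
+$ cd /sys/kernel/tracing
+$ echo 'p:my_crc /root/system_profiling/crc_random:0x5c0' > uprobe_events
+$ echo 1 > events/uprobes/my_crc/enable
+$ cat trace
+\end{bashinput}
+
+We will not use this raw interface here and will rely on \code{perf probe}
+instead.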
 
-Go into the \code{crc_random} directory and open the \code{crc_random.c} file. This file
-computes crc32 on a buffer of random data. Compile it using the provided toolchain
+Before starting to profile, we will compile the program that we want to
+instrument:
 
 \begin{bashinput}
 $ cd /home/<user>/debugging-labs/nfsroot/root/system_profiling
 $  $(CROSS_COMPILE)-gcc -Wall -Werror -g3 crc_random.c -o crc_random
 \end{bashinput}
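+
+You can quickly check that the resulting binary embeds the debug information
+that perf will need (the output should mention "with debug_info, not
+stripped"):
+
+\begin{bashinput}
+$ file ./crc_random
+\end{bashinput}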
 
+On the target, we will create a uprobe in the main function of the
+\code{crc_random} program, which will be hit each time a crc is computed.
+First, let's list the line numbers that perf recognizes as valid probe
+locations:
+
+\begin{bashinput}
+$ perf probe --source=./ -x ./crc_random -L main
+\end{bashinput}
+
+{\em Note: in order to be able to add such a userspace probe, perf needs
+access to the symbols and the source files of the program.}
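+
+Optionally, you can also ask perf which local variables are accessible at a
+given line (this requires the same debug information), here line 35 which we
+are going to use below:
+
+\begin{bashinput}
+$ perf probe --source=./ -x ./crc_random --vars main:35
+\end{bashinput}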
+
+Then, we can create a uprobe and capture the crc value using:
+
+\begin{bashinput}
+$ perf probe --source=./ -x ./crc_random main:35 crc
+\end{bashinput}
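+
+The new dynamic event should now be listed by perf:
+
+\begin{bashinput}
+$ perf probe --list
+\end{bashinput}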
+
+We can finally trace the application using trace-cmd with this event:
+
+\begin{bashinput}
+$ trace-cmd record -e probe_crc_random:main_L35 ./crc_random
+^C
+\end{bashinput}
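+
+If you want a first look at the raw events before moving to the host, the
+trace can also be displayed directly on the target, as previously:
+
+\begin{bashinput}
+$ trace-cmd report
+\end{bashinput}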
+
+Then, using the kernelshark tool on the host, analyze the trace:
+
+\begin{bashinput}
+$ kernelshark
+\end{bashinput}
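+
+kernelshark works on the \code{trace.dat} file produced by
+\code{trace-cmd record}. Since the target root filesystem is mounted over NFS
+from the host, the file can be opened directly from there, for instance:
+
+\begin{bashinput}
+$ cd /home/<user>/debugging-labs/nfsroot/root/system_profiling
+$ kernelshark trace.dat
+\end{bashinput}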
+
+We can see that something is wrong: our process does not seem to compute crcs
+at a fixed rate. Let's also record the \code{sched_switch} events to see what
+is happening:
+
+\begin{bashinput}
+$ trace-cmd record -e probe_crc_random:main_L35 -e sched_switch ./crc_random
+^C
+\end{bashinput}
+
+Reanalyze the traces with kernelshark and try to understand what is going on.
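+
+As a text-based alternative to kernelshark, the scheduling events around our
+process can also be skimmed directly from the report, for instance:
+
+\begin{bashinput}
+$ trace-cmd report | grep -C 2 crc_random
+\end{bashinput}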
+
+\section{LTTng}
+
 In order to observe our program performance, we want to instrument it with
 tracepoints. We would like to know how much time it takes to compute the
 crc32 of a specific buffer.



