[bootlin/training-materials updates] master: debugging: labs: add multiple mentions about where to run provided commands (fb2bf727)

Alexis Lothoré alexis.lothore at bootlin.com
Thu Aug 3 15:16:36 CEST 2023


Repository : https://github.com/bootlin/training-materials
On branch  : master
Link       : https://github.com/bootlin/training-materials/commit/fb2bf72715b7f5f96ccb60935aa39abf2ea9d087

>---------------------------------------------------------------

commit fb2bf72715b7f5f96ccb60935aa39abf2ea9d087
Author: Alexis Lothoré <alexis.lothore at bootlin.com>
Date:   Thu Aug 3 15:16:36 2023 +0200

    debugging: labs: add multiple mentions about where to run provided commands
    
    Trainees are sometimes unsure about where they are supposed to run commands.
    Add short mentions of either the target or the development host where
    relevant. Do not add too much detail, since trainees are expected to
    understand where and why they are running the provided commands.
    
    Signed-off-by: Alexis Lothoré <alexis.lothore at bootlin.com>


>---------------------------------------------------------------

fb2bf72715b7f5f96ccb60935aa39abf2ea9d087
 .../debugging-application-crash.tex                | 16 +++++++------
 .../debugging-application-profiling.tex            | 17 +++++++-------
 .../debugging-application-tracing.tex              | 10 ++++----
 .../debugging-kernel-debugging.tex                 | 27 +++++++++++-----------
 .../debugging-memory-issues.tex                    |  4 ++--
 labs/debugging-setup/debugging-setup.tex           |  3 ++-
 .../debugging-system-wide-profiling.tex            | 14 ++++++-----
 7 files changed, 49 insertions(+), 42 deletions(-)

diff --git a/labs/debugging-application-crash/debugging-application-crash.tex b/labs/debugging-application-crash/debugging-application-crash.tex
index 881045b8..250e3bfe 100644
--- a/labs/debugging-application-crash/debugging-application-crash.tex
+++ b/labs/debugging-application-crash/debugging-application-crash.tex
@@ -21,8 +21,8 @@ result.
 
 Take our \code{linked_list.c} program. It uses the \code{<sys/queue.h>} header
 which provides multiple linked-list implementations. This program creates and
-fill a linked list with the names read from a file. Compile it using the
-following command:
+fills a linked list with the names read from a file. Compile it from your
+development host using the following command:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/nfsroot/root/gdb/
@@ -30,8 +30,8 @@ $ make
 \end{bashinput}
 
 By default, it will look for a \code{word_list} file located in the current
-directory. This program should display the list of words that were read from
-the file.
+directory. This program, when run on the target, should display the list of
+words that were read from the file.
 
 \begin{bashinput}
 $ ./linked_list
@@ -40,7 +40,7 @@ $ ./linked_list
 From what you can see, it actually crashes! So we will use GDB to debug that
 program. We will do that remotely since our target does not embed a full gdb,
 only a gdbserver, a lightweight gdb server that allows connecting with a remote
-full feature GDB. Start our program using gdbserver in multi mode:
+full-featured GDB. Start our program on the target using gdbserver in multi mode:
 
 \begin{bashinput}
 $ gdbserver --multi :2000 ./linked_list
@@ -83,7 +83,8 @@ crash is not reproducible but crashes only once in a while.  If so, we can use
 the kernel coredump support to generate a core dump of the faulty application
 and do a post-mortem analysis.
 
-First of all, we need to enable kernel coredumping support of programs:
+First of all, we need to enable kernel coredump support for programs. On the
+target, run:
 
 \begin{bashinput}
 $ ulimit -c unlimited
@@ -128,7 +129,8 @@ iteration on the list. We would like to display each \code{struct name} as
 \code{index: name}. In order to access a struct field in gdb python, you can use
 \code{self.val['field_name']}.
 
-Once done, you can use the following commands to test your script:
+Once done, you can use the following commands in the gdb client session on your
+development host to test your script:
 
 \begin{bashinput}
 (gdb) source linked_list.py
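
The actual linked_list.c is not included in this message; as a rough sketch only
(assuming, from the pretty-printer description above, that each node carries an
integer index and a name string), a <sys/queue.h> based list could look like this:

/* Hypothetical sketch, not the lab's linked_list.c: an SLIST from
 * <sys/queue.h> whose nodes expose the fields a gdb pretty printer
 * would read through self.val['index'] and self.val['name']. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

struct name {
    int index;                  /* position of the word in the list */
    char *name;                 /* word read from the input file */
    SLIST_ENTRY(name) next;     /* linkage provided by <sys/queue.h> */
};

SLIST_HEAD(name_list, name);

int main(void)
{
    struct name_list head = SLIST_HEAD_INITIALIZER(head);
    const char *words[] = { "alpha", "bravo", "charlie" };

    /* SLIST_INSERT_HEAD prepends, so insert in reverse order */
    for (int i = 2; i >= 0; i--) {
        struct name *n = malloc(sizeof(*n));
        n->index = i;
        n->name = strdup(words[i]);
        SLIST_INSERT_HEAD(&head, n, next);
    }

    struct name *it;
    SLIST_FOREACH(it, &head, next)
        printf("%d: %s\n", it->index, it->name);

    return 0;
}
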
diff --git a/labs/debugging-application-profiling/debugging-application-profiling.tex b/labs/debugging-application-profiling/debugging-application-profiling.tex
index ef50c5bf..4f4f6be6 100644
--- a/labs/debugging-application-profiling/debugging-application-profiling.tex
+++ b/labs/debugging-application-profiling/debugging-application-profiling.tex
@@ -13,14 +13,14 @@
 
 Massif is really helpful to understand what is happening on the memory allocation
 side of an application. Compile the \code{heap_profile} example that we provided
-using the following command:
+using the following command on your development host:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/nfsroot/root/heap_profile
 $ make
 \end{bashinput}
 
-Once compile, on the target run it under massif using the following command:
+Once compiled, run it on the target under massif using the following command:
 
 \begin{bashinput}
 $ cd /root/heap_profile
@@ -63,7 +63,7 @@ tools.
 Let's start by profiling the application using the \code{cachegrind} tool. Our
 program takes two file names as parameters: an input PNG image and an output
 one. We provided a sample image in \code{tux_small.png} which can be used as an
-input file. First let's compile it using the following commands:
+input file. First let's compile it using the following commands on our development host:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/nfsroot/root/app_profiling
@@ -91,7 +91,8 @@ function that generates most of the D cache miss time.
 Based on that result, modify the program to be more cache efficient. Run again
 the cachegrind analysis to check that the modifications were actually effective.
 
-We also profile the execution time using callgrind with 
+We will also profile the execution time using callgrind, by running valgrind
+again on the target but with a different tool:
 
 \begin{bashinput}
 $ valgrind --tool=callgrind ./png_convert tux_small.png out.png
@@ -118,14 +119,14 @@ In order to have a better view of the performance of our program in a real
 system, we will use \code{perf}. In order to gather performance counters from
 the hardware, we will run our program using \code{perf stat}. We would like to
 observe the number of L1 data cache store misses. In order to select the correct
-event, use \code{perf list} to find it amongst the cache events:
+event, use \code{perf list} on the target to find it amongst the cache events:
 
 \begin{bashinput}
 $ perf list cache
 \end{bashinput}
 
-Once found, execute the program using perf stat and specified that event using
--e:
+Once found, execute the program on the target using perf stat and specify that
+event using -e:
 
 \begin{bashinput}
 $ perf stat -e L1-dcache-store-misses ./png_convert tux.png out.png
@@ -143,7 +144,7 @@ $ perf record ./png_convert tux_small.png out.png
 
 Once recorded, a \code{perf.data} file will be generated. This file will
 contain the traces that have been recorded. These traces can be analyzed using
-\code{perf report} on the development platform:
+\code{perf report} on the development host:
 
 \begin{bashinput}
 $ sudo chown $USER:$USER perf.data
diff --git a/labs/debugging-application-tracing/debugging-application-tracing.tex b/labs/debugging-application-tracing/debugging-application-tracing.tex
index b819e417..bebf8b04 100644
--- a/labs/debugging-application-tracing/debugging-application-tracing.tex
+++ b/labs/debugging-application-tracing/debugging-application-tracing.tex
@@ -17,7 +17,7 @@ $ cd /home/$USER/debugging-labs/nfsroot/root/ltrace/
 $ make
 \end{bashinput}
 
-From there, run the \code{authent} application on the target.
+Next, run the \code{authent} application on the target.
 
 \begin{bashinput}
 $ cd /root/ltrace
@@ -31,8 +31,8 @@ the default paths expected by ld (see \manpage{ld.so}{8}), we need to provide
 that path using \code{LD_LIBRARY_PATH}.
 
 As you can see, it seems our application is failing to correctly authenticate
-the system. Using {\em ltrace}, trace the application in order to understand
-what is going on.
+the system. Using {\em ltrace}, trace the application on the target in order to
+understand what is going on.
 
 \begin{bashinput}
 $ ltrace ./authent
@@ -44,14 +44,14 @@ In order to overload this check, we can use a \code{LD_PRELOAD} a library.
 We'll override the \code{al_authent_user()} based on the
 \code{authent_library.h} definitions. Create a file \code{overload.c} which
 overrides the \code{al_authent_user()}, prints the user and password, and returns 0.
-Compile it using the following command line:
+Compile it on your development host using the following command line:
 
 \begin{bashinput}
 $ ${CROSS_COMPILE}gcc -fPIC -shared overload.c -o overload.so
 \end{bashinput}
 
 Finally, run your application and preload the new library using the following
-command:
+command on the target:
 \begin{bashinput}
 $ LD_PRELOAD=./overload.so ./authent
 \end{bashinput}
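
The authent_library.h prototype is not shown in this message; as a minimal
sketch of the overload.c exercise above (the al_authent_user() signature below
is an assumption and should be adjusted to match the real header):

/* Hypothetical overload.c sketch: the real prototype comes from
 * authent_library.h, which is not part of this diff. */
#include <stdio.h>

/* Assumed signature; adjust to match authent_library.h. */
int al_authent_user(const char *user, const char *password)
{
    printf("al_authent_user: user=%s password=%s\n", user, password);
    return 0;   /* pretend authentication succeeded */
}

Built with ${CROSS_COMPILE}gcc -fPIC -shared as shown above, the resulting
overload.so takes precedence over the library symbol once preloaded with
LD_PRELOAD.
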
diff --git a/labs/debugging-kernel-debugging/debugging-kernel-debugging.tex b/labs/debugging-kernel-debugging/debugging-kernel-debugging.tex
index 2b855c36..2019981b 100644
--- a/labs/debugging-kernel-debugging/debugging-kernel-debugging.tex
+++ b/labs/debugging-kernel-debugging/debugging-kernel-debugging.tex
@@ -15,7 +15,7 @@
 
 \kconfig{CONFIG_PROVE_LOCKING} and \kconfig{CONFIG_DEBUG_ATOMIC_SLEEP} have been
 enabled in the provided kernel image.
-First, compile the module using the following command line:
+First, compile the module on your development host using the following command line:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/nfsroot/root/locking
@@ -25,7 +25,7 @@ $ export KDIR=/home/$USER/debugging-labs/buildroot/output/build/linux-5.13/
 $ make
 \end{bashinput}
 
-Load the \code{locking.ko} module and look at the output in dmesg:
+On the target, load the \code{locking.ko} module and look at the output in dmesg:
 
 \begin{bashinput}
 # cd /root/locking
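
The provided locking.c is not shown in this message; purely as an illustration
of the kind of pattern such reports point at, sleeping with a spinlock held is
enough to trigger CONFIG_DEBUG_ATOMIC_SLEEP (the module below is a hypothetical
example, not the lab's locking.ko):

/* Illustrative only: sleeping in atomic context, reported in dmesg as
 * "BUG: sleeping function called from invalid context". */
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/delay.h>

static DEFINE_SPINLOCK(demo_lock);

static int __init bad_locking_init(void)
{
    spin_lock(&demo_lock);
    msleep(10);                 /* sleeping while holding a spinlock */
    spin_unlock(&demo_lock);
    return 0;
}

static void __exit bad_locking_exit(void)
{
}

module_init(bad_locking_init);
module_exit(bad_locking_exit);
MODULE_LICENSE("GPL");
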
@@ -39,9 +39,9 @@ have been reported by the \code{lockdep} system.
 \section{Kmemleak}
 
 The provided kernel image contains kmemleak but it is disabled by default to
-avoid having a large overhead. In order to enable it, reboot and enable it by
-adding \code{kmemleak=on} on the command line. Interrupt U-Boot at reboot and
-modify the \code{bootargs} variable:
+avoid having a large overhead. In order to enable it, reboot the target and enable
+kmemleak by adding \code{kmemleak=on} on the command line. Interrupt U-Boot at
+reboot and modify the \code{bootargs} variable:
 
 \begin{bashinput}
 STM32MP> env edit bootargs
@@ -49,7 +49,7 @@ STM32MP> <existing bootargs> kmemleak=on
 STM32MP> boot
 \end{bashinput}
 
-Then compile the kmemleak test module:
+Then compile the dummy test module on your development host:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/nfsroot/root/kmemleak
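
The dummy test module itself is provided in the lab sources and not shown in
this message; purely as an illustration, the kind of leak kmemleak flags can be
as small as an allocation whose only reference is dropped (hypothetical module,
not the one used in the lab):

/* Illustrative only: kmemleak will later report this allocation as an
 * unreferenced object, since no pointer to it is kept anywhere. */
#include <linux/module.h>
#include <linux/slab.h>

static int __init leaky_init(void)
{
    void *p = kmalloc(1024, GFP_KERNEL);

    (void)p;        /* pointer discarded, memory never freed */
    return 0;
}

static void __exit leaky_exit(void)
{
}

module_init(leaky_init);
module_exit(leaky_exit);
MODULE_LICENSE("GPL");
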
@@ -96,7 +96,7 @@ that did exist in the 5.13 kernel version!
 
 \section{OOPS analysis}
 We noticed that the watchdog command generated a crash on the kernel. In order
-to reproduce the crash, run the following command:
+to reproduce the crash, run the following command on the target:
 
 \begin{bashinput}
 $ watchdog -T 10 -t 5 /dev/watchdog0
@@ -132,7 +132,7 @@ compiled with \code{-g} compiler flag, which adds a lot of debugging
 information (matching between source code lines and assembly for instance).
 
 Using \code{addr2line}, find the exact source code line where the crash happened.
-For that, you can use the following command:
+For that, you can use the following command on your development host:
 
 \begin{bashinput}
 $ addr2line -e /home/$USER/debugging-labs/buildroot/output/build/linux-5.13/vmlinux
@@ -162,7 +162,7 @@ $ ./scripts/decode_stacktrace.sh vmlinux < ~/debugging-labs/oops.txt
 In order to debug this OOPS, we'll use KGDB which is an in-kernel debugger.
 The provided image already contains the necessary KGDB support and the watchdog
 has been disabled to avoid rebooting while debugging. In order to use KGDB and
-the console simultaneously, compile and run kdmx:
+the console simultaneously, compile and run kdmx on your development host:
 
 \begin{bashinput}
 $ git clone https://git.kernel.org/pub/scm/utils/kernel/kgdb/agent-proxy.git
@@ -215,7 +215,8 @@ STM32MP> boot
 \end{bashinput}
 
 Then the kernel will halt during boot waiting for a GDB process to be attached.
-Attached using the same command that was previously used:
+Attach the gdb client from your development host using the same command that
+was previously used:
 
 \begin{bashinput}
 $ gdb-multiarch /home/$USER/debugging-labs/buildroot/output/build/linux-5.13/vmlinux
@@ -263,7 +264,7 @@ KGDB.}
 KGDB also allows debugging modules and, thanks mainly to the GDB python scripts
 (\code{lx-symbols}), it is as easy as debugging kernel core code. In
 order to test that feature, we are going to compile a test module and break on
-it.
+it. On your development host, build the module:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/nfsroot/root/kgdb
@@ -339,7 +340,7 @@ debugged using gdb or crash.
 
 We will now build the dump-capture kernel which will be booted on crash using
 kexec. For that, we will use a simple buildroot image with a builtin initramfs
-using the following commands:
+using the following commands on the development host:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/buildroot
@@ -374,7 +375,7 @@ STM32MP> boot
 \end{bashinput}
 
 To load the crash kernel into the previously reserved memory zone, run the
-following command:
+following command on the target:
 
 \begin{bashinput}
 # kexec --type zImage -p /root/kexec/zImage --dtb=/root/kexec/stm32mp157a-dk1.dtb
diff --git a/labs/debugging-memory-issues/debugging-memory-issues.tex b/labs/debugging-memory-issues/debugging-memory-issues.tex
index bd234276..b2e270f5 100644
--- a/labs/debugging-memory-issues/debugging-memory-issues.tex
+++ b/labs/debugging-memory-issues/debugging-memory-issues.tex
@@ -9,8 +9,8 @@
 
 \section{valgrind \& vgdb}
 
-Go into the \code{valgrind folder} and compile \code{valgrind.c} with debugging
-information using:
+On your development host, go into the \code{valgrind} folder and compile
+\code{valgrind.c} with debugging information using:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/nfsroot/root/valgrind
diff --git a/labs/debugging-setup/debugging-setup.tex b/labs/debugging-setup/debugging-setup.tex
index a00fde51..5951ca09 100644
--- a/labs/debugging-setup/debugging-setup.tex
+++ b/labs/debugging-setup/debugging-setup.tex
@@ -127,7 +127,8 @@ Once flashed, plug the sdcard onto the STM32MP157D board and reboot the board.
 
 In order to use a rootfs on NFS, we will use an external rootfs. This can be
 specified by passing bootargs to the kernel. To do so, we are going to set the
-\code{bootargs} U-Boot variable and save the environment.
+\code{bootargs} U-Boot variable and save the environment. On the target, enter
+the following commands:
 
 \begin{bashinput}
 STM32MP1> env set bootargs root=/dev/nfs ip=192.168.0.100:::::eth0
diff --git a/labs/debugging-system-wide-profiling/debugging-system-wide-profiling.tex b/labs/debugging-system-wide-profiling/debugging-system-wide-profiling.tex
index b566fddc..2cb1c9d6 100644
--- a/labs/debugging-system-wide-profiling/debugging-system-wide-profiling.tex
+++ b/labs/debugging-system-wide-profiling/debugging-system-wide-profiling.tex
@@ -11,7 +11,7 @@
 
 \section{ftrace \& uprobes}
 
-First of all, we will start a small program using the following command:
+First of all, we will start a small program on the target using the following command:
 
 \begin{bashinput}
 $ mystery_program 1000 200 2 &
@@ -21,7 +21,8 @@ In order to trace a full system, we can use ftrace. However, if we want to trace
 the userspace, we'll need to add new tracepoints using uprobes. This can be done
 manually with the uprobe sysfs interface or using \code{perf probe}.
 
-Before starting to profile, we will compile our program to be instrumented:
+Before starting to profile, we will compile our program to be instrumented.
+On your development host, run:
 
 \begin{bashinput}
 $ cd /home/$USER/debugging-labs/nfsroot/root/system_profiling
@@ -105,7 +106,7 @@ this. We'll add 2 tracepoints:
 In order to create these tracepoints easily, we will create a
 \code{crc_random-tp.tp} file and generate the tracepoints using
 \code{lttng-gen-tp}. In order to install this tool, the \code{liblttng-ust-dev}
-should be installed:
+should be installed on your development host:
 
 \begin{bashinput}
 sudo apt install liblttng-ust-dev
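
The two tracepoints used in this lab are not spelled out in this hunk; as a
hypothetical example of the lttng-gen-tp input format only (provider and event
names below are assumptions, not the lab's actual crc_random-tp.tp), one entry
could look like:

/* Hypothetical crc_random-tp.tp entry consumed by lttng-gen-tp; the
 * generated header is then included by the instrumented program, which
 * calls tracepoint(crc_random, compute_entry, i); at the right place. */
TRACEPOINT_EVENT(
    crc_random,                         /* provider name (assumed) */
    compute_entry,                      /* tracepoint name (assumed) */
    TP_ARGS(int, iteration),
    TP_FIELDS(
        ctf_integer(int, iteration, iteration)
    )
)
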
@@ -159,14 +160,15 @@ on the remote computer.
 In our case, the hostname is buildroot so traces will be
 located in \code{$PWD/traces/buildroot/<session>}
 
-Using \code{babeltrace2}, you can display the raw traces that were acquired:
+Using \code{babeltrace2} directly on your development host, you can display the
+raw traces that were acquired:
 \begin{bashinput}
 $ sudo apt install babeltrace2
 $ babeltrace2 $PWD/traces/buildroot/<session>/
 \end{bashinput}
 
 In order to analyze our traces more visually, we are going to use tracecompass.
-Download \code{tracecompass} latest version and extract it using:
+Download the latest \code{tracecompass} version and extract it on your host using:
 
 \begin{bashinput}
 $ wget https://ftp.fau.de/eclipse/tracecompass/releases/8.1.0/rcp/trace-compass-8.1.0-20220919-0815-linux.gtk.x86_64.tar.gz
@@ -199,7 +201,7 @@ In order to profile the whole system, we are going to use perf and try to find
 the function that takes most of the time executing.
 
 First of all, we will run a global recording of functions and their backtrace on
-(all CPUs) during 10 seconds using the following command:
+(all CPUs) for 10 seconds using the following command on the target:
 
 \begin{bashinput}
 $ perf record -F 99 -g -- sleep 10



