docs: update app trace cn trans

pull/8779/head
mofeifei 2022-03-28 16:44:30 +08:00 committed by BOT
parent 675e15a6b5
commit 4bd411d254
2 changed files with 215 additions and 212 deletions


@@ -5,23 +5,22 @@ Application Level Tracing library
Overview
--------
IDF provides useful feature for program behavior analysis: application level tracing. It is implemented in the corresponding library and can be enabled in menuconfig. This feature allows to transfer arbitrary data between host and {IDF_TARGET_NAME} via JTAG, UART or USB interfaces with small overhead on program execution. It is possible to use JTAG and UART interfaces simultaneously. The UART interface are mostly used for connection with SEGGER SystemView tool (see `SystemView <https://www.segger.com/products/development-tools/systemview/>`_).
ESP-IDF provides a useful feature for program behavior analysis: application level tracing. It is implemented in the corresponding library and can be enabled in menuconfig. This feature allows transferring arbitrary data between the host and {IDF_TARGET_NAME} via JTAG, UART, or USB interfaces with small overhead on program execution. It is possible to use JTAG and UART interfaces simultaneously. The UART interface is mostly used for connection with the SEGGER SystemView tool (see `SystemView <https://www.segger.com/products/development-tools/systemview/>`_).
Developers can use this library to send application specific state of execution to the host and receive commands or other type of info in the opposite direction at runtime. The main use cases of this library are:
Developers can use this library to send application-specific execution state to the host and receive commands or other types of information in the opposite direction at runtime. The main use cases of this library are:
1. Collecting application specific data, see :ref:`app_trace-application-specific-tracing`
2. Lightweight logging to the host, see :ref:`app_trace-logging-to-host`
3. System behavior analysis, see :ref:`app_trace-system-behaviour-analysis-with-segger-systemview`
4. Source code coverage, see :ref:`app_trace-gcov-source-code-coverage`
1. Collecting application-specific data. See :ref:`app_trace-application-specific-tracing`.
2. Lightweight logging to the host. See :ref:`app_trace-logging-to-host`.
3. System behavior analysis. See :ref:`app_trace-system-behaviour-analysis-with-segger-systemview`.
4. Source code coverage. See :ref:`app_trace-gcov-source-code-coverage`.
Tracing components when working over JTAG interface are shown in the figure below.
Tracing components used when working over the JTAG interface are shown in the figure below.
.. figure:: ../../_static/app_trace-overview.jpg
:align: center
:alt: Tracing Components when Working Over JTAG
:figclass: align-center
:alt: Tracing Components When Working Over JTAG
Tracing Components when Working Over JTAG
Tracing Components Used When Working Over JTAG
Modes of Operation
@@ -29,9 +28,9 @@ Modes of Operation
The library supports two modes of operation:
**Post-mortem mode**. This is the default mode. The mode does not need interaction with the host side. In this mode tracing module does not check whether host has read all the data from *HW UP BUFFER* buffer and overwrites old data with the new ones. This mode is useful when only the latest trace data are interesting to the user, e.g. for analyzing program's behavior just before the crash. Host can read the data later on upon user request, e.g. via special OpenOCD command in case of working via JTAG interface.
**Post-mortem mode:** This is the default mode. It does not need interaction with the host side. In this mode, the tracing module does not check whether the host has read all the data from the *HW UP BUFFER*, but directly overwrites old data with new data. This mode is useful when only the latest trace data is interesting to the user, e.g., for analyzing a program's behavior just before a crash. The host can read the data later upon user request, e.g., via a special OpenOCD command when working via the JTAG interface.
**Streaming mode.** Tracing module enters this mode when host connects to {IDF_TARGET_NAME}. In this mode, before writing new data to *HW UP BUFFER*, the tracing module checks that whether there is enough space in it and if necessary, waits for the host to read data and free enough memory. Maximum waiting time is controlled via timeout values passed by users to corresponding API routines. So when application tries to write data to the trace buffer using finite value of the maximum waiting time, it is possible situation that this data will be dropped. This is especially true for tracing from time critical code (ISRs, OS scheduler code, etc.) when infinite timeouts can lead to system malfunction. In order to avoid loss of such critical data, developers can enable additional data buffering via menuconfig option :ref:`CONFIG_APPTRACE_PENDING_DATA_SIZE_MAX`. This macro specifies the size of data which can be buffered in above conditions. The option can also help to overcome situation when data transfer to the host is temporarily slowed down, e.g. due to USB bus congestions. But it will not help when the average bitrate of the trace data stream exceeds the hardware interface capabilities.
**Streaming mode:** The tracing module enters this mode when the host connects to {IDF_TARGET_NAME}. In this mode, before writing new data to the *HW UP BUFFER*, the tracing module checks whether there is enough space in it and, if necessary, waits for the host to read data and free enough memory. The maximum waiting time is controlled via timeout values passed by users to the corresponding API routines. So when the application tries to write data to the trace buffer with a finite maximum waiting time, it is possible that this data will be dropped. This is especially true for tracing from time-critical code (ISRs, OS scheduler code, etc.), where infinite timeouts can lead to system malfunction. In order to avoid the loss of such critical data, developers can enable additional data buffering via the menuconfig option :ref:`CONFIG_APPTRACE_PENDING_DATA_SIZE_MAX`. This option specifies the size of data which can be buffered in the above conditions. It can also help to overcome situations when data transfer to the host is temporarily slowed down, e.g., due to USB bus congestion. But it will not help when the average bitrate of the trace data stream exceeds the hardware interface capabilities.
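The streaming-mode behavior described above can be sketched as a toy model: a bounded buffer where a write that cannot fit by the time its finite timeout expires is dropped, and a read by the host frees space. This is an illustration only, not the actual ESP-IDF implementation.

```python
class TraceBufferModel:
    """Toy model of streaming-mode trace buffering (illustrative only)."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.used = 0      # bytes currently pending in the buffer
        self.dropped = 0   # bytes lost because the host was too slow

    def write(self, nbytes: int) -> bool:
        if self.used + nbytes > self.capacity:
            # Finite timeout expired and the host did not free enough
            # space: the data is dropped, as described for streaming mode.
            self.dropped += nbytes
            return False
        self.used += nbytes
        return True

    def host_read(self, nbytes: int) -> None:
        # The host draining data frees space for subsequent writes.
        self.used = max(0, self.used - nbytes)
```

The model shows why an average trace bitrate above the interface capability cannot be fixed by buffering: writes keep failing until the host catches up.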
Configuration Options and Dependencies
@@ -41,37 +40,35 @@ Using of this feature depends on two components:
1. **Host side:** Application tracing is done over JTAG, so it needs OpenOCD to be set up and running on the host machine. For instructions on how to set it up, see :doc:`JTAG Debugging <../api-guides/jtag-debugging/index>`.
2. **Target side:** Application tracing functionality can be enabled in menuconfig. *Component config > Application Level Tracing* menu allows selecting
destination for the trace data (HW interface for transport: JTAG or/and UART). Choosing any of the destinations automatically enables ``CONFIG_APPTRACE_ENABLE`` option.
For UART interface user have to define baud rate, TX and RX pins numbers, and additional UART related parameters.
2. **Target side:** Application tracing functionality can be enabled in menuconfig. Go to the ``Component config`` > ``Application Level Tracing`` menu, which allows selecting the destination for the trace data (hardware interface for transport: JTAG and/or UART). Choosing any of the destinations automatically enables the ``CONFIG_APPTRACE_ENABLE`` option. For UART interfaces, users have to define the baud rate, TX and RX pin numbers, and additional UART-related parameters.
.. note::
In order to achieve higher data rates and minimize number of dropped packets it is recommended to optimize setting of JTAG clock frequency, so it is at maximum and still provides stable operation of JTAG, see :ref:`jtag-debugging-tip-optimize-jtag-speed`.
In order to achieve higher data rates and minimize the number of dropped packets, it is recommended to optimize the setting of JTAG clock frequency, so that it is at maximum and still provides stable operation of JTAG. See :ref:`jtag-debugging-tip-optimize-jtag-speed`.
There are two additional menuconfig options not mentioned above:
1. *Threshold for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_POSTMORTEM_FLUSH_THRESH`). This option is necessary due to the nature of working over JTAG. In that mode trace data are exposed to the host in 16 KB blocks. In post-mortem mode when one block is filled it is exposed to the host and the previous one becomes unavailable. In other words trace data are overwritten in 16 KB granularity. On panic the latest data from the current input block are exposed to host and host can read them for post-analysis. System panic may occur when very small amount of data are not exposed to the host yet. In this case the previous 16 KB of collected data will be lost and host will see the latest, but very small piece of the trace. It can be insufficient to diagnose the problem. This menuconfig option allows avoiding such situations. It controls the threshold for flushing data in case of panic. For example user can decide that it needs not less then 512 bytes of the recent trace data, so if there is less then 512 bytes of pending data at the moment of panic they will not be flushed and will not overwrite previous 16 KB. The option is only meaningful in post-mortem mode and when working over JTAG.
1. *Threshold for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_POSTMORTEM_FLUSH_THRESH`). This option is necessary due to the nature of working over JTAG. In this mode, trace data is exposed to the host in 16 KB blocks. In post-mortem mode, when one block is filled, it is exposed to the host and the previous one becomes unavailable. In other words, the trace data is overwritten in 16 KB granularity. On panic, the latest data from the current input block is exposed to the host and the host can read it for post-analysis. A system panic may occur when only a very small amount of data is not yet exposed to the host. In this case, the previous 16 KB of collected data will be lost and the host will see only the latest, very small piece of the trace, which can be insufficient to diagnose the problem. This menuconfig option allows avoiding such situations. It controls the threshold for flushing data in case of a panic. For example, users can decide that they need no less than 512 bytes of the recent trace data, so if there are less than 512 bytes of pending data at the moment of panic, they will not be flushed and will not overwrite the previous 16 KB. The option is only meaningful in post-mortem mode and when working over JTAG.
2. *Timeout for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_ONPANIC_HOST_FLUSH_TMO`). The option is only meaningful in streaming mode and controls the maximum time tracing module will wait for the host to read the last data in case of panic.
2. *Timeout for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_ONPANIC_HOST_FLUSH_TMO`). The option is only meaningful in streaming mode and it controls the maximum time that the tracing module will wait for the host to read the last data in case of panic.
3. *UART RX/TX ring buffer size* (:ref:`CONFIG_APPTRACE_UART_TX_BUFF_SIZE`). The size of the buffer depends on amount of data transfered through the UART.
3. *UART RX/TX ring buffer size* (:ref:`CONFIG_APPTRACE_UART_TX_BUFF_SIZE`). The size of the buffer depends on the amount of data transferred through the UART.
4. *UART TX message size* (:ref:`CONFIG_APPTRACE_UART_TX_MSG_SIZE`). The maximum size of a single message to transfer.
How to use this library
How to Use This Library
-----------------------
This library provides API for transferring arbitrary data between host and {IDF_TARGET_NAME}. When enabled in menuconfig target application tracing module is initialized automatically at the system startup, so all what the user needs to do is to call corresponding API to send, receive or flush the data.
This library provides APIs for transferring arbitrary data between the host and {IDF_TARGET_NAME}. When enabled in menuconfig, the target application tracing module is initialized automatically at system startup, so all the user needs to do is call the corresponding APIs to send, receive, or flush the data.
.. _app_trace-application-specific-tracing:
Application Specific Tracing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In general user should decide what type of data should be transferred in every direction and how these data must be interpreted (processed). The following steps must be performed to transfer data between target and host:
In general, users should decide what type of data should be transferred in each direction and how this data must be interpreted (processed). The following steps must be performed to transfer data between the target and the host:
1. On target side user should implement algorithms for writing trace data to the host. Piece of code below shows an example how to do this.
1. On the target side, users should implement algorithms for writing trace data to the host. The code snippet below shows an example of how to do this.
.. code-block:: c
@@ -84,7 +81,7 @@ In general user should decide what type of data should be transferred in every d
return res;
}
``esp_apptrace_write()`` function uses memcpy to copy user data to the internal buffer. In some cases it can be more optimal to use ``esp_apptrace_buffer_get()`` and ``esp_apptrace_buffer_put()`` functions. They allow developers to allocate buffer and fill it themselves. The following piece of code shows how to do this.
The ``esp_apptrace_write()`` function uses ``memcpy()`` to copy user data to the internal buffer. In some cases, it can be more optimal to use the ``esp_apptrace_buffer_get()`` and ``esp_apptrace_buffer_put()`` functions. They allow developers to allocate a buffer and fill it themselves. The following code snippet shows how to do this.
.. code-block:: c
@@ -99,12 +96,12 @@ In general user should decide what type of data should be transferred in every d
sprintf(ptr, "Here is the number %d", number);
esp_err_t res = esp_apptrace_buffer_put(ESP_APPTRACE_DEST_TRAX, ptr, 100/*tmo in us*/);
if (res != ESP_OK) {
/* in case of error host tracing tool (e.g. OpenOCD) will report incomplete user buffer */
/* in case of error host tracing tool (e.g., OpenOCD) will report incomplete user buffer */
ESP_LOGE(TAG, "Failed to put buffer!");
return res;
}
Also according to his needs user may want to receive data from the host. Piece of code below shows an example how to do this.
Users may also want to receive data from the host. The code snippet below shows an example of how to do this.
.. code-block:: c
@@ -127,7 +124,7 @@ In general user should decide what type of data should be transferred in every d
...
}
``esp_apptrace_read()`` function uses memcpy to copy host data to user buffer. In some cases it can be more optimal to use ``esp_apptrace_down_buffer_get()`` and ``esp_apptrace_down_buffer_put()`` functions. They allow developers to occupy chunk of read buffer and process it in-place. The following piece of code shows how to do this.
The ``esp_apptrace_read()`` function uses ``memcpy()`` to copy host data to the user buffer. In some cases, it can be more optimal to use the ``esp_apptrace_down_buffer_get()`` and ``esp_apptrace_down_buffer_put()`` functions. They allow developers to occupy a chunk of the read buffer and process it in place. The following code snippet shows how to do this.
.. code-block:: c
@@ -152,28 +149,32 @@ In general user should decide what type of data should be transferred in every d
}
esp_err_t res = esp_apptrace_down_buffer_put(ESP_APPTRACE_DEST_TRAX, ptr, 100/*tmo in us*/);
if (res != ESP_OK) {
/* in case of error host tracing tool (e.g. OpenOCD) will report incomplete user buffer */
/* in case of error host tracing tool (e.g., OpenOCD) will report incomplete user buffer */
ESP_LOGE(TAG, "Failed to put buffer!");
return res;
}
2. The next step is to build the program image and download it to the target as described in the :ref:`Getting Started Guide <get-started-build>`.
3. Run OpenOCD (see :doc:`JTAG Debugging <../api-guides/jtag-debugging/index>`).
4. Connect to OpenOCD telnet server. It can be done using the following command in terminal ``telnet <oocd_host> 4444``. If telnet session is opened on the same machine which runs OpenOCD you can use ``localhost`` as ``<oocd_host>`` in the command above.
5. Start trace data collection using special OpenOCD command. This command will transfer tracing data and redirect them to specified file or socket (currently only files are supported as trace data destination). For description of the corresponding commands see `OpenOCD Application Level Tracing Commands`_.
6. The final step is to process received data. Since format of data is defined by user the processing stage is out of the scope of this document. Good starting points for data processor are python scripts in ``$IDF_PATH/tools/esp_app_trace``: ``apptrace_proc.py`` (used for feature tests) and ``logtrace_proc.py`` (see more details in section `Logging to Host`_).
2. The next step is to build the program image and download it to the target as described in the :ref:`Getting Started Guide <get-started-build>`.
3. Run OpenOCD (see :doc:`JTAG Debugging <../api-guides/jtag-debugging/index>`).
4. Connect to the OpenOCD telnet server. It can be done using the following command in terminal: ``telnet <oocd_host> 4444``. If the telnet session is opened on the same machine which runs OpenOCD, you can use ``localhost`` as ``<oocd_host>`` in the command above.
5. Start trace data collection using the special OpenOCD command. This command will transfer tracing data and redirect it to the specified file or socket (currently only files are supported as trace data destinations). For a description of the corresponding commands, see `OpenOCD Application Level Tracing Commands`_.
6. The final step is to process the received data. Since the format of the data is defined by users, the processing stage is out of the scope of this document. Good starting points for a data processor are the Python scripts in ``$IDF_PATH/tools/esp_app_trace``: ``apptrace_proc.py`` (used for feature tests) and ``logtrace_proc.py`` (see more details in section `Logging to Host`_).
OpenOCD Application Level Tracing Commands
""""""""""""""""""""""""""""""""""""""""""
*HW UP BUFFER* is shared between user data blocks and filling of the allocated memory is performed on behalf of the API caller (in task or ISR context). In multithreading environment it can happen that task/ISR which fills the buffer is preempted by another high priority task/ISR. So it is possible situation that user data preparation process is not completed at the moment when that chunk is read by the host. To handle such conditions tracing module prepends all user data chunks with header which contains allocated user buffer size (2 bytes) and length of actually written data (2 bytes). So total length of the header is 4 bytes. OpenOCD command which reads trace data reports error when it reads incomplete user data chunk, but in any case it puts contents of the whole user chunk (including unfilled area) to output file.
The *HW UP BUFFER* is shared between user data blocks, and the filling of the allocated memory is performed on behalf of the API caller (in task or ISR context). In a multithreading environment, it can happen that the task/ISR which fills the buffer is preempted by another high-priority task/ISR. So it is possible that the user data preparation process is not completed at the moment when that chunk is read by the host. To handle such conditions, the tracing module prepends all user data chunks with a header which contains the allocated user buffer size (2 bytes) and the length of the actually written data (2 bytes), so the total length of the header is 4 bytes. The OpenOCD command which reads trace data reports an error when it reads an incomplete user data chunk, but in any case, it puts the contents of the whole user chunk (including the unfilled area) into the output file.
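The 4-byte chunk header described above is enough to split a raw trace dump into user chunks on the host. The sketch below assumes little-endian 16-bit header fields, which is an assumption for illustration; the authoritative processing is done by ``apptrace_proc.py``.

```python
import struct

def split_user_chunks(data: bytes):
    """Split a raw apptrace dump into (buffer_size, written, payload) tuples.

    Each chunk is prefixed by a 4-byte header: the allocated user buffer
    size (2 bytes) and the length of the actually written data (2 bytes).
    Little-endian field order is assumed here for illustration.
    """
    chunks = []
    offset = 0
    while offset + 4 <= len(data):
        buf_size, written = struct.unpack_from('<HH', data, offset)
        offset += 4
        # The whole allocated buffer is present in the dump, including
        # any unfilled area; only `written` bytes are valid user data.
        payload = data[offset:offset + buf_size][:written]
        offset += buf_size
        chunks.append((buf_size, written, payload))
    return chunks
```

A chunk with ``written < buffer_size`` is exactly the "incomplete user data chunk" case that the OpenOCD read command reports.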
Below is the description of available OpenOCD application tracing commands.
.. note::
Currently OpenOCD does not provide commands to send arbitrary user data to the target.
Currently, OpenOCD does not provide commands to send arbitrary user data to the target.
Command usage:
@@ -199,25 +200,25 @@ Start command syntax:
``outfile``
Path to file to save data from both CPUs. This argument should have the following format: ``file://path/to/file``.
``poll_period``
Data polling period (in ms) for available trace data. If greater than 0 then command runs in non-blocking mode. By default 1 ms.
Data polling period (in ms) for available trace data. If greater than 0, then command runs in non-blocking mode. By default, 1 ms.
``trace_size``
Maximum size of data to collect (in bytes). Tracing is stopped after specified amount of data is received. By default -1 (trace size stop trigger is disabled).
Maximum size of data to collect (in bytes). Tracing is stopped after specified amount of data is received. By default, -1 (trace size stop trigger is disabled).
``stop_tmo``
Idle timeout (in sec). Tracing is stopped if there is no data for specified period of time. By default -1 (disable this stop trigger). Optionally set it to value longer than longest pause between tracing commands from target.
Idle timeout (in sec). Tracing is stopped if there is no data for the specified period of time. By default, -1 (disables this stop trigger). Optionally, set it to a value longer than the longest pause between tracing commands from the target.
``wait4halt``
If 0 start tracing immediately, otherwise command waits for the target to be halted (after reset, by breakpoint etc.) and then automatically resumes it and starts tracing. By default 0.
If 0, start tracing immediately; otherwise, the command waits for the target to be halted (after reset, by breakpoint, etc.), then automatically resumes it and starts tracing. By default, 0.
``skip_size``
Number of bytes to skip at the start. By default 0.
Number of bytes to skip at the start. By default, 0.
.. note::
If ``poll_period`` is 0, OpenOCD telnet command line will not be available until tracing is stopped. You must stop it manually by resetting the board or pressing Ctrl+C in OpenOCD window (not one with the telnet session). Another option is to set ``trace_size`` and wait until this size of data is collected. At this point tracing stops automatically.
If ``poll_period`` is 0, the OpenOCD telnet command line will not be available until tracing is stopped. You must stop it manually by resetting the board or pressing Ctrl+C in the OpenOCD window (not the one with the telnet session). Another option is to set ``trace_size`` and wait until this size of data is collected. At this point, tracing stops automatically.
Command usage examples:
.. highlight:: none
1. Collect 2048 bytes of tracing data to a file "trace.log". The file will be saved in "openocd-esp32" directory.
1. Collect 2048 bytes of tracing data to the file ``trace.log``. The file will be saved in the ``openocd-esp32`` directory.
::
@@ -227,7 +228,7 @@ Command usage examples:
.. note::
Tracing data is buffered before it is made available to OpenOCD. If you see "Data timeout!" message, then the target is likely sending not enough data to empty the buffer to OpenOCD before expiration of timeout. Either increase the timeout or use a function ``esp_apptrace_flush()`` to flush the data on specific intervals.
Tracing data is buffered before it is made available to OpenOCD. If you see the "Data timeout!" message, then it is likely that the target is not sending enough data to empty the buffer to OpenOCD before the timeout expires. Either increase the timeout or use the function ``esp_apptrace_flush()`` to flush the data at specific intervals.
2. Retrieve tracing data indefinitely in non-blocking mode.
@@ -235,7 +236,7 @@ Command usage examples:
esp apptrace start file://trace.log 1 -1 -1 0 0
There is no limitation on the size of collected data and there is no any data timeout set. This process may be stopped by issuing ``esp apptrace stop`` command on OpenOCD telnet prompt, or by pressing Ctrl+C in OpenOCD window.
There is no limitation on the size of collected data and there is no data timeout set. This process may be stopped by issuing the ``esp apptrace stop`` command in the OpenOCD telnet prompt, or by pressing Ctrl+C in the OpenOCD window.
3. Retrieve tracing data and save them indefinitely.
@@ -243,35 +244,36 @@ Command usage examples:
esp apptrace start file://trace.log 0 -1 -1 0 0
OpenOCD telnet command line prompt will not be available until tracing is stopped. To stop tracing press Ctrl+C in OpenOCD window.
OpenOCD telnet command line prompt will not be available until tracing is stopped. To stop tracing, press Ctrl+C in the OpenOCD window.
4. Wait for target to be halted. Then resume target's operation and start data retrieval. Stop after collecting 2048 bytes of data:
4. Wait for the target to be halted. Then resume the target's operation and start data retrieval. Stop after collecting 2048 bytes of data:
::
esp apptrace start file://trace.log 0 2048 -1 1 0
To configure tracing immediately after reset use the openocd ``reset halt`` command.
To configure tracing immediately after reset, use the OpenOCD ``reset halt`` command.
.. _app_trace-logging-to-host:
Logging to Host
^^^^^^^^^^^^^^^
IDF implements useful feature: logging to host via application level tracing library. This is a kind of semihosting when all `ESP_LOGx` calls send strings to be printed to the host instead of UART. This can be useful because "printing to host" eliminates some steps performed when logging to UART. The most part of work is done on the host.
ESP-IDF implements a useful feature: logging to the host via the application level tracing library. This is a kind of semihosting, where all ``ESP_LOGx`` calls send strings to be printed to the host instead of UART. This can be useful because "printing to host" eliminates some steps performed when logging to UART. Most of the work is done on the host.
By default IDF's logging library uses vprintf-like function to write formatted output to dedicated UART. In general it involves the following steps:
By default, ESP-IDF's logging library uses a vprintf-like function to write formatted output to a dedicated UART. In general, it involves the following steps:
1. The format string is parsed to obtain the type of each argument.
2. According to its type every argument is converted to string representation.
2. According to its type, every argument is converted to a string representation.
3. The format string combined with the converted arguments is sent to UART.
Though implementation of vprintf-like function can be optimized to a certain level, all steps above have to be performed in any case and every step takes some time (especially item 3). So it frequently occurs that with additional log added to the program to identify the problem, the program behavior is changed and the problem cannot be reproduced or in the worst cases the program cannot work normally at all and ends up with an error or even hangs.
Though the implementation of the vprintf-like function can be optimized to a certain level, all the steps above have to be performed in any case, and every step takes some time (especially item 3). So it frequently occurs that adding a log to the program in order to identify a problem changes the program's behavior, and the problem cannot be reproduced. In the worst cases, the program cannot work normally at all and ends up with an error or even hangs.
Possible ways to overcome this problem are to use higher UART bitrates (or another faster interface) and/or move string formatting procedure to the host.
Possible ways to overcome this problem are to use higher UART bitrates (or another faster interface) and/or to move string formatting procedure to the host.
Application level tracing feature can be used to transfer log information to host using ``esp_apptrace_vprintf`` function. This function does not perform full parsing of the format string and arguments, instead it just calculates number of arguments passed and sends them along with the format string address to the host. On the host log data are processed and printed out by a special Python script.
The application level tracing feature can be used to transfer log information to the host using the ``esp_apptrace_vprintf`` function. This function does not perform full parsing of the format string and arguments. Instead, it just calculates the number of arguments passed and sends them along with the format string address to the host. On the host, log data is processed and printed out by a special Python script.
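To illustrate the host-side idea, the hypothetical sketch below renders a record from a format string (already looked up by its address in the ELF file) and its raw argument bytes. It assumes every argument occupies a little-endian 4-byte word, in line with the library's 4-byte argument limitation; the real processing is done by ``logtrace_proc.py``, and this simplified version handles only integer conversion specifiers.

```python
import re
import struct

def render_log(fmt: str, raw_args: bytes) -> str:
    """Render a log record from a format string and packed 4-byte args.

    Hypothetical simplification: every printf argument is assumed to be
    a little-endian 32-bit word, and only %d/%i/%u/%x/%X specifiers
    are handled.
    """
    words = iter(struct.unpack('<%dI' % (len(raw_args) // 4), raw_args))

    def substitute(match):
        spec = match.group(0)
        value = next(words)
        if spec[-1] in 'di':
            # Reinterpret the raw word as a signed 32-bit integer.
            value = struct.unpack('<i', struct.pack('<I', value))[0]
        return spec % value

    return re.sub(r'%[-+ #0]?[0-9]*[diuxX]', substitute, fmt)
```

Because formatting happens entirely on the host, the target only pays for copying the format string address and a few words per log call.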
Limitations
@@ -279,18 +281,18 @@ Limitations
Current implementation of logging over JTAG has some limitations:
1. Tracing from ``ESP_EARLY_LOGx`` macros is not supported.
2. No support for printf arguments which size exceeds 4 bytes (e.g. ``double`` and ``uint64_t``).
3. Only strings from .rodata section are supported as format strings and arguments.
4. Maximum number of printf arguments is 256.
1. No support for tracing from ``ESP_EARLY_LOGx`` macros.
2. No support for printf arguments whose size exceeds 4 bytes (e.g., ``double`` and ``uint64_t``).
3. Only strings from the .rodata section are supported as format strings and arguments.
4. The maximum number of printf arguments is 256.
How To Use It
"""""""""""""
In order to use logging via trace module user needs to perform the following steps:
In order to use logging via the trace module, users need to perform the following steps:
1. On target side special vprintf-like function needs to be installed. As it was mentioned earlier this function is ``esp_apptrace_vprintf``. It sends log data to the host. Example code is provided in :example:`system/app_trace_to_host`.
1. On the target side, the special vprintf-like function ``esp_apptrace_vprintf`` needs to be installed. It sends log data to the host. Example code is provided in :example:`system/app_trace_to_host`.
2. Follow instructions in items 2-5 in `Application Specific Tracing`_.
3. To print out collected log records, run the following command in terminal: ``$IDF_PATH/tools/esp_app_trace/logtrace_proc.py /path/to/trace/file /path/to/program/elf/file``.
@@ -305,36 +307,34 @@ Command usage:
Positional arguments:
``trace_file``
Path to log trace file
Path to log trace file.
``elf_file``
Path to program ELF file
Path to program ELF file.
Optional arguments:
``-h``, ``--help``
show this help message and exit
Show this help message and exit.
``--no-errors``, ``-n``
Do not print errors
Do not print errors.
.. _app_trace-system-behaviour-analysis-with-segger-systemview:
System Behavior Analysis with SEGGER SystemView
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Another useful IDF feature built on top of application tracing library is the system level tracing which produces traces
compatible with SEGGER SystemView tool (see `SystemView <https://www.segger.com/products/development-tools/systemview/>`_).
SEGGER SystemView is a real-time recording and visualization tool that allows to analyze runtime behavior of an application.
It is possible to view events in real-time through the UART interface.
Another useful ESP-IDF feature built on top of the application tracing library is system level tracing, which produces traces compatible with the SEGGER SystemView tool (see `SystemView <https://www.segger.com/products/development-tools/systemview/>`_). SEGGER SystemView is a real-time recording and visualization tool that allows analyzing the runtime behavior of an application. It is possible to view events in real time through the UART interface.
How To Use It
"""""""""""""
Support for this feature is enabled by *Component config > Application Level Tracing > FreeRTOS SystemView Tracing* (:ref:`CONFIG_APPTRACE_SV_ENABLE`) menuconfig option. There are several other options enabled under the same menu:
Support for this feature is enabled by ``Component config`` > ``Application Level Tracing`` > ``FreeRTOS SystemView Tracing`` (:ref:`CONFIG_APPTRACE_SV_ENABLE`) menuconfig option. There are several other options enabled under the same menu:
1. SystemView destination. Select the destination interface: JTAG or UART. In case of UART, it will be possible to connect the SystemView application to the {IDF_TARGET_NAME} directly and receive data in real time.
2. {IDF_TARGET_NAME} timer to use as SystemView timestamp source: (:ref:`CONFIG_APPTRACE_SV_TS_SOURCE`) selects the source of timestamps for SystemView events. In the single-core mode, timestamps are generated using the {IDF_TARGET_NAME} internal cycle counter running at a maximum of 240 MHz (~4 ns granularity). In the dual-core mode, an external timer working at 40 MHz is used, so the timestamp granularity is 25 ns.
1. SystemView destination. Select the destination interface: JTAG or UART. In case of UART
it will be possible to connect SystemView application to the {IDF_TARGET_NAME} directly and receive data in real-time.
2. {IDF_TARGET_NAME} timer to use as SystemView timestamp source: (:ref:`CONFIG_APPTRACE_SV_TS_SOURCE`) selects the source of timestamps for SystemView events. In single core mode timestamps are generated using {IDF_TARGET_NAME} internal cycle counter running at maximum 240 Mhz (~4 ns granularity). In dual-core mode external timer working at 40 Mhz is used, so timestamp granularity is 25 ns.
3. Individually enabled or disabled collection of SystemView events (``CONFIG_APPTRACE_SV_EVT_XXX``):
- Trace Buffer Overflow Event
@ -351,9 +351,9 @@ it will be possible to connect SystemView application to the {IDF_TARGET_NAME} d
- Timer Enter Event
- Timer Exit Event
IDF has all the code required to produce SystemView compatible traces, so user can just configure necessary project options (see above), build, download the image to target and use OpenOCD to collect data as described in the previous sections.
ESP-IDF has all the code required to produce SystemView compatible traces, so users can just configure necessary project options (see above), build, download the image to target, and use OpenOCD to collect data as described in the previous sections.
4. Select Pro or App CPU in menuconfig options *Component config > Application Level Tracing > FreeRTOS SystemView Tracing* to trace over UART interface in real-time.
4. Select Pro or App CPU in menuconfig options ``Component config`` > ``Application Level Tracing`` > ``FreeRTOS SystemView Tracing`` to trace over the UART interface in real-time.
OpenOCD SystemView Tracing Command Options
@ -381,27 +381,27 @@ Start command syntax:
``outfile2``
Path to file to save data from APP CPU. This argument should have the following format: ``file://path/to/file``.
``poll_period``
Data polling period (in ms) for available trace data. If greater then 0 then command runs in non-blocking mode. By default 1 ms.
Data polling period (in ms) for available trace data. If greater than 0, then command runs in non-blocking mode. By default, 1 ms.
``trace_size``
Maximum size of data to collect (in bytes). Tracing is stopped after specified amount of data is received. By default -1 (trace size stop trigger is disabled).
Maximum size of data to collect (in bytes). Tracing is stopped after specified amount of data is received. By default, -1 (trace size stop trigger is disabled).
``stop_tmo``
Idle timeout (in sec). Tracing is stopped if there is no data for specified period of time. By default -1 (disable this stop trigger).
Idle timeout (in sec). Tracing is stopped if there is no data for specified period of time. By default, -1 (disable this stop trigger).
.. note::
If ``poll_period`` is 0 OpenOCD telnet command line will not be available until tracing is stopped. You must stop it manually by resetting the board or pressing Ctrl+C in OpenOCD window (not one with the telnet session). Another option is to set ``trace_size`` and wait until this size of data is collected. At this point tracing stops automatically.
If ``poll_period`` is 0, OpenOCD telnet command line will not be available until tracing is stopped. You must stop it manually by resetting the board or pressing Ctrl+C in the OpenOCD window (not the one with the telnet session). Another option is to set ``trace_size`` and wait until this size of data is collected. At this point, tracing stops automatically.
Command usage examples:
.. highlight:: none
1. Collect SystemView tracing data to files "pro-cpu.SVDat" and "app-cpu.SVDat". The files will be saved in "openocd-esp32" directory.
1. Collect SystemView tracing data to files ``pro-cpu.SVDat`` and ``app-cpu.SVDat``. The files will be saved in ``openocd-esp32`` directory.
::
esp sysview start file://pro-cpu.SVDat file://app-cpu.SVDat
The tracing data will be retrieved and saved in non-blocking mode. To stop data this process enter ``esp sysview stop`` command on OpenOCD telnet prompt, optionally pressing Ctrl+C in OpenOCD window.
The tracing data will be retrieved and saved in non-blocking mode. To stop this process, enter ``esp sysview stop`` command on OpenOCD telnet prompt, optionally pressing Ctrl+C in the OpenOCD window.
2. Retrieve tracing data and save them indefinitely.
@ -409,49 +409,46 @@ Command usage examples:
esp sysview start file://pro-cpu.SVDat file://app-cpu.SVDat 0 -1 -1
OpenOCD telnet command line prompt will not be available until tracing is stopped. To stop tracing, press Ctrl+C in OpenOCD window.
OpenOCD telnet command line prompt will not be available until tracing is stopped. To stop tracing, press Ctrl+C in the OpenOCD window.
Data Visualization
""""""""""""""""""
After trace data are collected user can use special tool to visualize the results and inspect behavior of the program.
After trace data are collected, users can use a special tool to visualize the results and inspect behavior of the program.
.. only:: not CONFIG_FREERTOS_UNICORE
Unfortunately SystemView does not support tracing from multiple cores. So when tracing from {IDF_TARGET_NAME} working with JTAG in dual-core mode two files are
generated: one for PRO CPU and another one for APP CPU. User can load every file into separate instance of the tool. For tracing over UART, user can select in
menuconfig Pro or App *Component config > Application Level Tracing > FreeRTOS SystemView Tracing* with CPU has to be traced.
Unfortunately, SystemView does not support tracing from multiple cores. So when tracing from {IDF_TARGET_NAME} over JTAG in the dual-core mode, two files are generated: one for PRO CPU and another for APP CPU. Users can load each file into a separate instance of the tool. For tracing over UART, users can choose which CPU is to be traced (Pro or App) in menuconfig under ``Component config`` > ``Application Level Tracing`` > ``FreeRTOS SystemView Tracing``.
It is uneasy and awkward to analyze data for every core in separate instance of the tool. Fortunately there is Eclipse plugin called *Impulse* which can load several trace files and makes it possible to inspect events from both cores in one view. Also this plugin has no limitation of 1,000,000 events as compared to free version of SystemView.
It is inconvenient to analyze data for each core in a separate instance of the tool. Fortunately, there is an Eclipse plugin called *Impulse* which can load several trace files, thus making it possible to inspect events from both cores in one view. Also, this plugin does not have the 1,000,000-event limitation of the free version of SystemView.
Good instruction on how to install, configure and visualize data in Impulse from one core can be found `here <https://mcuoneclipse.com/2016/07/31/impulse-segger-systemview-in-eclipse/>`_.
Good instructions on how to install, configure, and visualize data in Impulse from one core can be found `here <https://mcuoneclipse.com/2016/07/31/impulse-segger-systemview-in-eclipse/>`_.
.. note::
IDF uses its own mapping for SystemView FreeRTOS events IDs, so user needs to replace original file with mapping ``$SYSVIEW_INSTALL_DIR/Description/SYSVIEW_FreeRTOS.txt`` with ``$IDF_PATH/docs/api-guides/SYSVIEW_FreeRTOS.txt``.
Also contents of that IDF specific file should be used when configuring SystemView serializer using above link.
ESP-IDF uses its own mapping for SystemView FreeRTOS event IDs, so users need to replace the original mapping file ``$SYSVIEW_INSTALL_DIR/Description/SYSVIEW_FreeRTOS.txt`` with ``$IDF_PATH/docs/api-guides/SYSVIEW_FreeRTOS.txt``. Also, the contents of that IDF-specific file should be used when configuring the SystemView serializer as described at the link above.
.. only:: not CONFIG_FREERTOS_UNICORE
Configure Impulse for Dual Core Traces
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After installing Impulse and ensuring that it can successfully load trace files for each core in separate tabs users can add special Multi Adapter port and load both files into one view. To do this, users need to do the following in Eclipse:
After installing Impulse and ensuring that it can successfully load trace files for each core in separate tabs, users can add a special Multi Adapter port and load both files into one view. To do this, users need to take the following steps in Eclipse:
1. Open 'Signal Ports' view. Go to Windows->Show View->Other menu. Find 'Signal Ports' view in Impulse folder and double-click on it.
2. In 'Signal Ports' view right-click on 'Ports' and select 'Add ...'->New Multi Adapter Port
3. In open dialog Press 'Add' button and select 'New Pipe/File'.
4. In open dialog select 'SystemView Serializer' as Serializer and set path to PRO CPU trace file. Press OK.
5. Repeat steps 3-4 for APP CPU trace file.
6. Double-click on created port. View for this port should open.
7. Click Start/Stop Streaming button. Data should be loaded.
8. Use 'Zoom Out', 'Zoom In' and 'Zoom Fit' button to inspect data.
9. For settings measurement cursors and other features please see `Impulse documentation <https://toem.de/index.php/projects/impulse>`_).
1. Open the ``Signal Ports`` view. Go to the ``Window`` > ``Show View`` > ``Other`` menu. Find the ``Signal Ports`` view in the Impulse folder and double-click it.
2. In the ``Signal Ports`` view, right-click ``Ports`` and select ``Add`` > ``New Multi Adapter Port``.
3. In the open dialog box, click ``Add`` and select ``New Pipe/File``.
4. In the open dialog box, select ``SystemView Serializer`` as Serializer and set path to PRO CPU trace file. Click ``OK``.
5. Repeat steps 3-4 for the APP CPU trace file.
6. Double-click the created port. A view for this port should open.
7. Click the ``Start/Stop Streaming`` button. Data should be loaded.
8. Use the ``Zoom Out``, ``Zoom In`` and ``Zoom Fit`` buttons to inspect data.
9. For setting measurement cursors and using other features, please see the `Impulse documentation <https://toem.de/index.php/projects/impulse>`_.
.. note::
If you have problems with visualization (no data are shown or strange behavior of zoom action is observed) you can try to delete current signal hierarchy and double click on the necessary file or port. Eclipse will ask you to create new signal hierarchy.
If you have problems with visualization (e.g., no data is shown, or zooming behaves strangely), you can try deleting the current signal hierarchy and double-clicking the necessary file or port. Eclipse will then ask you to create a new signal hierarchy.
.. _app_trace-gcov-source-code-coverage:
@ -462,20 +459,20 @@ Gcov (Source Code Coverage)
Basics of Gcov and Gcovr
""""""""""""""""""""""""
Source code coverage is data indicating the count and frequency of every program execution path that has been taken within a program's runtime. `Gcov <https://en.wikipedia.org/wiki/Gcov>`_ is a GCC tool that, when used in concert with the compiler, can generate log files indicating the execution count of each line of a source file. The `Gcovr <https://gcovr.com>`_ tool is utility for managing Gcov and generating summarized code coverage results.
Source code coverage is data indicating the count and frequency of every program execution path that has been taken within a program's runtime. `Gcov <https://en.wikipedia.org/wiki/Gcov>`_ is a GCC tool that, when used in concert with the compiler, can generate log files indicating the execution count of each line of a source file. The `Gcovr <https://gcovr.com>`_ tool is a utility for managing Gcov and generating summarized code coverage results.
Generally, using Gcov to compile and run programs on the Host will undergo these steps:
Generally, the Gcov workflow for compiling and running programs on the host involves the following steps:
1. Compile the source code using GCC with the ``--coverage`` option enabled. This will cause the compiler to generate a ``.gcno`` notes file during compilation. The notes files contain information to reconstruct execution path block graphs and map each block to source code line numbers. Each source file compiled with the ``--coverage`` option has its own ``.gcno`` file of the same name (e.g., a ``main.c`` will generate a ``main.gcno`` when compiled).
2. Execute the program. During execution, the program should generate ``.gcda`` data files. These data files contain the counts of the number of times an execution path was taken. The program will generate a ``.gcda`` file for each source file compiled with the ``--coverage`` option (e.g., ``main.c`` will generate a ``main.gcda``.
2. Execute the program. During execution, the program should generate ``.gcda`` data files. These data files contain the counts of the number of times an execution path was taken. The program will generate a ``.gcda`` file for each source file compiled with the ``--coverage`` option (e.g., ``main.c`` will generate a ``main.gcda``).
3. Gcov or Gcovr can be used generate a code coverage based on the ``.gcno``, ``.gcda``, and source files. Gcov will generate a text based coverage report for each source file in the form of a ``.gcov`` file, whilst Gcovr will generate a coverage report in HTML format.
3. Gcov or Gcovr can be used to generate a code coverage report based on the ``.gcno``, ``.gcda``, and source files. Gcov will generate a text-based coverage report for each source file in the form of a ``.gcov`` file, whilst Gcovr will generate a coverage report in HTML format.
Gcov and Gcovr in ESP-IDF
"""""""""""""""""""""""""""
Using Gcov in ESP-IDF is complicated by the fact that the program is running remotely from the Host (i.e., on the target). The code coverage data (i.e., the ``.gcda`` files) is initially stored on the target itself. OpenOCD is then used to dump the code coverage data from the target to the host via JTAG during runtime. Using Gcov in ESP-IDF can be split into the following steps.
Using Gcov in ESP-IDF is complicated by the fact that the program runs remotely from the host (i.e., on the target). The code coverage data (i.e., the ``.gcda`` files) is initially stored on the target itself. OpenOCD is then used to dump the code coverage data from the target to the host via JTAG during runtime. Using Gcov in ESP-IDF can be split into the following steps.
1. :ref:`app_trace-gcov-setup-project`
2. :ref:`app_trace-gcov-dumping-data`
@ -492,30 +489,30 @@ Compiler Option
In order to obtain code coverage data in a project, one or more source files within the project must be compiled with the ``--coverage`` option. In ESP-IDF, this can be achieved at the component level or the individual source file level:
- To cause all source files in a component to be compiled with the ``--coverage`` option, you can add ``target_compile_options(${COMPONENT_LIB} PRIVATE --coverage)`` to the ``CMakeLists.txt`` file of the component.
- To cause a select number of source files (e.g. ``sourec1.c`` and ``source2.c``) in the same component to be compiled with the ``--coverage`` option, you can add ``set_source_files_properties(source1.c source2.c PROPERTIES COMPILE_FLAGS --coverage)`` to the ``CMakeLists.txt`` file of the component.
- To cause a select number of source files (e.g., ``source1.c`` and ``source2.c``) in the same component to be compiled with the ``--coverage`` option, you can add ``set_source_files_properties(source1.c source2.c PROPERTIES COMPILE_FLAGS --coverage)`` to the ``CMakeLists.txt`` file of the component.
When a source file is compiled with the ``--coverage`` option (e.g. ``gcov_example.c``), the compiler will generate the ``gcov_example.gcno`` file in the project's build directory.
When a source file is compiled with the ``--coverage`` option (e.g., ``gcov_example.c``), the compiler will generate the ``gcov_example.gcno`` file in the project's build directory.
Project Configuration
~~~~~~~~~~~~~~~~~~~~~
Before building a project with source code coverage, ensure that the following project configuration options are enabled by running ``idf.py menuconfig``.
Before building a project with source code coverage, make sure that the following project configuration options are enabled by running ``idf.py menuconfig``.
- Enable the application tracing module by choosing *Trace Memory* for the :ref:`CONFIG_APPTRACE_DESTINATION1` option.
- Enable Gcov to host via the :ref:`CONFIG_APPTRACE_GCOV_ENABLE`
- Enable the application tracing module by selecting ``Trace Memory`` for the :ref:`CONFIG_APPTRACE_DESTINATION1` option.
- Enable Gcov to the host via the :ref:`CONFIG_APPTRACE_GCOV_ENABLE` option.
.. _app_trace-gcov-dumping-data:
Dumping Code Coverage Data
""""""""""""""""""""""""""
Once a project has been complied with the ``--coverage`` option and flashed onto the target, code coverage data will be stored internally on the target (i.e., in trace memory) whilst the application runs. The process of transferring code coverage data from the target to the Host is know as dumping.
Once a project has been compiled with the ``--coverage`` option and flashed onto the target, code coverage data will be stored internally on the target (i.e., in trace memory) whilst the application runs. The process of transferring code coverage data from the target to the host is known as dumping.
The dumping of coverage data is done via OpenOCD (see :doc:`JTAG Debugging <../api-guides/jtag-debugging/index>` on how to set up and run OpenOCD). A dump is triggered by issuing commands to OpenOCD, therefore a telnet session to OpenOCD must be opened to issue such commands (run ``telnet localhost 4444``). Note that GDB could be used instead of telnet to issue commands to OpenOCD; however, all commands issued from GDB will need to be prefixed as ``mon <oocd_command>``.
When the target dumps code coverage data, the ``.gcda`` files are stored in the project's build directory. For example, if ``gcov_example_main.c`` of the ``main`` component was compiled with the ``--coverage`` option, then dumping the code coverage data would generate a ``gcov_example_main.gcda`` in ``build/esp-idf/main/CMakeFiles/__idf_main.dir/gcov_example_main.c.gcda``. Note that the ``.gcno`` files produced during compilation are also placed in the same directory.
When the target dumps code coverage data, the ``.gcda`` files are stored in the project's build directory. For example, if ``gcov_example_main.c`` of the ``main`` component is compiled with the ``--coverage`` option, then dumping the code coverage data would generate a ``gcov_example_main.gcda`` in ``build/esp-idf/main/CMakeFiles/__idf_main.dir/gcov_example_main.c.gcda``. Note that the ``.gcno`` files produced during compilation are also placed in the same directory.
The dumping of code coverage data can be done multiple times throughout an application's life time. Each dump will simply update the ``.gcda`` file with the newest code coverage information. Code coverage data is accumulative, thus the newest data will contain the total execution count of each code path over the application's entire lifetime.
The dumping of code coverage data can be done multiple times throughout an application's lifetime. Each dump will simply update the ``.gcda`` file with the newest code coverage information. Code coverage data is accumulative, thus the newest data will contain the total execution count of each code path over the application's entire lifetime.
ESP-IDF supports two methods of dumping code coverage data from the target to the host:
@ -525,18 +522,18 @@ ESP-IDF supports two methods of dumping code coverage data form the target to th
Instant Run-Time Dump
~~~~~~~~~~~~~~~~~~~~~
An Instant Run-Time Dump is triggered by calling the ``{IDF_TARGET_NAME} gcov`` OpenOCD command (via a telnet session). Once called, OpenOCD will immediately preempt the {IDF_TARGET_NAME}'s current state and execute a builtin IDF Gcov debug stub function. The debug stub function will handle the dumping of data to the Host. Upon completion, the {IDF_TARGET_NAME} will resume it's current state.
An Instant Run-Time Dump is triggered by calling the ``{IDF_TARGET_NAME} gcov`` OpenOCD command (via a telnet session). Once called, OpenOCD will immediately preempt the {IDF_TARGET_NAME}'s current state and execute a built-in ESP-IDF Gcov debug stub function. The debug stub function will handle the dumping of data to the host. Upon completion, the {IDF_TARGET_NAME} will resume its current state.
Hard-coded Dump
~~~~~~~~~~~~~~~
A Hard-coded Dump is triggered by the application itself by calling :cpp:func:`esp_gcov_dump` from somewhere within the application. When called, the application will halt and wait for OpenOCD to connect and retrieve the code coverage data. Once :cpp:func:`esp_gcov_dump` is called, the Host must execute the ``esp gcov dump`` OpenOCD command (via a telnet session). The ``esp gcov dump`` command will cause OpenOCD to connect to the {IDF_TARGET_NAME}, retrieve the code coverage data, then disconnect from the {IDF_TARGET_NAME} thus allowing the application to resume. Hard-coded Dumps can also be triggered multiple times throughout an application's lifetime.
A Hard-coded Dump is triggered by the application itself by calling :cpp:func:`esp_gcov_dump` from somewhere within the application. When called, the application will halt and wait for OpenOCD to connect and retrieve the code coverage data. Once :cpp:func:`esp_gcov_dump` is called, the host must execute the ``esp gcov dump`` OpenOCD command (via a telnet session). The ``esp gcov dump`` command will cause OpenOCD to connect to the {IDF_TARGET_NAME}, retrieve the code coverage data, then disconnect from the {IDF_TARGET_NAME}, thus allowing the application to resume. Hard-coded Dumps can also be triggered multiple times throughout an application's lifetime.
Hard-coded dumps are useful if code coverage data is required at certain points of an application's lifetime by placing :cpp:func:`esp_gcov_dump` where necessary (e.g., after application initialization, during each iteration of an application's main loop).
GDB can be used to set a breakpoint on :cpp:func:`esp_gcov_dump`, then call ``mon esp gcov dump`` automatically via the use a ``gdbinit`` script (see Using GDB from :ref:`jtag-debugging-using-debugger-command-line`).
GDB can be used to set a breakpoint on :cpp:func:`esp_gcov_dump`, then call ``mon esp gcov dump`` automatically via the use of a ``gdbinit`` script (see Using GDB from :ref:`jtag-debugging-using-debugger-command-line`).
The following GDB script is will add a breakpoint at :cpp:func:`esp_gcov_dump`, then call the ``mon esp gcov dump`` OpenOCD command.
The following GDB script will add a breakpoint at :cpp:func:`esp_gcov_dump`, then call the ``mon esp gcov dump`` OpenOCD command.
.. code-block:: none
@ -561,7 +558,7 @@ Both Gcov and Gcovr can be used to generate code coverage reports. Gcov is provi
Adding Gcovr Build Target to Project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To make report generation more convenient, users can define additional build targets in their projects such report generation can be done with a single build command.
To make report generation more convenient, users can define additional build targets in their projects such that the report generation can be done with a single build command.
Add the following lines to the ``CMakeLists.txt`` file of your project.


@ -5,21 +5,20 @@
概述
----
为了分析应用程序的行为IDF 提供了一个有用的功能:应用层跟踪。这个功能以库的形式提供,可以通过 menuconfig 开启。此功能使得用户可以在程序运行开销很小的前提下,通过 JTAG 接口在主机和 {IDF_TARGET_NAME} 之间传输任意数据。
ESP-IDF 中提供了应用层跟踪功能,用于分析应用程序的行为。这一功能在相应的库中实现,可以通过 menuconfig 开启。此功能允许用户在程序运行开销很小的前提下,通过 JTAG、UART 或 USB 接口在主机和 {IDF_TARGET_NAME} 之间传输任意数据。用户也可同时使用 JTAG 和 UART 接口。UART 接口主要用于连接 SEGGER SystemView 工具(参见 `SystemView <https://www.segger.com/products/development-tools/systemview/>`_)。
开发人员可以使用这功能库将应用程序的运行状态发送给主机,在运行时接收来自主机的命令或者其他类型的信息。该库的主要使用场景有:
开发人员可以使用这功能库将应用程序的运行状态发送给主机,在运行时接收来自主机的命令或者其他类型的信息。该库的主要使用场景有:
1. 收集应用程序特定的数据,具体请参阅 :ref:`app_trace-application-specific-tracing`
2. 记录到主机的轻量级日志具体请参阅 :ref:`app_trace-logging-to-host`
3. 系统行为分析具体请参阅 :ref:`app_trace-system-behaviour-analysis-with-segger-systemview`
4. 源代码覆盖率,具体请参阅 :ref:`app_trace-gcov-source-code-coverage`
1. 收集来自特定应用程序的数据。具体请参阅 :ref:`app_trace-application-specific-tracing`
2. 记录到主机的轻量级日志具体请参阅 :ref:`app_trace-logging-to-host`
3. 系统行为分析具体请参阅 :ref:`app_trace-system-behaviour-analysis-with-segger-systemview`
4. 获取源代码覆盖率。具体请参阅 :ref:`app_trace-gcov-source-code-coverage`
使用 JTAG 接口的跟踪组件工作示意图:
使用 JTAG 接口的跟踪组件工作示意图如下所示
.. figure:: ../../_static/app_trace-overview.jpg
:align: center
:alt: Tracing Components when Working Over JTAG
:figclass: align-center
使用 JTAG 接口的跟踪组件
@ -27,21 +26,21 @@
运行模式
--------
该库支持两种操作模式:
该库支持两种运行模式:
**后验模式:** 这是默认的模式,该模式不需要和主机进行交互。在这种模式下,跟踪模块不会检查主机是否已经从 *HW UP BUFFER* 缓冲区读走所有数据,而是直接使用新数据覆盖旧数据。该模式在用户仅对最新的跟踪数据感兴趣时会很有用,例如分析程序在崩溃之前的行为。主机可以稍后根据用户的请求来读取数据,例如通过特殊的 OpenOCD 命令(假如使用了 JTAG 接口)
**后验模式:** 后验模式为默认模式,该模式不需要和主机进行交互。在这种模式下,跟踪模块不会检查主机是否已经从 *HW UP BUFFER* 缓冲区读走所有数据,而是直接使用新数据覆盖旧数据。如果用户仅对最新的跟踪数据感兴趣,例如想要分析程序在崩溃之前的行为,则推荐使用该模式。主机可以稍后根据用户的请求来读取数据,例如在使用 JTAG 接口的情况下,通过特殊的 OpenOCD 命令进行读取
**流模式:** 当主机连接到 {IDF_TARGET_NAME} 时,跟踪模块会进入此模式。在这种模式下,跟踪模块在新数据写入 *HW UP BUFFER* 之前会检查其中是否有足够的空间,并在必要的时候等待主机读取数据并释放足够的内存。用户会将最长的等待时间作为超时时间参数传递给相应的 API 函数,如果超时时间是个有限值,那么应用程序有可能会因为超时而将待写的数据丢弃。尤其需要注意,如果在讲究时效的代码中(如中断处理函数,操作系统调度等)指定了无限的超时时间,那么系统会产生故障。为了避免丢失此类关键数据,开发人员可以通过在 menuconfig 中开启 :ref:`CONFIG_APPTRACE_PENDING_DATA_SIZE_MAX` 选项启用额外的数据缓冲区。此宏还指定了在上述条件下可以缓冲的数据大小,它有助于缓解由于 USB 总线拥塞等原因导致的向主机传输数据间歇性减缓的状况。但是,当跟踪数据流的平均比特率超过硬件接口的能力时,它也无能为力
**流模式:** 当主机连接到 {IDF_TARGET_NAME} 时,跟踪模块会进入此模式。在这种模式下,跟踪模块在新数据写入 *HW UP BUFFER* 之前会检查其中是否有足够的空间,并在必要的时候等待主机读取数据并释放足够的内存。最大等待时间是由用户传递给相应 API 函数的超时时间参数决定的。因此当应用程序尝试使用有限的最大等待时间值来将数据写入跟踪缓冲区时,这些数据可能会被丢弃。尤其需要注意的是,如果在对时效要求严格的代码中(如中断处理函数、操作系统调度等)指定了无限的超时时间,将会导致系统故障。为了避免丢失此类关键数据,开发人员可以在 menuconfig 中开启 :ref:`CONFIG_APPTRACE_PENDING_DATA_SIZE_MAX` 选项,以启用额外的数据缓冲区。此宏还指定了在上述条件下可以缓冲的数据大小,它有助于缓解由于 USB 总线拥塞等原因导致的向主机传输数据间歇性减缓的状况。但是,当跟踪数据流的平均比特率超出硬件接口的能力时,该选项无法发挥作用
配置选项与依赖项
----------------
使用此功能需要在主机端和目标端做相应的配置:
使用此功能需要在主机端和目标端进行以下配置:
1. **主机端:** 应用程序跟踪通过 JTAG 来完成,因此需要在主机上安装并运行 OpenOCD。相关详细信息请参阅 :doc:`JTAG Debugging <../api-guides/jtag-debugging/index>`。
1. **主机端:** 应用程序跟踪通过 JTAG 来完成,因此需要在主机上安装并运行 OpenOCD。详细信息请参阅 :doc:`JTAG 调试 <../api-guides/jtag-debugging/index>`。
2. **目标端:** 在 menuconfig 中开启应用程序跟踪功能。 *Component config > Application Level Tracing* 菜单允许选择跟踪数据的传输目标(具体用于传输的硬件接口),选择任一非 None 的目标都会自动开启 ``CONFIG_APPTRACE_ENABLE`` 这个选项。
2. **目标端:** 在 menuconfig 中开启应用程序跟踪功能。前往 ``Component config`` > ``Application Level Tracing`` 菜单,选择跟踪数据的传输目标(具体用于传输的硬件接口JTAG 和/或 UART),选择任一非 None 的目标都会自动开启 ``CONFIG_APPTRACE_ENABLE`` 这个选项。对于 UART 接口用户必须定义波特率、TX 和 RX 管脚及其他相关参数。
.. note::
@ -49,24 +48,27 @@
以下为前述未提及的另外两个 menuconfig 选项:
1. *Threshold for flushing last trace data to host on panic* :ref:`CONFIG_APPTRACE_POSTMORTEM_FLUSH_THRESH`)。由于在 JTAG 上工作的性质,此选项是必选项。在该模式下,跟踪数据以 16 KB 数据块的形式曝露给主机。在后验模式中,当一个块被填充时,它会曝露给主机,而之前的块会变得不可用。换句话说,跟踪数据以 16 KB 的粒度进行覆盖。在发生 panic 的时候,当前输入块的最新数据将会被曝露给主机,主机可以读取它们以进行后续分析。如果系统发生 panic 的时候仍有少量数据还没来得及曝光给主机,那么之前收集的 16 KB 的数据将丢失,主机只能看到非常少的最新的跟踪部分,它可能不足以用来诊断问题所在。此 menuconfig 选项允许避免此类情况,它可以控制在发生 panic 时刷新数据的阈值,例如用户可以确定它需要不少于 512 字节的最新跟踪数据,所以如果在发生 panic 时待处理的数据少于 512 字节,它们不会被刷新,也不会覆盖之前的 16 KB。该选项仅在后验模式和 JTAG 工作时有意义
1. *Threshold for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_POSTMORTEM_FLUSH_THRESH`)。使用 JTAG 接口时,此选项是必选项。在该模式下,跟踪数据以 16 KB 数据块的形式暴露给主机。在后验模式中,一个块被填充后会被暴露给主机,同时之前的块不再可用。也就是说,跟踪数据以 16 KB 的粒度进行覆盖。发生 Panic 时,当前输入块的最新数据将会被暴露给主机,主机可以读取数据以进行后续分析。如果系统发生 Panic 时,仍有少量数据还没来得及暴露给主机,那么之前收集的 16 KB 数据将丢失,主机只能获取少部分的最新跟踪数据,从而可能无法诊断问题。此 menuconfig 选项有助于避免此类情况,它可以控制发生 Panic 时刷新数据的阈值。例如,用户可以设置需要不少于 512 字节的最新跟踪数据,如果在发生 Panic 时待处理的数据少于 512 字节,则数据不会被刷新,也不会覆盖之前的 16 KB 数据。该选项仅在后验模式和使用 JTAG 工作时可发挥作用
2. *Timeout for flushing last trace data to host on panic* :ref:`CONFIG_APPTRACE_ONPANIC_HOST_FLUSH_TMO`)。该选项仅在流模式下才起作用,它控制跟踪模块在发生 panic 时等待主机读取最新数据的最长时间。
2. *Timeout for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_ONPANIC_HOST_FLUSH_TMO`)。该选项仅在流模式下才可发挥作用,它可用于控制跟踪模块在发生 Panic 时等待主机读取最新数据的最长时间。
3. *UART RX/TX ring buffer size* (:ref:`CONFIG_APPTRACE_UART_TX_BUFF_SIZE`)。缓冲区的大小取决于通过 UART 传输的数据量。
如何使用这个库
4. *UART TX message size* (:ref:`CONFIG_APPTRACE_UART_TX_MSG_SIZE`)。要传输的单条消息的最大尺寸。
如何使用此库
--------------
该库提供了用于在主机和 {IDF_TARGET_NAME} 之间传输任意数据的 API。当在 menuconfig 中启用时,目标应用程序的跟踪模块会在系统启动时自动初始化,因此用户需要做的就是调用相应的 API 来发送、接收或者刷新数据。
该库提供了用于在主机和 {IDF_TARGET_NAME} 之间传输任意数据的 API。在 menuconfig 中启用该库后,目标应用程序的跟踪模块会在系统启动时自动初始化。因此,用户需要做的就是调用相应的 API 来发送、接收或者刷新数据。
.. _app_trace-application-specific-tracing:
特定应用程序的跟踪
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
通常,用户需要决定在每个方向上待传输数据的类型以及如何解析(处理)这些数据。要想在目标和主机之间传输数据,用户必须要执行以下几个步骤。
通常,用户需要决定在每个方向上待传输数据的类型以及如何解析(处理)这些数据。要想在目标和主机之间传输数据,则需执行以下几个步骤:
1. 在目标端,用户需要实现将跟踪数据写入主机的算法下面的代码片段展示了如何执行此操作。
1. 在目标端,用户需要实现将跟踪数据写入主机的算法下面的代码片段展示了如何执行此操作。
.. code-block:: c
@ -122,7 +124,7 @@
...
}
``esp_apptrace_read()`` 函数使用 memcpy 把主机端的数据复制到用户缓存区。在某些情况下,使用 ``esp_apptrace_down_buffer_get()````esp_apptrace_down_buffer_put()`` 函数可能更为理想。它们允许开发人员占用一块读缓冲区并就地进行有关处理操作。下面的代码片段展示了如何执行此操作。
``esp_apptrace_read()`` 函数使用 memcpy 把主机端的数据复制到用户缓冲区。在某些情况下,使用 ``esp_apptrace_down_buffer_get()`` 和 ``esp_apptrace_down_buffer_put()`` 函数可能更为理想。它们允许开发人员占用一块读缓冲区并就地进行有关处理操作。下面的代码片段展示了如何执行此操作。
.. code-block:: c
@ -152,24 +154,29 @@
return res;
}
2. 下一步是编译应用程序的镜像并将其下载到目标板上,这一步可以参考文档 :ref:`构建并烧写 <get-started-build>`
2. 下一步是编译应用程序的镜像,并将其下载到目标板上。这一步可以参考文档 :ref:`构建并烧写 <get-started-build>`
3. 运行 OpenOCD参见 :doc:`JTAG 调试 <../api-guides/jtag-debugging/index>`)。
4. 连接到 OpenOCD 的 telnet 服务器,在终端执行如下命令 ``telnet <oocd_host> 4444``。如果在运行 OpenOCD 的同一台机器上打开
telnet 会话,您可以使用 ``localhost`` 替换上面命令中的 ``<oocd_host>``
5. 使用特殊的 OpenOCD 命令开始收集待跟踪的命令,此命令将传输跟踪数据并将其重定向到指定的文件或套接字(当前仅支持文件作为跟踪数据目标)。相关命令的说明请参阅 :ref:`jtag-debugging-launching-debugger`
6. 最后一步是处理接收到的数据,由于数据格式由用户定义,因此处理阶段超出了本文档的范围。数据处理的范例可以参考位于 ``$IDF_PATH/tools/esp_app_trace`` 下的 Python 脚本 ``apptrace_proc.py`` (用于功能测试)和 ``logtrace_proc.py`` (请参阅 :ref:`app_trace-logging-to-host` 章节中的详细信息)。
4. 连接到 OpenOCD 的 telnet 服务器。用户可在终端执行命令 ``telnet <oocd_host> 4444``。如果用户是在运行 OpenOCD 的同一台机器上打开 telnet 会话,可以使用 ``localhost`` 替换上面命令中的 ``<oocd_host>``
5. 使用特殊的 OpenOCD 命令开始收集待跟踪的命令。此命令将传输跟踪数据并将其重定向到指定的文件或套接字(当前仅支持文件作为跟踪数据目标)。相关命令的说明,请参阅 :ref:`jtag-debugging-launching-debugger`
6. 最后,处理接收到的数据。由于数据格式由用户自己定义,本文档中省略数据处理的具体流程。数据处理的范例可以参考位于 ``$IDF_PATH/tools/esp_app_trace`` 下的 Python 脚本 ``apptrace_proc.py`` (用于功能测试)和 ``logtrace_proc.py`` (请参阅 :ref:`app_trace-logging-to-host` 章节中的详细信息)。
OpenOCD 应用程序跟踪命令
""""""""""""""""""""""""""""""
*HW UP BUFFER* 在用户数据块之间共享,并且会替 API 调用者(在任务或者中断上下文中)填充分配到的内存。在多线程环境中,正在填充缓冲区的任务/中断可能会被另一个高优先级的任务/中断抢占,有可能发生主机读取还未准备好的用户数据的情况。为了处理这样的情况跟踪模块在所有用户数据块之前添加一个数据头其中包含有分配的用户缓冲区的大小2 字节和实际写入的数据长度2 字节),也就是说数据头总共长 4 字节。负责读取跟踪数据的 OpenOCD 命令在读取到不完整的用户数据块时会报错,但是无论如何它都会将整个用户数据块(包括还未填充的区域)的内容放到输出文件中。
*HW UP BUFFER* 在用户数据块之间共享,并且会替 API 调用者(在任务或者中断上下文中)填充分配到的内存。在多线程环境中,正在填充缓冲区的任务/中断可能会被另一个高优先级的任务/中断抢占,因此主机可能会读取到还未准备好的用户数据。对此跟踪模块在所有用户数据块之前添加一个数据头其中包含有分配的用户缓冲区的大小2 字节和实际写入的数据长度2 字节),也就是说数据头总共长 4 字节。负责读取跟踪数据的 OpenOCD 命令在读取到不完整的用户数据块时会报错,但是无论如何它都会将整个用户数据块(包括还未填充的区域)的内容放到输出文件中。
面是 OpenOCD 应用程序跟踪命令的使用说明
文介绍了如何使用 OpenOCD 应用程序跟踪命令
.. note::
目前,OpenOCD 还不支持将任意用户数据发送到目标的命令。
命令用法:
``esp apptrace [start <options>] | [stop] | [status] | [dump <cores_num> <outfile>]``
``poll_period``
轮询跟踪数据的周期(单位:毫秒),如果大于 0 则以非阻塞模式运行。默认为 1 毫秒。
``trace_size``
最多要收集的数据量(单位:字节),接收到指定数量的数据后将会停止跟踪。默认 -1禁用跟踪大小停止触发器
``stop_tmo``
空闲超时(单位:秒),如果指定的时间段内都没有数据就会停止跟踪。默认为 -1禁用跟踪超时停止触发器。还可以将其设置为大于目标端两次跟踪命令之间最长间隔的值可选
``wait4halt``
如果设置为 0 则立即开始跟踪,否则命令会先等待目标停止(复位、打断点等),然后对其进行自动恢复并开始跟踪。默认值为 0。
``skip_size``
开始时要跳过的字节数,默认为 0。
.. note::
如果 ``poll_period`` 为 0则在跟踪停止之前OpenOCD 的 telnet 命令将不可用。必须通过复位电路板或者在 OpenOCD 的窗口中(非 telnet 会话窗口)使用快捷键 Ctrl+C。另一种选择是设置 ``trace_size`` 并等待,当收集到指定数据量时,跟踪会自动停止。
命令使用示例:
.. highlight:: none
1. 将 2048 个字节的跟踪数据收集到 ``trace.log`` 文件中,该文件将保存在 ``openocd-esp32`` 目录中。
::
.. note::
在将数据提供给 OpenOCD 之前,会对其进行缓冲。如果看到 “Data timeout!” 的消息,则表示目标可能在超时之前没有向 OpenOCD 发送足够的数据以清空缓冲区。要解决这个问题,可以增加超时时间或者使用函数 ``esp_apptrace_flush()`` 以特定间隔刷新数据。
2. 在非阻塞模式下无限地检索跟踪数据。
esp apptrace start file://trace.log 1 -1 -1 0 0
对收集数据的大小没有限制,也不设置超时时间。要停止此过程,可以在 OpenOCD 的 telnet 会话窗口中发送 ``esp apptrace stop`` 命令,或者在 OpenOCD 窗口中使用快捷键 Ctrl+C。
3. 检索跟踪数据并无限期保存。
记录日志到主机
^^^^^^^^^^^^^^
记录日志到主机是 ESP-IDF 中一个非常实用的功能:通过应用层跟踪库将日志保存到主机端。某种程度上,这也算是一种半主机 (semihosting) 机制,相较于调用 ``ESP_LOGx`` 将待打印的字符串发送到 UART 的日志记录方式,此功能将大部分工作转移到了主机端,从而减少了本地工作量
ESP-IDF 的日志库会默认使用类 vprintf 的函数将格式化的字符串输出到专用的 UART,一般来说涉及以下几个步骤:
1. 解析格式字符串以获取每个参数的类型。
2. 根据其类型,将每个参数都转换为字符串。
3. 格式字符串与转换后的参数一起发送到 UART。
虽然可以对类 vprintf 函数进行一定程度的优化,但由于在任何情况下都必须执行上述步骤,并且每个步骤都会消耗一定的时间(尤其是步骤 3所以经常会发生以下这种情况:向程序中添加额外的打印信息以诊断问题,却改变了应用程序的行为,使得问题无法复现。在最严重的情况下,程序无法正常工作,最终导致报错甚至挂起。
想要解决此类问题,可以使用更高的波特率或者其他更快的接口,并将字符串格式化的工作转移到主机端。
通过应用层跟踪库的 ``esp_apptrace_vprintf`` 函数,可以将日志信息发送到主机,该函数不执行格式字符串和参数的完全解析,而仅仅计算传递参数的数量,并将它们与格式字符串地址一起发送给主机。主机端会通过一个特殊的 Python 脚本来处理并打印接收到的日志数据。
局限
1. 不支持使用 ``ESP_EARLY_LOGx`` 宏进行跟踪。
2. 不支持大小超过 4 字节的 printf 参数(例如 ``double````uint64_t``)。
3. 仅支持 .rodata 段中的格式字符串和参数。
4. printf 参数最多 256 个
4. 最多支持 256 个 printf 参数。
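针对上述第 2 条限制,一种常见的绕过方法是把 64 位值拆成高、低两个 32 位值分别作为参数传入(以下为示意写法,并非 IDF 提供的接口):

.. code-block:: c

    #include <stdint.h>

    /* 示意:把 64 位值拆成两个 32 位参数,
       以满足"参数不超过 4 字节"的限制 */
    static uint32_t hi32(uint64_t v) { return (uint32_t)(v >> 32); }
    static uint32_t lo32(uint64_t v) { return (uint32_t)(v & 0xFFFFFFFFu); }

使用时可写作(示意):``ESP_LOGI(TAG, "ts=0x%08x%08x", hi32(ts), lo32(ts));``,由主机端拼回完整的 64 位值。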
如何使用
为了使用跟踪模块来记录日志,用户需要执行以下步骤:
1. 在目标端,需要安装特殊的类 vprintf 函数 ``esp_apptrace_vprintf``该函数负责将日志数据发送给主机。示例代码参见 :example:`system/app_trace_to_host`
2. 按照 :ref:`app_trace-application-specific-tracing` 章节中第 2-5 步进行操作。
3. 打印接收到的日志记录,请在终端运行以下命令:``$IDF_PATH/tools/esp_app_trace/logtrace_proc.py /path/to/trace/file /path/to/program/elf/file``
位置参数(必要):
``trace_file``
日志跟踪文件的路径
``elf_file``
程序 ELF 文件的路径
可选参数:
``-h``, ``--help``
显示此帮助信息并退出
``--no-errors``, ``-n``
不打印错误信息
.. _app_trace-system-behaviour-analysis-with-segger-systemview:
基于 SEGGER SystemView 的系统行为分析
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ESP-IDF 中另一个基于应用层跟踪库的实用功能是系统级跟踪,它会生成与 `SEGGER SystemView 工具 <https://www.segger.com/products/development-tools/systemview/>`_ 相兼容的跟踪信息。SEGGER SystemView 是一款实时记录和可视化工具,用来分析应用程序运行时的行为,可通过 UART 接口实时查看事件。
如何使用
""""""""
若需使用这个功能,需要在 menuconfig 中开启 :ref:`CONFIG_APPTRACE_SV_ENABLE` 选项,具体路径为 ``Component config`` > ``Application Level Tracing`` > ``FreeRTOS SystemView Tracing``。同一菜单下还有其它几个相关选项:
1. *SytemView destination*。选择需要使用的接口JTAG 或 UART。使用 UART 接口时,可以将 SystemView 应用程序直接连接到 {IDF_TARGET_NAME} 并实时接收数据。
2. *{IDF_TARGET_NAME} timer to use as SystemView timestamp source* :ref:`CONFIG_APPTRACE_SV_TS_SOURCE`)。选择 SystemView 事件使用的时间戳来源。在单核模式下,使用 {IDF_TARGET_NAME} 内部的循环计数器生成时间戳,其最大的工作频率是 240 MHz时间戳粒度大约为 4 ns。在双核模式下使用工作在 40 MHz 的外部定时器,因此时间戳粒度为 25 ns。
3. 可以单独启用或禁用的 SystemView 事件集合(``CONFIG_APPTRACE_SV_EVT_XXX``)
- Trace Buffer Overflow Event
- ISR Enter Event
- Timer Enter Event
- Timer Exit Event
4. 想要通过 UART 接口进行实时跟踪,请在菜单配置选项 ``Component config`` > ``Application Level Tracing`` > ``FreeRTOS SystemView Tracing`` 中选择 Pro 或 App CPU。

ESP-IDF 中已经包含了所有用于生成兼容 SystemView 跟踪信息的代码,用户只需配置必要的项目选项(如上所示),然后构建、烧写映像到目标板,接着参照前面的介绍,使用 OpenOCD 收集数据。
OpenOCD SystemView 跟踪命令选项
``start <outfile1> [outfile2] [poll_period [trace_size [stop_tmo]]]``
``outfile1``
保存 PRO CPU 数据的文件路径此参数需要具有如下格式:``file://path/to/file``
保存 PRO CPU 数据的文件路径此参数需要具有如下格式:``file://path/to/file``
``outfile2``
保存 APP CPU 数据的文件路径此参数需要具有如下格式:``file://path/to/file``
保存 APP CPU 数据的文件路径此参数需要具有如下格式:``file://path/to/file``
``poll_period``
跟踪数据的轮询周期(单位:毫秒)。如果该值大于 0则命令以非阻塞的模式运行。默认为 1 毫秒。
``trace_size``
.. note::
如果 ``poll_period`` 为 0则在跟踪停止之前OpenOCD 的 telnet 命令行将不可用。您需要复位板卡或者在 OpenOCD 的窗口(非 telnet 会话窗口)输入 Ctrl+C 命令来手动停止跟踪。另一个办法是设置 ``trace_size``等到收集满指定数量的数据后自动停止跟踪。
命令使用示例:
.. highlight:: none
1. 将 SystemView 跟踪数据收集到文件 ``pro-cpu.SVDat`` 和 ``app-cpu.SVDat`` 中。这些文件会被保存在 ``openocd-esp32`` 目录中。
::
esp sysview start file://pro-cpu.SVDat file://app-cpu.SVDat
跟踪数据被检索并以非阻塞的方式保存要停止此过程,需要在 OpenOCD 的 telnet 会话窗口输入 ``esp sysview stop`` 命令,或者也可以在 OpenOCD 窗口中按下 Ctrl+C。
跟踪数据被检索并以非阻塞的方式保存要停止此过程,需要在 OpenOCD 的 telnet 会话窗口输入 ``esp sysview stop`` 命令,也可以在 OpenOCD 窗口中按下快捷键 Ctrl+C。
2. 检索跟踪数据并无限保存。
esp sysview start file://pro-cpu.SVDat file://app-cpu.SVDat 0 -1 -1
OpenOCD 的 telnet 命令行在跟踪停止前会无法使用,要停止跟踪,请在 OpenOCD 窗口使用 Ctrl+C 快捷键
数据可视化
""""""""""
收集到跟踪数据后,用户可以使用特殊的工具对结果进行可视化并分析程序行为。
.. only:: not CONFIG_FREERTOS_UNICORE
遗憾的是SystemView 不支持从多个核心进行跟踪。所以当使用 JTAG 追踪双核模式下的 {IDF_TARGET_NAME} 时会生成两个文件:一个用于 PRO CPU另一个用于 APP CPU。用户可以将每个文件加载到工具中单独分析。使用 UART 进行追踪时,用户可以在 menuconfig 中点击 ``Component config`` > ``Application Level Tracing`` > ``FreeRTOS SystemView Tracing``,选择要追踪的 CPUPro 或 App
在工具中单独分析每个核的跟踪数据是比较棘手的,但是 Eclipse 提供了 *Impulse* 插件,该插件可以加载多个跟踪文件,并且可以在同一视图中检查来自两个内核的事件。此外,与免费版的 SystemView 相比,此插件没有 1,000,000 个事件的限制。
关于如何安装、配置 Impulse 并使用它可视化来自单个核心的跟踪数据,请参阅 `官方教程 <https://mcuoneclipse.com/2016/07/31/impulse-segger-systemview-in-eclipse/>`_
关于如何安装、配置 Impulse 并使用它可视化来自单个核心的跟踪数据,请参阅 `官方教程 <https://mcuoneclipse.com/2016/07/31/impulse-segger-systemview-in-eclipse/>`_
.. note::
ESP-IDF 使用自己的 SystemView FreeRTOS 事件 ID 映射,因此用户需要将 ``$SYSVIEW_INSTALL_DIR/Description/SYSVIEW_FreeRTOS.txt`` 替换成 ``$IDF_PATH/docs/api-guides/SYSVIEW_FreeRTOS.txt``。在使用上述链接配置 SystemView 序列化程序时,也应该使用该特定文件的内容。
.. only:: not CONFIG_FREERTOS_UNICORE
在安装好 Impulse 插件并确保 Impulse 能够在单独的选项卡中成功加载每个核心的跟踪文件后,用户可以添加特殊的 Multi Adapter 端口并将这两个文件加载到一个视图中。为此,用户需要在 Eclipse 中执行以下操作:
1. 打开 ``Signal Ports`` 视图,前往 ``Windows`` > ``Show View`` > ``Other`` 菜单,在 Impulse 文件夹中找到 ``Signal Ports`` 视图并双击
2. 在 ``Signal Ports`` 视图中,右键 ``Ports`` 并选择 ``Add``,然后选择 ``New Multi Adapter Port``
3. 在打开的对话框中按下 ``add`` 按钮,选择 ``New Pipe/File``
4. 在打开的对话框中选择 ``SystemView Serializer`` 并设置 PRO CPU 跟踪文件的路径,按下 ``OK`` 保存设置。
5. 对 APP CPU 的跟踪文件重复步骤 3 和 4。
6. 双击创建的端口,会打开此端口的视图。
7. 单击 ``Start/Stop Streaming`` 按钮,数据将会被加载。
8. 使用 ``Zoom Out````Zoom In````Zoom Fit`` 按钮来查看数据。
9. 有关设置测量光标和其他的功能,请参阅 `Impulse 官方文档 <http://toem.de/index.php/projects/impulse>`_
.. note::
Gcov 和 Gcovr 简介
""""""""""""""""""""""""
源代码覆盖率是指程序运行期间,每条程序执行路径被执行的次数和频率。`Gcov <https://en.wikipedia.org/wiki/Gcov>`_ 是一个 GCC 工具,与编译器协同使用时,可生成日志文件,显示源文件每行的执行次数。`Gcovr <https://gcovr.com>`_ 是管理 Gcov 并生成代码覆盖率总结的工具。
一般来说,使用 Gcov 在主机上编译和运行程序会经过以下步骤:
1. 使用 GCC 以及 ``--coverage`` 选项编译源代码。编译器在编译过程中生成一个 ``.gcno`` 注释文件,该文件包含重建执行路径块图以及将每个块映射到源代码行号等信息。每个用 ``--coverage`` 选项编译的源文件都会生成自己的同名 ``.gcno`` 文件(如 ``main.c`` 在编译时会生成 ``main.gcno``)。
2. 执行程序。在执行过程中,程序会生成 ``.gcda`` 数据文件。这些数据文件包含了执行路径的次数统计。程序将为每个用 ``--coverage`` 选项编译的源文件生成一个 ``.gcda`` 文件(如 ``main.c`` 将生成 ``main.gcda``)。
3. Gcov 或 Gcovr 可用于生成基于 ``.gcno````.gcda`` 和源文件的代码覆盖。Gcov 将以 ``.gcov`` 文件的形式为每个源文件生成基于文本的覆盖报告,而 Gcovr 将以 HTML 格式生成覆盖报告。
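以下用一个最小的示例函数说明上述流程(文件名 ``demo.c`` 与函数名均为示意):用 ``gcc --coverage demo.c`` 编译会得到 ``demo.gcno``,运行程序后得到 ``demo.gcda``,再执行 ``gcov demo.c`` 即可得到带逐行执行计数的 ``demo.c.gcov`` 报告。

.. code-block:: c

    /* 示意demo.c 中的被测函数。
       gcov 会分别统计 if 的两个分支各被执行了多少次。 */
    static int classify(int x)
    {
        if (x > 0) {
            return 1;   /* 正数分支 */
        }
        return 0;       /* 非正数分支 */
    }
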
ESP-IDF 中 Gcov 和 Gcovr 应用
"""""""""""""""""""""""""""""""""
在 ESP-IDF 中使用 Gcov 的过程比较复杂,因为程序不在主机上运行,而在目标机上运行。代码覆盖率数据(即 ``.gcda`` 文件)最初存储在目标机上OpenOCD 在运行时通过 JTAG 将代码覆盖数据从目标机转储到主机上。在 ESP-IDF 中使用 Gcov 可以分为以下几个步骤:
1. :ref:`app_trace-gcov-setup-project`
2. :ref:`app_trace-gcov-dumping-data`
编译器选项
~~~~~~~~~~~~~~~
为了获取项目中的代码覆盖率数据,必须用 ``--coverage`` 选项编译项目中的一个或多个源文件。在 ESP-IDF 中,这可以在组件级或单个源文件级实现:
- 在组件的 ``CMakeLists.txt`` 文件中添加 ``target_compile_options(${COMPONENT_LIB} PRIVATE --coverage)`` 可确保使用 ``--coverage`` 选项编译组件中的所有源文件。
- 在组件的 ``CMakeLists.txt`` 文件中添加 ``set_source_files_properties(source1.c source2.c PROPERTIES COMPILE_FLAGS --coverage)`` 可确保使用 ``--coverage`` 选项编译同一组件中选定的一些源文件(如 ``source1.c````source2.c``)。
当一个源文件用 ``--coverage`` 选项编译时(例如 ``gcov_example.c``),编译器会在项目的构建目录下生成 ``gcov_example.gcno`` 文件。
项目配置
~~~~~~~~~~~~~~~~~
在构建有源代码覆盖的项目之前,请运行 ``idf.py menuconfig`` 启用以下项目配置选项。
- 通过 :ref:`CONFIG_APPTRACE_DESTINATION1` 选项选择 ``Trace Memory`` 来启用应用程序跟踪模块。
- 通过 :ref:`CONFIG_APPTRACE_GCOV_ENABLE` 选项启用 Gcov 主机。
.. _app_trace-gcov-dumping-data:
转储代码覆盖数据
""""""""""""""""""""""""""
一旦项目使用 ``--coverage`` 选项编译并烧录到目标机上,在应用程序运行时,代码覆盖数据将存储在目标机内部(即在跟踪存储器中)。将代码覆盖率数据从目标机转移到主机上的过程称为转储。
覆盖率数据的转储通过 OpenOCD 进行(关于如何设置和运行 OpenOCD请参考 :doc:`JTAG 调试 <../api-guides/jtag-debugging/index>`)。由于该过程需要通过向 OpenOCD 发出命令来触发转储,因此必须打开 telnet 会话,以向 OpenOCD 发出这些命令(运行 ``telnet localhost 4444``。GDB 也可以代替 telnet 来向 OpenOCD 发出命令,但是所有从 GDB 发出的命令都需要以 ``mon <oocd_command>`` 为前缀。
当目标机转储代码覆盖数据时,``.gcda`` 文件存储在项目的构建目录中。例如,如果 ``main`` 组件的 ``gcov_example_main.c`` 在编译时使用了 ``--coverage`` 选项,那么转储代码覆盖数据将在 ``build/esp-idf/main/CMakeFiles/__idf_main.dir/gcov_example_main.c.gcda`` 中生成 ``gcov_example_main.gcda`` 文件。注意,编译过程中产生的 ``.gcno`` 文件也放在同一目录下。
代码覆盖数据的转储可以在应用程序的整个生命周期内多次进行。每次转储都会用最新的代码覆盖信息更新 ``.gcda`` 文件。代码覆盖数据是累积的,因此最新的数据将包含应用程序整个生命周期中每个代码路径的总执行次数。
运行中实时转储
~~~~~~~~~~~~~~~~~~~~~
通过 telnet 会话调用 OpenOCD 命令 ``{IDF_TARGET_NAME} gcov`` 来触发运行时的实时转储。一旦被调用OpenOCD 将立即抢占 {IDF_TARGET_NAME} 的当前状态,并执行内置的 ESP-IDF Gcov 调试存根函数。调试存根函数将数据转储到主机。完成后,{IDF_TARGET_NAME} 将恢复当前状态。
硬编码转储
~~~~~~~~~~~~~~~
硬编码转储是由应用程序本身从程序内部调用 :cpp:func:`esp_gcov_dump` 函数触发的。在调用时,应用程序将停止运行,等待 OpenOCD 连接并检索代码覆盖数据。一旦 :cpp:func:`esp_gcov_dump` 函数被调用,主机将通过 telnet 会话执行 ``esp gcov dump`` OpenOCD 命令,该命令会让 OpenOCD 连接到 {IDF_TARGET_NAME},检索代码覆盖数据,然后断开与 {IDF_TARGET_NAME} 的连接,从而恢复应用程序的运行。硬编码转储可以在应用程序的生命周期中多次触发。
在必要时(如应用程序初始化后或是应用程序主循环的每次迭代期间)放置 :cpp:func:`esp_gcov_dump`,当应用程序在生命周期的某刻需要代码覆盖率数据时,硬编码转储会非常有用。
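下面给出一个在主循环中按固定迭代间隔触发转储的简化示意。为了能在主机上独立演示,这里用桩函数代替了真实的 :cpp:func:`esp_gcov_dump`(实际工程中应包含 ``esp_app_trace.h`` 并直接调用该函数):

.. code-block:: c

    /* 桩函数:仅为示意;实际由 esp_app_trace.h 提供 esp_gcov_dump() */
    static int g_dump_calls = 0;
    static void esp_gcov_dump_stub(void) { g_dump_calls++; }

    /* 示意:每完成 dump_every 次主循环迭代就转储一次覆盖率数据 */
    static void app_main_loop(int iterations, int dump_every)
    {
        for (int i = 1; i <= iterations; i++) {
            /* ... 应用程序的正常工作 ... */
            if (i % dump_every == 0) {
                esp_gcov_dump_stub();   /* 真实代码中调用 esp_gcov_dump() */
            }
        }
    }
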
GDB 可以用来在 :cpp:func:`esp_gcov_dump` 上设置断点,然后使用 ``gdbinit`` 脚本自动调用 ``mon esp gcov dump`` (关于 GDB 的使用可参考 :ref:`jtag-debugging-using-debugger-command-line`
以下 GDB 脚本将在 :cpp:func:`esp_gcov_dump` 处添加一个断点,然后调用 ``mon esp gcov dump`` OpenOCD 命令。
.. note::
注意所有的 OpenOCD 命令都应该在 GDB 中以 ``mon <oocd_command>`` 方式调用。
.. _app_trace-gcov-generate-report:
一旦代码覆盖数据被转储,``.gcno````.gcda`` 和源文件可以用来生成代码覆盖报告。该报告会显示源文件中每行被执行的次数。
Gcov 和 Gcovr 都可以用来生成代码覆盖报告。安装 Xtensa 工具链时会一起安装 Gcov但 Gcovr 可能需要单独安装。关于如何使用 Gcov 或 Gcovr请参考 `Gcov 文档 <https://gcc.gnu.org/onlinedocs/gcc/Gcov.html>`_`Gcovr 文档 <http://gcovr.com/>`_
在工程中添加 Gcovr 构建目标
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
用户可以在自己的工程中定义额外的构建目标从而通过一个简单的构建命令即可更方便地生成报告。
请在您工程的 ``CMakeLists.txt`` 文件中添加以下内容:
idf_create_coverage_report(${CMAKE_CURRENT_BINARY_DIR}/coverage_report)
idf_clean_coverage_report(${CMAKE_CURRENT_BINARY_DIR}/coverage_report)
可使用以下命令:
可使用以下命令:
* ``cmake --build build/ --target gcovr-report``:在 ``$(BUILD_DIR_BASE)/coverage_report/html`` 目录下生成 HTML 格式代码覆盖报告。
* ``cmake --build build/ --target cov-data-clean``:删除所有代码覆盖数据文件。