Enhance the wait_queue benchmark to output its summary metrics as records, so that when it runs under Twister the results are parsed and saved into the twister.json and recording.csv files for further analysis. Also includes minor documentation edits and ClangFormat fixes.

Signed-off-by: Dmitrii Golovanov <dmitrii.golovanov@intel.com>
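As a rough illustration of the recording idea (a minimal sketch, not the benchmark's actual code): a record is simply a console line printed in a fixed, machine-parsable format, which Twister's record harness can match with a regular expression and collect into recording.csv and twister.json. The record prefix and field names below are hypothetical; the real format must match the record regex configured for the test.

    #include <stdint.h>
    #include <zephyr/sys/printk.h>

    /* Emit one summary metric as a record line. The exact format must
     * match the record regex configured for the test in its testcase.yaml;
     * "REC:", "metric" and "avg_ns" here are illustrative only.
     */
    static void report_metric(const char *metric, uint64_t avg_ns)
    {
            printk("REC: metric=%s, avg_ns=%llu\n",
                   metric, (unsigned long long)avg_ns);
    }

The Kconfig options below control the benchmark's behavior.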
# Copyright (c) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

mainmenu "Wait Queue Benchmark"

source "Kconfig.zephyr"

config BENCHMARK_NUM_ITERATIONS
	int "Number of iterations to gather data"
	default 1000
	help
	  This option specifies the number of times each test will be executed
	  before calculating the average times for reporting.
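
# A rough sketch (illustrative only, not the benchmark's actual code) of
# how this option is typically consumed on the C side; time_one_iteration()
# is a hypothetical helper standing in for the operation under test:
#
#   uint64_t total_ns = 0;
#
#   for (int i = 0; i < CONFIG_BENCHMARK_NUM_ITERATIONS; i++) {
#           total_ns += time_one_iteration();
#   }
#
#   uint64_t avg_ns = total_ns / CONFIG_BENCHMARK_NUM_ITERATIONS;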

config BENCHMARK_NUM_THREADS
	int "Number of threads"
	default 100
	help
	  This option specifies the maximum number of threads that the test
	  will add to a wait queue. Increasing this value will place greater
	  stress on the wait queues and better highlight the performance
	  differences as the number of threads in the wait queue changes.
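
# Example: raise the thread count from the application's prj.conf to put
# more pressure on the wait queues (the value below is arbitrary):
#
#   CONFIG_BENCHMARK_NUM_THREADS=250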

config BENCHMARK_RECORDING
	bool "Log statistics as records"
	default n
	help
	  Log summary statistics as records to pass results
	  to the Twister JSON report and recording.csv file(s).
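
# Example: enable recording from prj.conf (or via a Twister extra_configs
# entry) so that the emitted record lines are parsed into twister.json and
# recording.csv:
#
#   CONFIG_BENCHMARK_RECORDING=y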

config BENCHMARK_VERBOSE
	bool "Display detailed results"
	default n
	help
	  This option displays the average time of all the iterations done for
	  each thread in the tests. This generates large amounts of output. To
	  analyze it, it is recommended to redirect or copy the data to a file.
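
# Example: enable verbose per-thread output from prj.conf and capture the
# console log in a file for offline analysis, as the help text suggests:
#
#   CONFIG_BENCHMARK_VERBOSE=y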