CONFIG_COVERAGE has been incorrectly used to change other Kconfig
options (stack sizes, etc.), code defaults, and the behaviour of some
samples, none of which should have depended on it.
Instead those should have depended on COVERAGE_GCOV which, being the
option that adds special code and temporary RAM storage for embedded
targets, requires changes to many features.
When building for the native targets, all of this was unnecessary.
=> Fix the dependency.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
A new Z_SPIN_DELAY() macro has been added which can be used to reduce
a bit of the noise caused by the POSIX arch's need to break busy loops
with k_busy_wait().
Use it.
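For illustration, a rough sketch of the pattern (the macro body shown
here is only an assumption about its shape, and the flag/loop are
hypothetical, not taken from a specific test):

    #include <stdbool.h>
    #include <zephyr/kernel.h>

    /* Hypothetically, the helper could be a no-op on real hardware and
     * a k_busy_wait() on the POSIX arch, so that simulated time
     * advances and the busy loop can terminate.
     */
    #ifdef CONFIG_ARCH_POSIX
    #define Z_SPIN_DELAY(usec) k_busy_wait(usec)
    #else
    #define Z_SPIN_DELAY(usec) do { } while (0)
    #endif

    static volatile bool ready;    /* set by another thread */

    static void wait_until_ready(void)
    {
            while (!ready) {
                    Z_SPIN_DELAY(50);
            }
    }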
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
As of today <zephyr/zephyr.h> is 100% equivalent to <zephyr/kernel.h>.
This patch therefore includes <zephyr/kernel.h> instead of
<zephyr/zephyr.h>, since it is clearer that you are including the
Kernel APIs and (probably) nothing else. <zephyr/zephyr.h> sounds like
a catch-all header, which may be confusing. Most applications need to
include a bunch of other things to compile anyway, e.g. driver headers
or subsystem headers like BT, logging, etc.
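For example, a typical source file would change along these lines (the
logging header is just an illustration of the "other things" a file
pulls in explicitly):

    /* Before */
    #include <zephyr/zephyr.h>

    /* After: the kernel API, plus whatever the file actually needs */
    #include <zephyr/kernel.h>
    #include <zephyr/logging/log.h>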
The idea of a catch-all header in Zephyr is probably not feasible
anyway. The reason is that Zephyr is not a library like, for example,
`libpython`. Zephyr provides many utilities nowadays: a kernel,
drivers, subsystems, etc., and things will likely keep growing. A
catch-all header would be massive and difficult to keep up to date. It
is also likely that an application will only build a small subset.
Note that subsystem-level headers may use a catch-all approach to make
things easier, though.
NOTE: This patch is **NOT** removing the header, just removing its usage
in-tree. I'd advocate for its deprecation (add a #warning on it), but I
understand many people will have concerns.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The global variable thread_idx should be properly initialized for
test_slice_scheduling. This issue was found when running the test
repeatedly with the new ztest framework.
Signed-off-by: Ming Shao <ming.shao@intel.com>
Add a bunch of missing "zephyr/" prefixes to #include statements in
various test and test framework files.
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
This commit updates all deprecated `K_THREAD_STACK_ARRAY_EXTERN` macro
usages to use the `K_THREAD_STACK_ARRAY_DECLARE` macro instead.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit updates all deprecated `K_THREAD_STACK_EXTERN` macro usages
to use the `K_THREAD_STACK_DECLARE` macro instead.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
When running tests/kernel/sched/schedule_api, it shows "ASSERTION
FAIL: timeslice in ticks much be divisible by two", then breaks and
fails the test.
Fixes #44887.
So that the it8xxx2_evb board can pass the schedule_api test without
hitting the assertion, add the fractional part back to half_slice_cyc
when the timeslice in ticks can't be divided evenly by two.
After the change, the time slice will be:
1. slice_ticks = (200 * 8192 + 999) / 1000 = 1639 (unchanged)
2. Before adding the deviation (fractional part not handled):
   half_slice_cyc = (1639/2) * (32768/8192) = 819 * (32768/8192) = 3276
   After adding the deviation:
   half_slice_cyc = 3276 + (32768/8192/2) = 3278,
   which matches the exact value (1639/2) * (32768/8192) = 819.5 * 4
   = 3278.
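A minimal sketch of the adjustment (SLICE_SIZE stands for the test's
slice duration in ms, 200 in the numbers above; the in-tree expression
may differ):

    uint32_t slice_ticks = k_ms_to_ticks_ceil32(SLICE_SIZE);
    uint32_t half_slice_cyc = k_ticks_to_cyc_floor32(slice_ticks / 2);

    if (slice_ticks % 2 != 0) {
            /* Add back the half tick lost to the integer division,
             * e.g. 3276 + 32768/8192/2 = 3278.
             */
            half_slice_cyc += k_ticks_to_cyc_floor32(1) / 2;
    }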
Verified by test pattern:
west build -p always -b it8xxx2_evb tests/kernel/sched/schedule_api
Signed-off-by: Ruibin Chang <Ruibin.Chang@ite.com.tw>
In order to bring consistency in-tree, migrate all tests to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to #45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Simple coverage exerciser for the per-thread timeslice feature. Added
as a(nother) new variant of the schedule_api test.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The stack array tstacks was declared in the header file using the
same macro that defines the stack array, just with an added "extern"
in front. That macro adds alignment and section attributes which are
not actually the same as those of the stack array defined in main.c:
the section name used in the section attribute contains the file name
where the stack array is defined or extern declared. So the same
symbol, in this case z_interrupt_stacks, has different attributes in
two places, and GCC 11 starts to complain about this. Use the newly
introduced macro to extern declare the stack array without adding or
replacing any symbol attributes.
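A rough sketch of the before/after pattern (the array name and sizes
are illustrative; the extern-declare macro referred to here was later
replaced by K_THREAD_STACK_ARRAY_DECLARE, per the commit above):

    /* Before: reusing the defining macro with "extern" re-applies
     * alignment/section attributes that don't match the definition
     * in main.c.
     */
    extern K_THREAD_STACK_ARRAY_DEFINE(tstacks, MAX_NUM_THREAD,
                                       STACK_SIZE);

    /* After: declare the symbol without attaching any attributes. */
    K_THREAD_STACK_ARRAY_DECLARE(tstacks, MAX_NUM_THREAD, STACK_SIZE);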
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When calculating the expected interval for threads other than the
first one, the test uses ms->ticks->cycles conversion to figure out
the bounds of permitted cycles. Both the lower and upper bound
conversions were using k_*_to_*_floor32(). When the numbers involved
are not wholly divisible, the fractional part is truncated, resulting
in incorrect intervals and thus failing tests. So change the
calculation to use floor() or ceil() as appropriate for each bound.
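A hedged sketch of the kind of bound computation meant here (variable
names are illustrative, not the actual test code):

    /* Round the lower bound down and the upper bound up, so that a
     * non-divisible ms->cycles conversion can't shrink the window.
     */
    uint32_t lower_cyc = k_ms_to_cyc_floor32(expected_ms);
    uint32_t upper_cyc = k_ms_to_cyc_ceil32(expected_ms + tolerance_ms);

    zassert_true(delta_cyc >= lower_cyc && delta_cyc <= upper_cyc,
                 "interval out of bounds");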
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Increment the thread counter before the assert; otherwise, in case of
failure, the output will give the impression that the same thread ran
more than once.
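Roughly, the change amounts to the following ordering (a sketch with
illustrative names, not the literal diff):

    /* Count the run first, so a failing assert can't make the log
     * look like the same thread was scheduled twice.
     */
    thread_run_count++;
    zassert_equal(tid, expected_tid, "unexpected thread scheduled");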
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Adjusting the tick alignment of this test caused it to start breaking
on nRF5 platforms, which use a 32768 Hz clock that doesn't divide
evenly into the millisecond precision used by the test. The "half
slice" math ended up being wrong by a bit.
Convert to ticks first before computing the cycle delay needed.
Fixes #29705
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This test is a little subtle: it wants to spawn three threads to run
and be switched out by a timeslice interrupt. And it wants to consume
half a time slice itself before it starts running. And, because
timeslicing runs out of the same tick framework in the timer driver,
it needs to align to the start of a tick before the process starts.
And further: it does its own time math not in ticks but in timer
cycles, so it's quite sensitive to slop.
But its "synchronize to tick boundary" code was actually
synchronizing to a CYCLE boundary, which is just wrong. And it was
doing this in the wrong order: it was resetting the timeslice first
and then synchronizing to a tick by spinning, which means the test was
always going to begin as much as a tick late. Do the tick
synchronization (via a sleep) first.
Finally, the manager thread that was spawning the new threads lives
at the same priority as the highest priority child threads, which
means it can potentially wake up on the semaphores they are giving in
the middle of the test and consume CPU unexpectedly. Make sure it's
sleeping for the duration.
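A hedged sketch of the reordered setup (the kernel calls are real
APIs; the constants are illustrative):

    /* Align to a tick boundary first (a sleep wakes on a tick), then
     * reset the slice accounting; the other way around, the test can
     * start up to a full tick late.
     */
    k_sleep(K_TICKS(1));
    k_sched_time_slice_set(SLICE_SIZE_MS, CHILD_PRIORITY);

    /* Keep the manager thread out of the way while the children
     * round-robin on the timeslice.
     */
    k_sleep(K_MSEC(TEST_DURATION_MS));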
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
I reviewed that test to find a bug's root cause; unfortunately the
bug disappeared, so there is nothing to fix, but I noticed several
misprints and wrong comment styles. It's something, at least.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
Add a test case for preemptive thread scheduling. The scheduler
selects the highest-priority, longest-waiting thread to be the
current thread.
Signed-off-by: Ying ming <mingx.ying@intel.com>
Kernel timeouts have always been a 32 bit integer despite the
existence of generation macros, and existing code has been
inconsistent about using them. Upcoming commits are going to make the
timeout arguments opaque, so fix things up to be rigorously correct.
Changes include:
+ Adding a K_TIMEOUT_EQ() macro for code that needs to compare timeout
values for equality (e.g. with K_FOREVER or K_NO_WAIT).
+ Adding a k_msleep() synonym for k_sleep() which can continue to take
integral arguments as k_sleep() moves away to timeout arguments.
+ Pervasively using the K_MSEC(), K_SECONDS(), et al. macros to
generate timeout arguments.
+ Removing the usage of K_NO_WAIT as the final argument to
K_THREAD_DEFINE(). This is just a count of milliseconds and we need
to use a zero.
This patch includes no logic changes and should not affect generated
code at all.
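For illustration, sketches of the patterns above (the worker name,
stack size and priority are hypothetical):

    /* Integral millisecond sleeps: */
    k_msleep(500);                    /* or k_sleep(K_MSEC(500)) */

    /* Comparing timeout values: */
    if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
            return -EAGAIN;
    }

    /* K_THREAD_DEFINE()'s final argument is a millisecond count, so a
     * plain 0 replaces K_NO_WAIT there:
     */
    K_THREAD_DEFINE(worker, WORKER_STACK_SIZE, worker_fn,
                    NULL, NULL, NULL, WORKER_PRIO, 0, 0);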
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove leading/trailing blank lines in .c, .h, .py, .rst, .yml, and
.yaml files.
Will avoid failures with the new CI test in
https://github.com/zephyrproject-rtos/ci-tools/pull/112, though it only
checks changed files.
Move the 'target-notes' target in boards/xtensa/odroid_go/doc/index.rst
to get rid of the trailing blank line there. It was probably misplaced.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
Mark the old time conversion APIs deprecated, leave compatibility
macros in place, and replace all usage with the new API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There are two sets of code supporting x86_64: x86_64 using the x32
ABI, and x86 long mode. This consolidates both into one x86_64
architecture and SoC supporting truly 64-bit mode.
- Removes the x86_64:x32 architecture and SoC, and replaces them with
  the existing x86 long mode arch and SoC.
- Replaces qemu_x86_64 with qemu_x86_long, renamed as qemu_x86_64.
- Updates samples and tests to remove references to qemu_x86_long.
- Renames CONFIG_X86_LONGMODE to CONFIG_X86_64.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This test case has a tolerance of 1 ms, but systems with a tick rate
slower than 1000 ticks/sec may spill outside the 1 ms tolerance.
The tolerance now adapts to the system's ticks/sec, e.g. QEMU targets
have 100 ticks/sec -> the tolerance is 10 ms.
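A sketch of how the tolerance can be derived from the tick rate (the
macro name is hypothetical):

    /* One tick expressed in ms, but never below the original 1 ms:
     * 100 ticks/sec -> 10 ms, 1000+ ticks/sec -> 1 ms.
     */
    #define TOLERANCE_MS MAX(1, 1000 / CONFIG_SYS_CLOCK_TICKS_PER_SEC)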
Signed-off-by: Andrei Gansari <andrei.gansari@nxp.com>
Re-run with updated script to convert integer literal delay arguments
to k_thread_create and K_THREAD_DEFINE to use the standard timeout
macros.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Re-run with updated script to convert integer literal delay arguments to
k_sleep to use the standard timeout macros.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Use the int_literal_to_timeout Coccinelle script to convert literal
integer arguments for kernel API timeout parameters to the standard
timeout value representations.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This test was testing for an undocumented and somewhat hyperspecific
behavior: when a process reaches a reschedule point and yields to a
higher priority thread, and there is another equal priority thread
active, which thread gets to run when the higher priority thread
finishes its work? The original scheduler (because it leaves the
older thread in place in the list) implements the preemption like an
interrupt and returns to the original thread, despite the fact that
this then resets its time slice quantum unfairly. In SMP mode, where
the current threads cannot live in the active list, the thread gets
added back to the end of the queue and the other thread runs. In
effect, in UP mode "yield" and "reschedule" mean very slightly
different things, whereas in SMP they act the same.
We don't document either behavior, as it happens. Relax the test
constraints by adding a single deliberate k_yield() to unify behavior.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
We were testing the value of a volatile variable inside a zassert,
which static analysis doesn't like. In principle, it might be
volatile because it's an MMIO register or something and the read is a
side effect, and assertions are only optionally compiled in. (Except
here the value is just regular memory marked volatile for thread
safety, and zassert will never be elided in a test, but the tool
doesn't know that.)
Refactor a little so we always read the variable in a way the tool can
detect is consistent.
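The refactor boils down to reading the volatile into a local and
asserting on the copy, roughly (names are illustrative):

    /* The read now happens unconditionally, outside the assertion. */
    bool flag_copy = volatile_flag;

    zassert_true(flag_copy, "thread did not run");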
Fixes #18446
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This commit includes tweaks in several tests, so that they pass on
the ARM QEMU targets mps2_an385 and mps2_an521 with the QEMU 4.x
release.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Add a new test for unlocking nested scheduler lock. Make sure that
k_sched_unlock() isn't unconditionally a preemption point.
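In outline, the test nests the lock and checks that only the final
unlock is allowed to reschedule; a hedged sketch (the 'preempted' flag
is hypothetical):

    k_sched_lock();
    k_sched_lock();

    /* Inner unlock: still locked, so no preemption may happen here. */
    k_sched_unlock();
    zassert_false(preempted, "nested k_sched_unlock() preempted");

    /* Outer unlock: the scheduler lock is released, preemption is
     * allowed again.
     */
    k_sched_unlock();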
Signed-off-by: Yasushi SHOJI <y-shoji@ispace-inc.com>
When ticks are sub-millisecond, the math produces minimum and maximum
values for the slice duration test that are equal. But because of
aliasing across tick boundaries, it's always possible (for any time
period, nothing specific to time slicing here) to measure one tick
more than an intended duration. So make sure there's always at least
a range of 1ms.
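Concretely, the guard could look something like this (a sketch; the
real test may express it differently):

    uint32_t min_ms = k_ticks_to_ms_floor32(slice_ticks);
    uint32_t max_ms = k_ticks_to_ms_ceil32(slice_ticks);

    /* With sub-millisecond ticks min and max can collapse to the same
     * value, yet aliasing can always add one extra tick, so keep the
     * window at least 1 ms wide.
     */
    if (max_ms == min_ms) {
            max_ms += 1;
    }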
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This test case is so timing sensitive that gathering code
coverage data screws up the results.
Since this is an abnormal execution environment anyway,
just skip the assertions if CONFIG_COVERAGE=y.
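In practice that means guarding the timing checks, along these lines
(a sketch with illustrative names):

    /* Coverage instrumentation distorts the timing, so only check the
     * measured interval when coverage is disabled.
     */
    if (!IS_ENABLED(CONFIG_COVERAGE)) {
            zassert_true(delta_cyc <= expected_cyc + tolerance_cyc,
                         "slice ran too long");
    }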
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We didn't have any coverage of the system call handlers for
k_wakeup() and k_is_preempt().
Increase RAM requirements due to stack alignment constraints
on MPU platforms when user mode is enabled.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fix how the tstacks array was declared extern so this
actually compiles on all platforms with user mode enabled.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Void pointers are widely used as universal arguments. However, when
compiling for a 64-bit target, the compiler doesn't like it when an
int is converted to a pointer and vice versa, despite the presence of
a cast. This is due to a width mismatch between ints (32 bits) and
pointers (64 bits). The trick is to cast to a widening integer type
such as intptr_t and then cast to void*.
When appropriate, the INT_TO_POINTER macro is used instead of this
double cast to make things clearer. The converse with POINTER_TO_INT
is also done, which also serves as good code annotation.
While at it, remove unneeded casts from void* to specific pointer
types in the vicinity, and move to typed variables upon function entry
to make the code cleaner.
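For example, passing a small integer through a thread entry's void *
argument (a sketch; the thread object, stack and sizes are
illustrative):

    static void thread_entry(void *p1, void *p2, void *p3)
    {
            int thread_id = POINTER_TO_INT(p1);  /* narrow back */

            ARG_UNUSED(p2);
            ARG_UNUSED(p3);
            printk("thread %d running\n", thread_id);
    }

    static void spawn(int thread_id)
    {
            /* Widen via intptr_t (inside INT_TO_POINTER) instead of
             * casting an int straight to void *.
             */
            k_thread_create(&tdata, tstack, STACK_SIZE, thread_entry,
                            INT_TO_POINTER(thread_id), NULL, NULL,
                            K_PRIO_PREEMPT(0), 0, K_NO_WAIT);
    }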
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>