After enabling the FPU context switch, one condvar testcase failed
because the order in which the threads are spawned cannot be
guaranteed. Add a delay to make sure the thread that initializes the
condvar runs first.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
The newly added testcase test_nop failed in CI. Give RISCV more
arch_nop() instructions to achieve one cycle.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
The new test failed, which means we missed something in CI or the
failing platform changed after CI was initially run. Skip it for now
while we investigate.
Do some minor cleanup in the metadata.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add test cases for direct interrupts on the x86 and posix
architectures. We register two direct interrupts at build time, then
trigger the interrupts and check whether the ISR handlers have
executed. We also check that irq_enable() and irq_disable() work.
We add an extra compiler option "-mgeneral-regs-only" to make this
work on x86, because there might be some existing x87 instructions
executing inside the interrupt context.
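For reference, a minimal sketch of how such a direct-interrupt case is
wired up with the direct ISR macros; the IRQ line, priority and the
trigger_irq() helper names below are placeholders, not necessarily the
values used by the test:

  static volatile int direct_isr_ran;

  ISR_DIRECT_DECLARE(direct_isr_handler)
  {
          direct_isr_ran = 1;  /* record that the ISR executed */
          return 1;            /* request a reschedule on ISR exit */
  }

  static void check_direct_irq(void)
  {
          /* registration is resolved at build time */
          IRQ_DIRECT_CONNECT(TEST_IRQ_LINE, TEST_IRQ_PRIO,
                             direct_isr_handler, 0);
          irq_enable(TEST_IRQ_LINE);
          trigger_irq(TEST_IRQ_LINE); /* helper from interrupt_util.h */
          zassert_true(direct_isr_ran, "direct ISR did not run");
  }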
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add test cases for regular interrupts on the x86 architecture. This
tests basic functionality of IRQ_CONNECT(), irq_enable(), irq_disable(),
irq_lock(), irq_unlock().
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add a test case for the arch interface arch_nop(); the main focus
here is code coverage. arch_nop() has architecture-specific
implementations and behaves differently on different platforms. As a
side effect, this also measures how many cycles it takes on platforms
that support it.
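For platforms with a working cycle counter, the measurement amounts to
something like the following sketch (the number of nops and the way
the result is reported are illustrative only):

  void measure_nop(void)
  {
          uint32_t start = k_cycle_get_32();

          arch_nop();
          arch_nop();
          arch_nop();
          arch_nop();

          /* rough cost of the nop sequence in hardware cycles */
          printk("4 x arch_nop() took %u cycles\n",
                 k_cycle_get_32() - start);
  }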
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Remove this intrusive tracing feature in favor of the new object tracing
using the main tracing feature in Zephyr. See #33603 for the new tracing
coverage for all objects.
This will allow for support in more tools and less reliance on GDB for
tracing objects.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Legacy k_work API has been marked deprecated, but it is still present
in tree and should be tested. Avoid CI warnings by disabling warnings
on use of deprecated API within the test source files.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Add 3 test cases for offloading jobs from an ISR, including:
1. test_isr_offload_job_multiple()
Validate that whether the offloaded work executes immediately depends
on its priority, and that it offloads to different k_work items.
2. test_isr_offload_job_identi()
Validate that whether the offloaded work executes immediately depends
on its priority, and that it offloads to the identical k_work item.
3. test_isr_offload_job()
Use a dynamic interrupt instead of irq_offload() to verify the
offloaded work.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add a testcase. Create two preemptible threads with equal priority
that atomically access the same atomic value. Because these
preemptible threads have equal priority, enable time slicing so that
both get scheduled. The main thread executes for some time, during
which the two sub-threads are scheduled in turn according to the time
slice.
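A minimal sketch of the idea, assuming the usual atomic and time-slice
APIs (the loop count, slice length and priority are illustrative):

  static atomic_t counter = ATOMIC_INIT(0);

  static void atomic_worker(void *a, void *b, void *c)
  {
          for (int i = 0; i < 1000; i++) {
                  atomic_inc(&counter);  /* atomic read-modify-write */
          }
  }

  void test_atomic_with_time_slice(void)
  {
          /* 1 ms slice so the two equal-priority preemptible threads
           * alternate on the CPU
           */
          k_sched_time_slice_set(1, K_PRIO_PREEMPT(1));
          /* ... spawn the two threads running atomic_worker() ... */
  }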
Signed-off-by: Ying ming <mingx.ying@intel.com>
Calling k_thread_resume() when the thread is not suspended has no
effect. This change improves coverage of k_thread_resume() in
sched.c.
Signed-off-by: Ying ming <mingx.ying@intel.com>
First, this test is a little suspect. It's assuming that the value
returned from k_cycle_get_32() represents the time since system
power-on. While that's an obvious implementation choice and surely
often true, it's definitely not the way we document this API to the
arch layer. It's perfectly legal for a platform to return any value
as long as the counter is increasing at the correct rate. Leaving for
now as there's no other way to test CONFIG_BOOT_DELAY, but this will
likely be coming back to confuse us at some point.
Regardless, that convention holds for x86 devices using any of the
existing drivers. But on an EFI PC using the TSC counter as the clock
source: (1) the counter is running at 1-2 GHz and (2) the time to get
through an EFI BIOS and into Zephyr is routinely 10+ seconds,
especially on reference hardware. The poor 32 bit API will roll over
several times, and effectively be a random number by the time it
reaches this test.
Just skip this test when a fast counter is in use.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add QEMU board with single core ARCv3 HS6x 64 bit CPU
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
This adds a test for z_phys_unmap() to make sure that memory
can be unmapped and is no longer accessible.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This waits a bit for NRU eviction algorithm (which is the default)
to work its magic to clear the access bit of physical frames.
This increases the number of clean pages which can be evicted,
to make sure the number of clean pages evicted is not zero, which
would cause an assertion.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The test itself is highly sensitive to the size of the kernel
image. When the kernel gets larger, the number of pages used by
the backing store needs to shrink. Hence this change.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds some tests to make sure sys_bitarray_*() are
working correctly.
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This test fails on qemu_arc_{em|hs} consistently, due to a bug in
qemu_arc; details:
https://github.com/foss-for-synopsys-dwc-arc-processors/qemu/issues/14.
To get clean results we need to disable this test until the bug is
fixed and the fix gets propagated to a new Zephyr SDK.
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
This PR adds 2 module test cases:
- test_smp_release_global_lock() and test_smp_release_global_lock_irq()
verify that z_smp_release_global_lock() works.
And 1 integration test case:
- test_inc_concurrency() to verify that parallel increment operations
will fail if no synchronization is applied on SMP.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Use the `pm_device_*` prefix for the device runtime PM API. This adds
the API to the `pm` namespace, making it clear that it is part of the
PM subsystem.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds k_thread_join() to the thread being used in
test_sem_take_timeout_isr() to avoid a thread re-use error
in the test after this one.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The permission to use multiple_condvar is not granted to the
test_condvar_multiple_threads_wait_wake test, which results in a
bunch of permission error messages and means the condition variables
are not actually being tested. This grants the test threads
permission on those condition variables. Also, replace the k_yield()
with k_msleep() to allow all created threads time to run. A simple
k_yield() might let only a few run before the next batch of "waking"
threads starts to run, resulting in attempts to wake condition
variables that have not yet been initialized.
Fixes #34777
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds FPU sharing support with a lazy context switching algorithm.
Every thread is allowed to use FPU/SIMD registers. In fact, the compiler
may insert FPU register accesses in any context to optimize even non-FP
code unless the -mgeneral-regs-only compiler flag is used, but Zephyr
currently doesn't support such a build.
With this patch it is therefore possible to do FP access in ISRs as
well, although IRQs are then disabled to prevent nested IRQs in such
cases.
Because the thread object grows in size, some tests have to be adjusted.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Added a test which verifies that when multithreading is disabled,
exceptions are correctly handled by the kernel.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Extended the mheap_api_concept test suite to support the case when
multithreading is disabled.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Extended the test to validate the following functionality:
- k_busy_wait
- k_timer
- irq_lock/irq_unlock
- k_cpu_idle
- SYS_INIT()
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Extended the test to validate that the timer API works as expected
when CONFIG_MULTITHREADING=n.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Add a test case to validate that adding a new partition to a memory
domain beyond its specified maximum number of partitions causes an
assertion failure.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
The comment in this test says that it cannot use ztest, as the latter
spawns some threads. However, still format the output in a way
compatible with ztest output, by using tc_util.h macros. This is
similar to a few other tests which can't use the ztest library directly.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Found out that important requirements are not tested by the current
kernel objects tests. Decided to fix that situation.
Newly added tests:
1. test_kobj_assign_perms_on_alloc_obj()
Create a kernel semaphore object, dynamically allocating it from the
calling thread's resource pool.
Check that the object's address is within the bounds of that memory
pool.
Then check that the requestor thread is implicitly assigned permission
on the allocated object by using the semaphore API k_sem_init().
2. test_no_ref_dyn_kobj_release_mem()
Dynamically allocated kernel objects whose access is controlled by
the permission system use object permissions as a reference count.
If no threads have access to an object, the object's memory is
released.
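A minimal sketch of the first scenario, assuming the standard dynamic
kernel object API (the check against the resource pool bounds is only
indicated by the comment):

  static void check_alloc_sem(void)
  {
          struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

          zassert_not_null(sem, "allocation from resource pool failed");

          /* the object comes from the calling thread's resource pool,
           * and the caller is implicitly granted permission on it, so
           * initializing it from this thread must succeed
           */
          k_sem_init(sem, 0, 1);
  }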
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
- If device PM is not supported, -ENOSYS is returned; update the test
case to account for that
- Remove usage of device_pm_control_nop
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Make some changes to two condvar test cases to fit testing under SMP,
and shorten the test cases' execution time.
Fixes #33558.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Try to remove the CONFIG_MP_NUM_CPUS=1 configuration from the condvar,
sysmutex and semaphore tests, in order to exercise SMP conditions more.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
The return value is documented to be true if the work was pending, but
the implementation returned true only if the work was actually running
(i.e. the caller had to wait). It should also return true if
scheduled or submitted work was cancelled.
Note that this means the return value cannot be used to determine
whether the call slept.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
After the introduction of usart1 the kernel/gen_isr_table test could
no longer build, due to an interrupt conflict.
Adapt TEST_NUM_IRQS to resolve the conflict.
Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
There's a typedef for non-pointer values compatible with atomic
non-pointer objects. Add a similar typedef for pointer values, and
the corresponding macro for initializing atomic pointer types.
This also will simplify replacing the Zephyr atomic API with one
based on C11 atomics, should that be desirable. C11 atomic pointer
values are not void*.
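As an illustration, the new type and initializer would be used along
the following lines (a sketch, assuming the atomic pointer accessors
atomic_ptr_set()/atomic_ptr_get() are available alongside the type):

  static struct k_sem ready_sem;
  static atomic_ptr_t current = ATOMIC_PTR_INIT(NULL);

  void producer(void)
  {
          /* publish a pointer atomically */
          atomic_ptr_set(&current, &ready_sem);
  }

  void consumer(void)
  {
          /* read it back from another context */
          struct k_sem *sem = atomic_ptr_get(&current);

          ARG_UNUSED(sem);
  }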
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Modify the testcase design to solve the problem that some threads
can't lock the mutex. Use an array index to detect the order in which
threads acquire the mutex, instead of relying on delays.
Fixes #34116
Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
This adds bits to support using timing functions for displaying
paging histograms. Currently only qemu_x86_tiny is supported.
Also shorten the test names.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.
Also extends this to gather per-thread demand paging
statistics.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
test_prevent_interruption() uses a key for irq_lock(), but the key
has an incorrect data type. This commit makes the key an unsigned
int, according to the API docs.
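The corrected pattern, for reference (irq_lock() returns an unsigned
int key that must be handed back to irq_unlock()):

  unsigned int key;

  key = irq_lock();   /* key is unsigned int, per the API docs */
  /* ... code that must not be interrupted ... */
  irq_unlock(key);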
Fixes #34023
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
Design a new test that sends and receives mailbox messages between
threads of different priorities, and verify that the higher-priority
receiving thread receives first.
Signed-off-by: Jian Kang <jianx.kang@intel.com>
Add two testcases to test the semaphore feature at the system level.
For example, test semaphore usage and the synchronization process
between threads of different priorities to verify that the semaphore
can be taken by the higher-priority thread, and give the semaphore
more times than the maximum count set at init to verify that the
semaphore count stays correct.
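A minimal sketch of the second scenario, assuming k_sem_count_get()
is used for the check (the limit of 3 is illustrative):

  void test_sem_count_limit(void)
  {
          struct k_sem sem;

          k_sem_init(&sem, 0, 3);       /* maximum count of 3 */

          for (int i = 0; i < 5; i++) { /* give more than the limit */
                  k_sem_give(&sem);
          }

          /* the count must saturate at the configured maximum */
          zassert_equal(k_sem_count_get(&sem), 3,
                        "count exceeded the limit");
  }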
Signed-off-by: Jian Kang <jianx.kang@intel.com>
This is an integration testcase for SMP. It tests the situation when
SMP is configured: a fatal error can be invoked on a different core,
and the system workqueue can also run on a different core.
Signed-off-by: Ying ming <mingx.ying@intel.com>
Test point:
1. Any number of threads may wait on an empty FIFO simultaneously.
2. When a data item is added, it is given to the highest priority
thread that has waited longest.
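A minimal sketch of the second point, assuming the standard k_fifo
API (data items need a reserved first word for the kernel's use):

  struct fifo_item {
          void *fifo_reserved;  /* first word reserved for the kernel */
          uint32_t payload;
  };

  static struct k_fifo fifo;      /* k_fifo_init(&fifo) in test setup */
  static struct fifo_item item;

  void waiter(void)
  {
          /* any number of threads may block here simultaneously */
          struct fifo_item *got = k_fifo_get(&fifo, K_FOREVER);

          ARG_UNUSED(got);
  }

  void producer(void)
  {
          /* goes to the highest-priority thread that waited longest */
          k_fifo_put(&fifo, &item);
  }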
Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
Shared data can't live on thread stacks if they are incoherent. These
are all just per-test-case data, so make them static.
Fixes#33898
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This function is being called across the tree; there is no reason why
it should not be a public API.
The current usage violates a few MISRA rules.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Split ARM and ARM64 architectures.
Details:
- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
(arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved in the
boards/bcm_vk/viper directory
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
interrupt_util.h provides utilities for triggering IRQs; move them
into the testsuite so that all test cases that need them can make use
of them.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add a stack integration testcase to test popping data from a stack
with multiple threads, verifying that data will be popped first by
the highest-priority thread that has waited longest.
Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
k_work_schedule() is supposed to be a no-op if the work item is
already scheduled or submitted: the previous schedule is left
unchanged. The check incorrectly inhibited the schedule operation
when the work item was neither scheduled nor submitted, but was
running.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
There was a linker script change that broke the sorting such that
priorities 2 and 20 would not necessarily get sorted correctly. Modify
the test to try and catch any such issues in the future.
We modify the DEVICE_DEFINE of the larger priority first, so if the
linker isn't sorting it would get linked first in theory, and we also
tweak the priority value from 4 to 20 so if we aren't sorting correctly
between 2 and 20 we'll catch that.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
We are setting CONFIG_GEN_PRIV_STACKS when AArch64 actually uses a
statically allocated privileged stack.
This error was not captured by the tests because we only verify whether
a read/write to a privileged stack is failing, but it can fail for a lot
of reasons including when the pointer to the privileged stack is not
initialized at all, like in this case.
With this patch we deselect CONFIG_GEN_PRIV_STACKS and we fix the
mem_protect/userspace test to correctly probe the privileged stack.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
* Rename CPU_ARCV2 to ISA_ARCV2. That helps to avoid conflicts between
CPU family naming and ISA naming, and aligns these options
with other ARC OSS projects.
* Generalize ARCV2 check to ARC check where it is required.
NOTE: we add ISA_ARCV2 option in a choice list as a preparation
for ISA_ARCV3 addition.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Add some error test cases for the userspace part of the memory
protection module. This increases test code coverage.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
This feature predated the tickless kernel and has been in legacy mode
for a while. We now have no drivers or systems that do not support
tickless, so remove this option and cleanup the code to only use
tickless.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The clock/timer APIs are not application facing APIs, however, similar
to arch_ and a few other APIs they are available to implement drivers
and add support for new hardware and are documented and available to be
used outside of the clock/kernel subsystems.
Remove the leading z_ and provide them as clock_* APIs for someone
writing a new timer driver to use.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Added test_pipe_get_large to cover branches in both k_pipe_put and
k_pipe_get. Added trivial testcases in test_pipe_avail_no_buffer to
cover trivial branches for k_pipe_read_avail and k_pipe_write_avail.
This is the second patch as the continuation of #31037.
Signed-off-by: Shihao Shen <shihao.shen@intel.com>
Added back test_pipe_alloc because the z_thread_malloc called in the
API has been updated to use k_heap instead of k_mem_pool.
Adjusted test_resource_pool_auto_free by replacing z_mem_pool_malloc
with k_heap_alloc. Added new test_k_pipe_cleanup to cover one more
branch in k_pipe_cleanup. Modified test_half_pipe_put_get to cover
branches for (reader != NULL) in k_pipe_put. Added test
test_pipe_get_put to cover branches for (writer != NULL)
in k_pipe_get. Added trivial tests to cover input validity checks.
Line coverage has been improved by 52%, function cov by 56%, and
branch cov by 46%.
Signed-off-by: Shihao Shen <shihao.shen@intel.com>
This is an integration testcase for mem_heap.
Add a testcase to verify that multiple threads
can share the same heap space without interfering
with each other.
Signed-off-by: Ying ming <mingx.ying@intel.com>
Fix issues #33114 and #33120. Modify the testcase that failed on iotdk
and nsim. This test does not need the receiving thread ID when
invoking k_mbox_data_get() with a NULL param. The testcase's purpose
is to invoke this API with a NULL buffer and a NULL receive_id. Using
an uninitialized receive id would cause a fatal error.
Signed-off-by: Jian Kang <jianx.kang@intel.com>
This reverts commit 9de70a78fe.
The tests have been updated so there is no need to skip tests
when the kernel is linked in virtual address space.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Replace *_thread_resource_pool_assign() in the reference with the new
k_thread_heap_assign(), since both k_thread_resource_pool_assign() and
z_thread_resource_pool_assign() were removed prior to v2.5 (by the
commits c770cab1a3 and 3c2c1d85b0 respectively) along with the
k_mem_pool API removal.
For the resource pool inheritance test, the variables containing the
"res_pool" string have been replaced by "heap_mem" to align with the
documentation fix. No functionality has been changed.
Signed-off-by: Yasushi SHOJI <yashi@spacecubics.com>
This is the same problem as seen for #32053. Refer to that for the
details and propose a similar fix.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The original implementation of resubmitting a delayed work item
removed the item not only from the schedule, but also from the work
queue if it was already in the work queue. This is not the semantics
of the new implementation, which will leave the work item in the queue
if the previous deadline had elapsed and the work item was submitted.
The new semantics is preferred, as it improves consistency with SMP
targets where once an item has been submitted to a queue it can be run
at any time, and scheduling it again doesn't magically reverse the
submission. The original test would never have passed on an SMP
target, and passes now on qemu_x86 only because the timing granularity
prevents the work item from being both scheduled and queued at the
same time.
The problematic test application is the one developed for the original
implementation. Correct functioning of the new implementation is
fully verified by the sibling work test. That the legacy API does not
precisely preserve the original behavior where it was not consistent
between SMP and uniprocessor targets is regrettable, but unavoidable.
Remove the tests that cannot pass reliably.
Also fix a missing reset() after a test.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Improve the test case for CPU exceptions.
Add equivalence classes and input partition testing
when giving an integer reason code.
Signed-off-by: Ying ming <mingx.ying@intel.com>
Putting IPC elements on the stack isn't allowed when KERNEL_COHERENCE
is set, just make test case data static (not all apps or subsystems
are going to work with incoherent stacks, but we should support it
where we can).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This platform has a tiny handful of remaining tests that fail. We
will track them as issues, but let's exclude them from integration
testing to allow full runs to complete. Often a hung device in one
test will break an entire twister run.
Tests with known (and tracked) failures:
samples/application_development/external_lib
samples/posix/eventfd
samples/userspace/hello_world_user
tests/kernel/fatal/message_capture
tests/net/socket/socketpair
tests/portability/cmsis_rtos_v2
These tests never fail in isolated testing, but are reliable timeouts
when run in sequence in a big twister run. It's possible that the bug
here may be in twister or the flash/serial scripts:
tests/crypto/tinycrypt
tests/subsys/logging/log_immediate
tests/subsys/logging/log_output
See: #32836
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
CONFIG_KERNEL_COHERENCE forbids synchronized data on the stack: no
spinlocks, IPC primitives, or things that contain them. Application
code obviously doesn't have to follow these inconvenient rules, but
our test code needs to run on platforms with incoherent stack memory.
Make these things static.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This test understood that it can't demand equality in timing because
of races against real time, so it simply validated that the test
started at or later than the expected timeout expiration.
But when calculating the expected time, it called k_uptime_ticks()
AFTER the timeout was registered. So on systems with fast ticks (or
just bad luck) a tick expiring between the two steps will look like an
"early" expiration and fail the test. Do things in the proper order.
Also, use the correct APIs for unit conversion and timeout
construction.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There is a race between k_sem_take() and k_object_access_grant() so it
is possible (especially when testing SMP) that the thread tries to take
the semaphore before the originating thread has had the chance to
grant it permission.
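One way to avoid the race, sketched under the assumption that the
thread is created paused and only started once the grant is in place
(worker_thread, worker_stack, worker_entry and shared_sem are
placeholder names):

  static void start_worker(void)
  {
          /* create the user thread but do not let it run yet */
          k_tid_t tid = k_thread_create(&worker_thread, worker_stack,
                                  K_THREAD_STACK_SIZEOF(worker_stack),
                                  worker_entry, NULL, NULL, NULL,
                                  K_PRIO_PREEMPT(0), K_USER, K_FOREVER);

          /* grant access before the thread can call k_sem_take() */
          k_object_access_grant(&shared_sem, tid);

          k_thread_start(tid);
  }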
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Add a testcase for deadline_set. Test the situation when threads are
in the unqueued state. The k_thread_deadline_set() call should not
make these threads run before their delay time passes.
Signed-off-by: Ying ming <mingx.ying@intel.com>
These tests would pass pointers to data on their own stacks to other
threads, which is forbidden when CONFIG_KERNEL_COHERENCE (because
stack memory isn't cache-coherent). Make the variables static.
Also, queue had two sleeps of 2 ticks (having been written in an era
where that meant "20-30ms"), and on a device with a 50 kHz tick rate
that's not very much time at all. It would sometimes fail spuriously
because the spawned threads didn't consume the queue entries in time.
How about 10ms of real time instead?
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Putting spinlocks (or things containing them) onto the stack is a
KERNEL_COHERENCE violation. This doesn't need to be there so just
make it static.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
New power states have more granularity than deep sleep and sleep
states. Just get rid of this and keep the same behavior for now.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Test to ensure that a reset with a waiting thread properly aborts the
wait, and the semaphore remains functional after.
Signed-off-by: James Harris <james.harris@intel.com>
Debugging long-tail semaphore test failures currently is rather
annoying, both because many semaphore test failures do not print
their failing values, and because some semaphore tests do not
check return codes, leading to test failures well after the actual
failure.
Redo the semaphore tests to at least give consistent failure
messages including the actual return code and consistently check
return codes of k_sem_* APIs.
Also driveby-fix several places that used an insufficiently-
sized type to store k_uptime.
Signed-off-by: James Harris <james.harris@intel.com>
Improve the dynamic interrupt test cases for platforms such as x86,
x86_64 and native_posix; this improves their code coverage.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
This kconfig isn't actually exercised in this test, it's just being
used to compute some sleep durations. Also I want it gone.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When the kernel links in virtual address space, the data
structures needed for z_phys_map() no longer point to physical
addresses (which are required for z_phys_map() to work). So skip
these tests if CONFIG_KERNEL_LINK_IN_VIRT=y.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Now that the old API has been reimplemented with the new API remove
the old implementation and its tests.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The new API cannot be used from userspace because it is not merely a
wrapper around existing userspace-capable objects (threads and
queues), but instead requires much more complex and lower-level access
to memory that can't be touched from userspace. The vast majority of
work queue users are operating from privileged mode, so there's little
motivation to go through the pain and complexity of converting all
functions to system calls.
Copy the necessary pieces of the existing userspace work queue API out
and expose them with new names and types:
* k_work_handler_t becomes k_work_user_handler_t
* k_work becomes k_work_user
* k_work_q becomes k_work_user_q
etc. Because the replacement API cannot use the same types, new API
names are also introduced to make it clearer that the userspace
work queue API is a separate piece of functionality.
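For illustration, the renamed API is used along these lines (a sketch
with placeholder names, assuming the userspace queue start call
carries over from the old API in the same way):

  K_THREAD_STACK_DEFINE(user_wq_stack, 1024);
  static struct k_work_user_q user_wq;
  static struct k_work_user user_work;

  static void user_work_handler(struct k_work_user *work)
  {
          /* runs in the user-mode workqueue thread */
  }

  void start_user_workqueue(void)
  {
          k_work_user_queue_start(&user_wq, user_wq_stack,
                                  K_THREAD_STACK_SIZEOF(user_wq_stack),
                                  K_PRIO_PREEMPT(0), "user_wq");
          k_work_user_init(&user_work, user_work_handler);
          k_work_user_submit_to_queue(&user_wq, &user_work);
  }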
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Enable the null-pointer dereferencing detection by default
throughout the test-suite. Explicitly disable this for the
gen_isr_table test which needs to perform vector table reads.
Disable null-pointer exception detection on qemu_cortex_m3
board, as the DWT is not emulated by QEMU on this platform.
Additionally, disable null-pointer exception detection on
mps2_an521 (QEMU target), as DWT is not present and the MPU
based solution won't work, since the target does not have
the area 0x0 - 0x400 mapped, but the QEMU still permits
read access.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Add some testcases to test different conditions of the mailbox API.
Check that the API robustly handles different input parameters,
such as invoking the API to get data with a NULL input.
Signed-off-by: Jian Kang <jianx.kang@intel.com>
Add some testcases to exercise abnormal branches,
to cover branches that were not covered. Meanwhile,
use the common fatal error handler function to handle
fatal errors.
Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
Enable CONFIG_TEST in the message_capture test suite.
This allows certain Kconfig configurations, depending
on TEST, to be enabled.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Allow the test to run for non-secure firmware builds, by
removing the test-case for nonsense string, as this test-case
will likely produce a secure fault which will crash the kernel.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
tests/kernel/interrupt tests interrupt trigger functionality,
however, the Non-Secure Cortex-M mode does not have full control
of the interrupt handling, so this test cannot be guaranteed to
pass when executing in Non-Secure mode. Filter the test out for
Non-Secure Cortex-M builds.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Exclude the test_null_dynamic_name test-suite from running the
test, in Non-Secure mode (Cortex-M), because passing a NULL
device name de-references memory at 0x0 which is likely to
cause a SecureFault and crash the kernel.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The test_timeout_abs case had baked in similar mistakes to the
off-by-one in the absolute timer implementation. FOR THE RECORD:
If you have an absolute timeout expiration set for a tick value "N",
and the current time returned by k_uptime_ticks() is "T", then the
time returned (at the same moment) by any of the *_remaining_ticks()
APIs must ALWAYS AND FOREVER BE EXACTLY "N - T" (also: "N - T > 0"
always, until the moment the kernel ISR hands off control to the first
timeout handler expiring at that tick).
The tick math is exact. No slop is needed on any systems, no matter
whether their clocks divide by milliseconds or not.
The only gotcha is that we need to be sure that the calls don't
interleave with a real time tick advance, which we do here with a
simple retry loop.
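Expressed as code, the invariant and the retry loop look roughly like
this (a sketch against a k_timer whose absolute expiration is at tick
N; the timer and N are placeholders):

  int64_t t;
  k_ticks_t remaining;

  /* retry so both values are sampled within the same tick */
  do {
          t = k_uptime_ticks();
          remaining = k_timer_remaining_ticks(&timer);
  } while (k_uptime_ticks() != t);

  /* exact, with no slop allowed */
  zassert_equal(remaining, N - t, "absolute timeout math is off");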
But, about slop... This patch also includes a related fix for the
test_sleep_abs(). On an intel_adsp (which has 50 kHz ticks, a
comparatively slow idle resume and interrupt entry, and even has two
CPUs to mess with latency measurements) I would occasionally see the
k_sleep() take more than a tick to wake up from the interrupt handler
until the return to application code. Add some real time slop there
(just 100us) to handle systems like this.
Fixes #32572
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
THIS COMMIT DELIBERATELY BREAKS BISECTABILITY FOR EASE OF REVIEW.
SKIP IF YOU LAND HERE.
Remove the existing implementation of k_thread_abort(),
k_thread_join(), and the attendant facilities in the thread subsystem
and idle thread that support them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When the kernel is TICKLESS, timeouts are set as needed, and drivers
all have some minimum amount of time before which they can reliably
schedule an interrupt. When this happens, drivers will kick the
requested interrupt out by one tick. This means that it's not
reliably possible to get a timeout set for "one tick in the
future"[1].
And attempting to do that is dangerous anyway. If the driver will
delay a one-tick interrupt, then code that repeatedly tries to
schedule an imminent interrupt may end up in a state where it is
constantly pushing the interrupt out into the future, and timer
interrupts stop arriving! The timeout layer actually has protection
against this case.
Finally getting to the point: in recent changes, the timeslice layer
lost its integration with the "imminent" test in the timeout code, so
it's now able to run into this situation: very rapidly context
switching code (or rapidly arriving interrupts) will have the effect
of infinitely[2] delaying timeouts and stalling the whole timeout
subsystem.
Don't try to be fancy. Just clamp timeslice duration such that a
slice is 2 ticks at minimum and we'll never hit the problem. Adjust
the two tests that were explicitly requesting very short slice rates.
[1] Of course, the tradeoff is that the tick rate can be 100x higher
or more, so on balance tickless is a huge win.
[2] Actually it only lasts until a 31 bit signed rollover in the HPET
cycle count in practice.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
gen_isr_table uses 6 IRQs for testing. Originally, it used IRQs 41-36.
However, IRQs 37 & 36 are enabled by other modules on NPCX chips.
Change TEST_NUM_IRQS to use 45-40 for the test.
Signed-off-by: Wealian Liao <WHLIAO@nuvoton.com>
When calculating the expected interval for threads other than
the first one, the test uses ms->ticks->cycles conversion to
figure out the bound of cycles permitted. Both lower and upper
bound conversions are using the k_*_to_*_floor32(). When
numbers involved are not wholly divisible, decimal points are
being truncated, resulting in incorrect intervals, and thus
failing tests. So change the calculation to appropriate
floor() or ceil() based on the boundary.
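In other words, the bound computation changes along these lines (a
sketch; the variable names are illustrative):

  /* lower bound: round down, so a wakeup at the earliest legal cycle
   * still passes
   */
  uint32_t lower_cyc = k_ms_to_cyc_floor32(expected_ms);

  /* upper bound: round up, so truncation can no longer shrink the
   * allowed window and fail the test spuriously
   */
  uint32_t upper_cyc = k_ms_to_cyc_ceil32(expected_ms + slop_ms);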
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Absolute timeouts were covered, but nothing was testing their actual
expiration time and there was an off-by-one.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Besides implementing a customized pm_policy_next_state() in the
application layer, a customized device policy handler of power
management, pm_policy_low_power_devices(), is also needed if
CONFIG_PM_POLICY_APP is enabled. This CL adds this function to prevent
build errors.
Signed-off-by: Mulin Chao <mlchao@nuvoton.com>
This adds qemu_x86_lakemont to the platform allow list
for the FPU sharing tests. Since Lakemont supports SSE3
and SSSE3, it is better to test them also.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Correct a bunch of precision/analysis errors in this test:
* Test items weren't consistent about tick alignment and resetting of
the timestamp, so put these steps into init_timer_data() and call
that immediately before k_timer_start().
* Many items would calculate the initial timestamp AFTER
k_timer_start(), leading to an extra (third!) point where the timer
computation could alias by an extra tick. Always do this
consistently before the timer is started (via init_timer-data()).
* Tickless systems with high tick rates can easily advance the system
uptime while the timer ISR is running, so the system can't expect
perfect accuracy even there (this test was originally written for
ticked systems where the ISR was by definition happening "at the
same time").
(Unfortunately our most popular high tick rate tickless system,
nRF5, also has a clock that doesn't divide milliseconds exactly, so
it had a special path through all these precision comparisons and
avoided the bugs. We finally found it on a x86 HPET system with 10
kHz ticks.)
* The interval validation was placing a minimum bound on the interval
time but not a maximum (this mistake was what had hidden the failure
to reset the timestamp mentioned above).
Longer term, the millisecond precision math in these tests is at this
point an out of control complexity explosion. We should look at
reworking the core OS tests of k_timer to use tick precision (which is
by definition exact) pervasively and leave the millisecond stuff to a
separate layer testing the alternative/legacy APIs.
Fixes #31964 (probably -- that was reported against up_squared, on
which I had trouble reproducing, but it was a common failure on
ehl_crb).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds X86 keyword to the kconfigs to indicate these are
for x86. The old options are still there marked as
deprecated.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
A fairly common idiom in our test code is to put test-local data
structures onto the stack, even when they are to be used from another
thread. But stacks are incoherent memory on some platforms, which
means that such things may not get a consistent view of memory between
threads.
Just make these things static. A few of these spots were causing test
failures on intel_adsp_cavs15. More were found by inspection while
hunting for mistakes.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Thread stack memory on coherence platforms needs to be linked into a
special section (so it can be cached).
Also, the test_idle_stack case just can't work with coherence. It's
measuring the CPU's idle stack's unused data, which was initialized at
boot from CPU0, and not necessarily the CPU on which the test is
running. In practice on intel_adsp_cavs15, our CPU has stale zeroes
in the cache for its unused stack area (presumably from a firmware
memory clear at boot or something?). Making this work would require a
cache invalidate on all CPUs at boot time before the idle threads
start, we can't do it here in the test because we don't know where the
idle stack pointer is.
Too much work for an esoteric stack size test, basically. Just
disable on these platforms.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
It was discovered that TLS data/bss in stack need to be
aligned correctly or else incorrect variables would be
accessed. This makes tdata and tbss sections to have
odd sizes to make sure everything still works.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The deadline scheduler as of commit ef626571b2 ("kernel/sched:
Optimize deadline comparison") got an optimization that requires that
the cycle difference between the deadline times of the "first" and
"last" runnable threads never be higher than 2^31.
The test code here was masking off the bottom 31 bits of the generated
deadlines, so it looked OK. But because the actual setting of the
deadline values takes time too, it was still possible to select values
that would roll over. The window was VERY small, but the RNG on one
platform (up_squared) hit it.
Shrink the selected deadlines to live in a 30 bit space for safety.
Fixes #31508
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This reverts commit 79d73063af.
The issue #31333 is fixed so this can be reverted to
enable tests/kernel/context to run with demand paging enabled.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The inheritance test first creates a parent thread, and then
creates a child thread inside this parent thread. At the same
time, inside the main thread, a resource pool is assigned to
the parent thread. However, under SMP it is possible that the
pool assignment happens while both the parent and child threads
are fetching their pointers to the pool, since multiple threads
are running. So when the pool pointers are compared, there is a
mismatch (e.g. the parent has a NULL pool pointer while the
child points to the actual pool), and the test fails. Fix this
by delaying the parent from running until after the pool
assignment is done.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When coverage is enabled on x86_64, GCC uses relative addressing
to increment the gcov counters. The generated code of the test
function assumes execution is in the same location where
the linker places the test function. This does not work with
the execution test as it copies the function into another part
of memory and tries to execute from there. When the copied
function starts to run, the instruction pointer is at the newly
copied function. So any relative addressing with regard to
the instruction pointer now is invalid. Instead of
<generated code RIP + offset> for gcov counter as it should be,
now the copied code is trying to access the counter at
<copied code RIP + offset>, which points to incorrect
memory location (and possibly invalid/non-mapped memory).
To fix this, we need to tell GCC not to use relative addressing.
This can be accomplished by telling GCC to use the large memory
model. This is only used for this test as this option increases
code size quite a bit, and should not be used in general.
Fixes #30434
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This puts the transplanted_function into its own section so that
z_phys_map() can correctly map the whole range of memory used
by the function, in case someone decides to expand the function
to be bigger than a MMU page.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The test_mem_domain_migration test creates a new thread with
different priority based on whether SMP is enabled. This causes
an issue where SMP=y and MP_NUM_CPUS=1 where the spin_entry()
would spin forever (with k_busy_wait()) and not yielding since
it has cooperative priority. Fix this by using MP_NUM_CPUS to
figure out which priority to use, as it is valid configuration
to have SMP=y and MP_NUM_CPUS=1.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Increase thread counter before the assert, otherwise in case of fail
the output will give the impression that the same thread ran more than
once.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Adjusting the tick alignment of this test caused it to start breaking
on nRF5 platforms, which use a 32768 Hz clock that doesn't divide
evenly into the millisecond precision used by the test. The "half
slice" math ended up being wrong by a bit.
Convert to ticks first before computing the cycle delay needed.
Fixes #29705
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The implementation of spinlock validation uses the two LSBs at the
bottom of a pointer union to store a CPU index, which only has space
for 4 CPUs, so MP_NUM_CPUS should be <= 4.
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
Clean up the logging menuconfig by grouping configuration into
sections like: mode, processing configuration, backends.
Additionally, remove LOG_ENABLE_FANCY_OUTPUT_FORMATTING which is no
longer in use.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Add some error-condition testcases to verify the robustness of the
API, such as passing NULL to some APIs and checking that the response
is the result we expected.
Signed-off-by: Jian Kang <jianx.kang@intel.com>
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.
Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Until #31333 is resolved, the periodic timer in the eviction
algorithm interacts with this test in such a way that the system
deadlocks.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.
The backing store now always reserves a free storage location
for actual page faults.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
More to be added, but for now show that we can map more
anonymous memory than we physically have, and that reading/
writing to it works as expected.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Show we can measure free memory properly and map a page of
anonymous memory, which has been zeroed and is writable.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add a conf file to make sure the kernel will use the simple linked-list
ready queue as the scheduling algorithm. This increases module testcase
coverage, and the z_priq_dumb_add and z_priq_dumb_remove functions are
called.
Signed-off-by: Ying ming <mingx.ying@intel.com>
This reverts commit b98058ecd0.
With icount finally working in QEMU for ARC these tests start to
pass reliably, so no need to exclude them any longer.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
This reverts commit 6f4f5b1fe5.
With icount finally working in QEMU for ARC these tests start to
pass reliably, so no need to exclude them any longer.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
This reverts commit 27d42f060d.
With icount finally working in QEMU for ARC these tests start to
pass reliably, so no need to exclude them any longer.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Increase the heap memory pool size in the
prj_armv8m_mpu_stack_guard.conf, to match
the value in the default configuration in
prj.conf (and fix an out-of-memory issue
when allocating a kernel object).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Change subsystem to use struct pm_state with substate-id instead of
using only the power state category.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Migrate the whole pm subsystem to use new power states information
from power_state.h and get states and residency properties from
device tree.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Upcoming changes from Andrew that add a global timeout to the kernel
broke because of some voodoo behavior in the kernel/context test. It
will use arch_irq_disable() on the timer interrupt directly to prevent
interrupts and measure timekeeping in their absence. But some
architectures[1] don't reliably deliver interrupts that arrive, which
means that a running timeout that exists across this period will
result in a corrupt timeout queue.
Document that rule for architectures, move the offending test to the
end of the test suite (to minimize the chance of interacting with
other test code) and put a giant warning about the situation on it.
Long term, we may want to rework this test to do its job in other
ways.
[1] On x86, the interrupt disable happens at the IO-APIC level, while
interrupt latching and delivery is downstream in each CPU's Local
APIC. An IO-APIC masked interrupt is completely invisible to the APIC
and can never be delivered once the line goes low.
Fixes #31333
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add some error-condition test cases to verify the robustness of the
API, such as passing NULL to some APIs and checking that the response
is the result we expected.
Signed-off-by: Jian Kang <jianx.kang@intel.com>
Add some testcases exercising failure scenarios
to enhance the coverage of the queue source code,
and add a fatal error handler function so we handle
fatal errors ourselves.
Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
Add some error case testing, such as invoking k_sem_take() with a
duration timeout or setting the input to NULL. This checks whether
the API is robust in error condition handling.
Signed-off-by: Jian Kang <jianx.kang@intel.com>
When using the red/black tree ready queue as the scheduling algorithm,
there is no limit on the number of priority levels. So set
CONFIG_NUM_COOP_PRIORITIES to 30; the testcase test_bad_prooritiesi()
will prove that neither cooperative nor preemptive threads have a
quantitative limit.
Signed-off-by: Ying ming <mingx.ying@intel.com>
1. Add a NULL dynamic name test for device_get_binding().
2. Add a driver whose initialization fails in SYS_INIT.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
1. Remove the error test cases that trigger assertion.
2. Refine the NULL and invalid kobject parameter test case.
3. Use the common fatal error handler to reduce code.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
With the FPU enabled, the printf code size changes, so increase the
main stack size to make the test pass on NXP RT platforms.
Signed-off-by: Hake Huang <hake.huang@oss.nxp.com>
Skip the memory mapping execution test case when code coverage enabled
for qemu_x86_64 platform. See issue #30434.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
The "sentinel" variant of this test runs the same code, but enables
the stack sentinel feature. Inexplicably, it's also disabling
TICKLESS_KERNEL, forcing a timer interrupt at every tick boundary.
That doesn't seem to be required for any test functionality I can see.
And worse, by changing that setting without adjusting the tick rate,
it runs afoul of more modern platforms which were designed with
tickless operation in mind. Specifically, the intel_adsp platforms
have a default tick rate of 50 kHz, which is just too fast for
reasonable operation. It leaves almost no time available for
application code and something falls behind and fails.
Just remove it.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Leftover from old renaming commits. This function is not private and
should not start with underscore.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add some error test cases for spinlocks, including:
1. Validate that an identical spinlock cannot be used recursively.
2. Validate that unlocking an incorrect spinlock will trigger an
assertion.
3. Validate that releasing an incorrect spinlock will trigger an
assertion.
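The first scenario boils down to something like the following sketch
(with CONFIG_SPIN_VALIDATE enabled, the nested k_spin_lock() on the
same lock is expected to trip an assertion rather than silently
deadlock):

  static struct k_spinlock lock;

  void test_spinlock_recursive(void)
  {
          k_spinlock_key_t key = k_spin_lock(&lock);

          /* recursive use of the identical spinlock: should assert */
          k_spinlock_key_t key2 = k_spin_lock(&lock);

          k_spin_unlock(&lock, key2);
          k_spin_unlock(&lock, key);
  }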
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
For a while now, we've had two APIC drivers. The older was preserved
initially as the new (much smaller, "new style") code didn't have
support for Quark interrupt handling. But that's long dead now. Just
remove it.
Note that this migrates the one board using this driver (acrn) to
CONFIG_APIC_TIMER instead.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add some testcases for stack source code coverage,
and add a fatal error handler function to handle the error
caused by a NULL parameter.
Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
Convert tests to DEVICE_{DT_}DEFINE instead of DEVICE_AND_API_INIT
so we can deprecate DEVICE_AND_API_INIT in the future.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
posix mmap(), which might confuse people.
mem_map test case remains the same name as other memory
mapping scenarios will be added in the fullness of time.
Parameter names to z_phys_map adjusted slightly to be more
consistent with names used in other memory mapping functions.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add some error case testing, such as invoking mutex APIs with a NULL
parameter or using them in interrupt context. This checks whether the
API is robust enough in error condition handling.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
TLS relies on both the arch having TLS support and the toolchain
supporting TLS, so add the filter
CONFIG_TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE for
the tests that enable TLS.
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
Modify the test .yaml file, to allow the .gap_filling test
variant to execute only on cortex-m33 platforms with a
sufficient number of MPU regions, copy-pasting the
configuration from the mem_protect/userspace test.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Skip the test_disable_mmu_mpu test case for
Cortex-M non-secure builds, since the test
may enter a BusFault which is not banked
between security states and the system
may hang.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Skip the scenario of accessing a faulty address
in test_string_nlen for Non-Secure Zephyr builds,
because accessing faulty addresses in this case
triggers SecureFault that may hang the system
completely.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
- Remove SYS_ prefix
- shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE
and use PM_ as the prefix for all PM related Kconfigs
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Use of a printk that supports floating point changes the stack
requirements causing kernel.common.stack_protection_arm_fpu_sharing to
fail. The test doesn't need this capability so revert to nano
formatting.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Use the core k_heap API pervasively within our tree instead of the
z_mem_pool wrapper that provided compatibility with the older mempool
implementation.
Almost all of this is straightforward swapping of one alloc/free call
for another. In a few cases where code was holding onto an old-style
"mem_block" a local compatibility struct with a single field has been
swapped in to keep the invasiveness of the changes down.
Note that not all the relevant changes in this patch have in-tree test
coverage, though I validated that it all builds.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove test cases that exercise the deprecated mem_pool features of
the pipe utility.
Note that this leaves comparatively few cases left, we should probably
audit coverage after this merges and rewrite tests that aren't
interdependent.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The mailbox and msgq utilities had API variants that could pass old
mem_pool blocks through the data structure. That API is being
deprecated (and the features were obscure), so remove the internal
support.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The sys_mem_pool data structure is going away. And this test case
didn't actually do much. All it did was create a sys_mem_pool in the
app data section (I guess that's the "mem_protect" part?) and validate
that it was usable. We have tests for sys_heap to do that already
elsewhere anyway; no point in porting.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This test was written to use a TINY system heap (64 bytes) from which
it has to allocate on behalf of a userspace process. The change in
convention from mem_pool (where the byte count now includes metadata
overhead) means it runs out of space. Bump to 192 bytes. Still tiny.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These two test cases were making whitebox assumptions of both the
block header size and memory layout of an old-style k_mem_pool that
aren't honored by the k_heap allocator. They aren't testing anything
that isn't covered elsewhere.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The kernel resource pool is now a k_heap. There is a compatibility
API still, but this is a core test that should be exercising the core
API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The k_mem_pool allocator is no more, and the z_mem_pool compatibility
API is going away. The internal allocator should be a k_heap always.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These were implemented in terms of the mem_pool/block API directly
(for complicated reasons, the pointers returned from this API may have
been allocated from allocators other than the single system heap).
Have them use a k_heap instead.
Requires a tweak to one test which had hard-coded an assumption about
the header size.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Mark all k_mem_pool APIs deprecated for future code. Remaining
internal usage now uses equivalent "z_mem_pool" symbols instead.
Fixes #24358
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove the MEM_POOL_HEAP_BACKEND kconfig, treating it as true always.
Now the legacy mem_pool cannot be enabled and all usage uses the
k_heap/sys_heap backend.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When a work item's flag is set to the pending state, it cannot be
appended to a workqueue. This improves branch coverage of the function
k_work_submit_to_queue().
Signed-off-by: Ying ming <mingx.ying@intel.com>
When defining system calls, it is very important to ensure that
access to the API’s private data is done exclusively through system
call interfaces. Private kernel data should never be made available
to user mode threads directly. For example, the k_queue APIs were
intentionally not made available as they store bookkeeping
information about the queue directly in the queue buffers which are
visible from user mode.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
build_on_all here was supposed to be a smoke test for building on
all platforms; it should not be used for more than just 1 test.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Adds a K_DELAYED_WORK_DEFINE, matching the K_WORK_DEFINE macro, with
accompanying Z_DELAYED_WORK_INITIALIZER macro.
Makes k_delayed_work_init a static inline function, like its K_WORK
counterpart.
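So a statically defined delayed work item can now be declared the
same way as a plain work item, e.g.:

  static void my_work_handler(struct k_work *work)
  {
          /* deferred processing */
  }

  /* mirrors K_WORK_DEFINE(my_work, my_work_handler) */
  K_DELAYED_WORK_DEFINE(my_delayed_work, my_work_handler);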
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
When adding the new partition to a memory domain the system must
assert that it does not overlap with any other existing partitions
in the domain.
Test adding a new partition which has the same start address as an
existing one; after that an assertion error must occur, indicating
that the new partition overlaps the existing one.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
Nothing in the API description of the delayed work structure sanctions
direct reference to its internal fields. Do not assume that a delayed
work item can be initialized in any way other than by invoking the
delayed work item init function. Do not assume that a delayed work
item can be submitted without delay by invoking k_work_submit() with a
reference to the contained work item.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
In test_pipe_user_thread2thread(), what should be tested is
the pipe allocated with k_object_alloc(), rather than
"pipe" and "kpipe". Those two pipes are already tested in
test_pipe_thread2thread().
Signed-off-by: Steven Wang <steven.l.wang@linux.intel.com>
Adds a new CONFIG_MPU which is set if an MPU is enabled. This
is a menuconfig with some MPU-specific options moved
under it.
MEMORY_PROTECTION and SRAM_REGION_PERMISSIONS have been merged.
This configuration depends on an MMU or MPU. The protection
test is updated accordingly.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
CONFIG_FPU selects support for formatting floating point numbers,
which increases the stack requirements for cbprintf, causing this test
to overrun its stack.
Since this test doesn't format floating point numbers, use
CBPRINTF_NANO to revert to using the small-footprint formatter.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Tests that include floating-point format specifications may need
cbprintf FP support. Make sure it's available.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The test reads and writes outside the bounds of an array allocated on
the stack in check_input(). This commit disables the test on SPARC.
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
The BIT_INDEX() macro assumed little-endian. This commit adds
big-endian support, conditioned on the preprocessor define
CONFIG_BIG_ENDIAN.
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
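A hedged sketch of what such an endian-aware macro can look like when the
bit table is stored as 32-bit words but indexed bytewise (the exact
expression in the tree may differ):

    /* byte index of bit 'bit'; exact expression is an assumption */
    #ifdef CONFIG_BIG_ENDIAN
    #define BIT_INDEX(bit)  ((3 - (((bit) >> 3) & 0x3)) + 4 * ((bit) >> 5))
    #else
    #define BIT_INDEX(bit)  ((bit) >> 3)
    #endif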
Using the same implementation as the rest of Zephyr reduces code size.
Update options and expected results for formatting test.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
These tests were suppressed when KERNEL_COHERENCE=y because of a
feature collision with CONFIG_POLL that has since been fixed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These test variants were there to test an older backend to the kernel
queue utility that used k_poll() as the blocking mechanism. That code
got removed a while back, so these tests were just duplicates of the
main cases now. Remove.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Some platforms may have multiple RAM regions which are
discontinuous in the physical memory map. We really want
these to be in a continuous virtual region, and we need to
stop assuming that there is just one SRAM region that is
identity-mapped.
We no longer use CONFIG_SRAM_BASE_ADDRESS and CONFIG_SRAM_SIZE
as the bounds of kernel RAM, and no longer assume in the core
kernel that these are identity mapped at boot.
Two new Kconfigs, CONFIG_KERNEL_VM_BASE and
CONFIG_KERNEL_RAM_SIZE, now indicate the bounds of this region
in virtual memory.
We are currently only memory-mapping physical device driver
MMIO regions so we do not need virtual-to-physical calculations
to re-map RAM yet. When the time comes an architecture interface
will be defined for this.
Platforms which just have one RAM region may continue to
identity-map it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Execute tests are disabled for RISC-V because it isn't able
to set an execution restriction. From the RISC-V documentation:
"Instruction address-translation and protection are unaffected
by the setting of MPRV"
MPRV is used to apply memory protection restrictions when the CPU is
running in machine mode (kernel).
Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
Add a memory region allocation for RISCV architecture.
Also fix an arbitrary value which can't work with
RISC-V granularity.
Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
Add support for the following tests:
- test_write_control
- test_disable_mmu_mpu
- test_read_priv_stack
- test_write_priv_stack
Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
Add doxygen comments for details of test_pipe_thread2thread().
The plan is to do the same thing for all test cases
in test_pipe_contexts.c.
Signed-off-by: Steven Wang <steven.l.wang@linux.intel.com>
We need to make sure that if we migrate a thread to another
memory domain, the migration process doesn't cause the target
thread to explode. This is mostly a concern on SMP systems;
the thread could be running on another CPU at the same time.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Do minor change of the descriptions and doxygen group name in order to
pave the way for generating the test specification.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
We can't control ticks accurately enough to detect the transition
between being on a queue and being handled, so relax the checks to make
things pass.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Modify the gen_isr_table test case to use ztest. Although it was
split into three test cases, the test logic and the tested platforms
are exactly the same as before.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
This test is a little subtle: it wants to spawn three threads to run
and be switched out by a timeslice interrupt. And it wants to consume
half a time slice itself before it starts running. And, because
timeslicing runs out of the same tick framework in the timer driver,
it needs to align to the start of a tick before the process starts.
And further: it does its own time math not in ticks but in timer
cycles, so it's quite sensitive to slop.
But its "synchronize to tick boundary" code was actually
synchronizing to a CYCLE boundary, which is just wrong. And it was
doing this in the wrong order: it was resetting the timeslice first
and then synchronizing to a tick by spinning, which means that the
test was always going to begin as much as a tick late. Do the tick
synchronization (via a sleep) first.
Finally, the manager thread that was spawning the new threads lives at
the same priority as the highest priority child threads, which means
it can potentially wake up on the semaphores that they are giving in
the middle of the test and consume CPU unexpectedly. Make sure it's
sleeping for the duration.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There is a race condition between the child threads
exiting, and the child threads getting re-used in the
next scenario. This reproduces more often on SMP systems.
Close the race by joining on the child threads before
exiting any test scenario.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The existing test cases' doxygen comments only describe the general
implementation idea of a function. On this basis, add more
descriptive statements covering which conditions need to be preset
when running the test case, which test techniques are applied, and
the test case design steps in detail.
This makes them more readable.
Signed-off-by: Ying ming <mingx.ying@intel.com>
1. Add more detailed info to make the purpose and process
of the test cases clearer, including test goal,
test steps, input, judging criteria, constraints, etc.;
these can be seen in our Zephyr documentation.
2. Add some negative test code.
Signed-off-by: YouhuaX Zhu <youhuax.zhu@intel.com>
1. Add more detailed info to make the purpose and process
of the test cases clearer, including test goal,
test steps, input, judging criteria, constraints, etc.;
these can be seen in our Zephyr documentation.
2. Add more negative test cases.
Signed-off-by: YouhuaX Zhu <youhuax.zhu@intel.com>
This suite now uses far less memory and is much simpler.
We still maintain coverage of all the memory domain APIs
and ensure that the maximum number of partitions can be
applied.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The tests test_mem_part_auto_determ_size and
test_mem_part_auto_determ_size_per_mpu are supposed to
just be checking the construction of automatic memory
partitions.
test_mem_part_auto_determ_size had a bunch of extraneous
stuff covered by other test cases and reserved three
different thread stacks.
These two tests have been drastically simplified and
combined.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
test_mem_part_inherity_by_child_thr duplicates logic already
present in test_permission_inheritance. That test puts a
buffer called 'inherit_buf' in 'inherit_memory_partition'
and shows that it is accessible by a child thread by
writing to it.
Delete this unnecessary test.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Tests are now grouped in the C file they occur in.
test_mark_thread_exit_uninitialized no longer occurs twice.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Put all globals only used in this C file in static scope, which
revealed that a few of them were not used at all.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We now just use two memory domains; the default domain and an
'alternate_domain' used for tests that need to handle a memory
domain switch.
Along the way the test code was simplified.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
For compatibility layers like CMSIS where thread objects
are drawn from a pool, provide a context pointer to the
exited thread object so it may be freed.
This is somewhat obscure and has no supporting APIs or
overview documentation and should be considered a private
kernel feature. Applications should really be using
k_thread_join() instead.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add a k_usleep() in test_timer_duration_period test to align ticks
before starting the timer. This fixes some rare off-by-1 failures.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
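A minimal sketch of the alignment trick (timer and handler names are
illustrative, not the actual test symbols):

    #include <kernel.h>

    /* illustrative names */
    static void expire_fn(struct k_timer *timer)
    {
        /* expiry work */
    }

    K_TIMER_DEFINE(duration_timer, expire_fn, NULL);

    void start_aligned(void)
    {
        /* Sleeping even 1 us returns control on a tick boundary, so the
         * timer below starts tick-aligned and the first period is not
         * short by a partial tick.
         */
        k_usleep(1);
        k_timer_start(&duration_timer, K_MSEC(100), K_MSEC(50));
    }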
There are now two timer drivers available for various xtensa
platforms. Select based on their driver and not the architecture.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The k_poll implementation places a struct _poller on the stack and
shares it with other threads, which is incompatible with the
KERNEL_COHERENCE model of cached stacks.
Make this a hard build failure instead of a kconfig dependency for
clarity. The failures if a user actually enables both are subtle and
difficult to debug.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
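A sketch of how the incompatibility can be surfaced at build time rather
than hidden behind a Kconfig dependency (the exact wording and location
in the tree may differ):

    /* message text and placement assumed */
    #if defined(CONFIG_KERNEL_COHERENCE) && defined(CONFIG_POLL)
    #error "CONFIG_POLL is incompatible with CONFIG_KERNEL_COHERENCE (on-stack struct _poller is shared between threads)"
    #endif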
Ensure that both the main thread and any static threads are
properly assigned to the default memory domain.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This test is generating build warnings as it is making
checks that can never be false.
This reverts commit a4f1a5f58f.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Found out that important requirements are not tested by the current
kernel object tests. Decided to fix that situation.
Newly added tests:
1. test_kobj_assign_perms_on_alloc_obj()
Create kernel object semaphore, dynamically allocate it from the
calling thread's resource pool.
Check that object's address is in bounds of that memory pool.
Then check the requestor thread will implicitly be assigned
permission on the allocated object by using
semaphore API k_sem_init()
2. test_no_ref_dyn_kobj_release_mem()
Dynamically allocated kernel objects whose access is controlled by
the permission system use object permissions as a reference count.
If no threads have access to an object, the object's memory is released.
3. test_krnl_obj_static_alloc_build_time()
Take the addresses of the kernel objects which are statically allocated
at build time and verify that they are not null.
Such kernel objects shouldn't require manual
registration by the end user.
4. Clean-up. Removed unused variable from userspace test.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
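A small sketch of the pattern exercised by test 1 (the function name is
made up; the allocation comes from the calling thread's resource pool and
implicitly grants the caller permission):

    #include <kernel.h>
    #include <ztest.h>

    /* illustrative wrapper, not the actual test function */
    void alloc_sem_dynamically(void)
    {
        struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

        zassert_not_null(sem, "resource pool exhausted");

        /* The allocating thread already has permission, so this works
         * from user mode without an explicit grant.
         */
        k_sem_init(sem, 0, 1);
    }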
I reviewed that test to find a bug's root cause; unfortunately,
the bug disappeared, so there is nothing to fix, but I noticed several
misprints and wrong comment styles. It's something, at least.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
This was causing stack usage to be right on the margin
for some platforms, without a clear reason why it
needs to be here (it was copied from another test case
which no longer exists).
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We don't need 3 different threads/stacks and the stack size
can be smaller, the threads don't do much.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Detection of transition from delayed to pending can fail in some cases
if the timeouts are not precisely managed.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The current implementation of delayed work will cancel and re-submit a
pending work item that is no-wait, putting it at the back of the
queue. Verify this behavior.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
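The behavior under test, sketched with the legacy API (queue, item and
handler names are illustrative):

    /* illustrative names */
    static void dwork_handler(struct k_work *work)
    {
        /* work body */
    }

    static struct k_delayed_work dwork;

    void resubmit_no_wait(struct k_work_q *queue)
    {
        k_delayed_work_init(&dwork, dwork_handler);

        /* First no-wait submission: queued immediately. */
        k_delayed_work_submit_to_queue(queue, &dwork, K_NO_WAIT);

        /* Submitting again while still pending cancels the pending item
         * and re-queues it at the back of the queue.
         */
        k_delayed_work_submit_to_queue(queue, &dwork, K_NO_WAIT);
    }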
The current implementation of delayed work retains a pointer to the
queue unless the work item is successfully cancelled, preventing a
completed item from being resubmitted to a different queue. Confirm
this behavior and its workaround.
Also validates some unsuccessful cancel return values.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Pass a pointer to the work item member rather than casting the
augmented work item pointer to a base work item pointer.
Also the return type of k_work_pending() is bool, so use that rather
than comparing it to zero.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
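Illustrative before/after ('struct triggered_test_item' is a made-up name
standing in for the augmented work item):

    /* hypothetical augmented work item */
    struct triggered_test_item {
        struct k_work work;
        int payload;
    };

    static struct triggered_test_item item;

    void submit_item(void)
    {
        /* Pass the embedded member's address rather than casting the
         * containing struct pointer to struct k_work *.
         */
        k_work_submit(&item.work);

        /* k_work_pending() returns bool; use it directly instead of
         * comparing against zero.
         */
        if (k_work_pending(&item.work)) {
            /* still queued or executing */
        }
    }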
The memset in the 'blow_up_stack' function can be optimized
away as it is called at the end of the function on a buffer
allocated on the stack (so it has 'no' effect on program
execution).
The 'stack_smasher' call can be optimized away as its result
isn't used anywhere and the stack_smasher function has no visible
side effects.
Fix that by disabling optimization on these functions.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
We use several variables (like do_sleep, etc.) to share
statuses between threads; however, they are not marked as
volatile. That may lead to their unexpected optimization
(that really happens with ARC MWDT when the loop waiting
on the sleep timeout in 'wakeup_src_thread' is optimized
away). Fix that by defining these variables as volatile.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
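The gist of the fix, as a condensed sketch (only do_sleep is named in the
test; the waiting loop here is simplified):

    #include <kernel.h>

    /* Shared between threads: volatile so the compiler cannot cache the
     * value in a register and delete the polling loop.
     */
    static volatile bool do_sleep;

    void wait_for_wakeup(void)   /* condensed, illustrative */
    {
        while (do_sleep) {
            k_msleep(10);   /* loop is kept now that do_sleep is volatile */
        }
    }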
The __attribute__((optimize("-O0"))) attribute is used to disable
optimization of some test functions. ARC MWDT toolchain doesn't
support it, however it supports __attribute__((optnone)) with
similar functionality.
Define __no_optimization attribute across all toolchains so it
can be used in tests.
NOTE: we don't define __no_optimization for XCC as it includes
GCC header with __no_optimization defined.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
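A hedged sketch of how such an attribute can be defined per toolchain and
applied (the guards and spellings here are assumptions; XCC keeps whatever
its bundled GCC header already provides):

    #include <string.h>

    /* guards below are assumptions about each toolchain's predefines */
    #if defined(__CCAC__)                        /* ARC MWDT */
    #define __no_optimization __attribute__((optnone))
    #elif defined(__GNUC__) && !defined(__XCC__)
    #define __no_optimization __attribute__((optimize("-O0")))
    #endif

    /* usage: keep the on-stack memset from being optimized away */
    static void __no_optimization blow_up_stack(void)
    {
        char buf[128];

        memset(buf, 0xaa, sizeof(buf));
    }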
The compiler may optimize away the write to the RO region and the
following readback, so we won't trigger a fault (that actually
happens with the ARC MWDT toolchain).
Add volatile to avoid that.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
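Sketch of the pattern, with the RO address passed in as a parameter
(function and parameter names are illustrative):

    #include <ztest.h>

    void write_to_ro(uint32_t *ro_addr)   /* illustrative */
    {
        volatile uint32_t *ptr = (volatile uint32_t *)ro_addr;

        /* The volatile store (and readback) cannot be elided, so the
         * MPU/MMU fault is actually raised.
         */
        *ptr = 0xdeadbeef;
        (void)*ptr;

        zassert_unreachable("write to RO region did not fault");
    }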
The test_triggered_wait_expired test submits the items with
2*SUBMIT_WAIT timeout and waits for the timeout to expire
so the items are being worked on. It waits one SUBMIT_WAIT
and checks none of the items have started. Then waits
another SUBMIT_WAIT to check if they have all finished.
However, since the timeout is at 2*SUBMIT_WAIT, the work
queue may have just started going through the list of items.
This means some items may have started while others have not.
This results in the test failing as not all items have
finished. So lengthen the second sleep to allow items to
finish before checking.
Fixes #28589
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a more detailed description for the test cases of arch_curr_cpu()
and arch_sched_ipi(). This is in order to make the purpose and process
of the test cases clearer.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Update the pm test case to fix some errors from the last submission.
Add a test case that simply checks the device_pm_enable and
device_pm_disable interfaces.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add new tests to improve Zephyr QA testing of the memory
protection for memory domains and partitions.
I created new tests for memory protection -> memory partitions for
the requirements which I think need to be tested.
I added a Doxygen tag for each test to make it clear
what each test is doing and how.
New tests for memory domains and partitions:
-test_mem_domain_api_kernel_thread_only()
By creating that test I wanted to prove that access to memory
domain APIs must be restricted to supervisor threads only.
At the same time I wanted to prove that the system can support the
definition of memory domains.
-test_mem_part_auto_determ_size()
By creating that test I want to prove that the system can
automatically determine application memory partition base addresses
and sizes at build time, determined by their contents. It also proves
the system can support the definition of memory partitions. At the
same time the test proves that the OS supports adding and removing
a thread from its memory domain assignment.
-test_mem_part_auto_determ_size_per_mmu()
That test is very important and it proves that memory partitions are
automatically sized and aligned per the constraints of the platform's
memory management hardware.
-test_mem_part_inheirt_by_child_thr()
Prove that a child thread inherits the memory domain assignment of
its parent.
-test_macros_obtain_names_data_bss()
Test that the system provides tools to obtain the names of the data and BSS
sections related to a particular application memory partition at
build time.
-test_mem_part_assign_bss_vars_zero()
Test that global data and BSS values can be assigned to application
memory partitions using macros at build time. Test that BSS values
will be zeroed at build time.
1. Made changes according to the reviews.
2. Switched test_mem_part_assert_add_overmax
and test_create_new_invalid_prio_thread_from_user.
That way I exposed a problem (bug) with an assertion
at L171 of kernel/mem_protect.c.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
Show that trampolining a thread's self-abort to the idle thread
works and that we have set the idle stack size sufficiently
for this, PM hooks, and dynamic kernel object cleanup.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
No functional change, just add some extra printouts and comments
to make the expected sequencing a little clearer.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Reduce the error between the timer (which is tick-aligned) and
busy_wait (which is not) by aligning the busy_wait to start at
a tick boundary.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Some ARM platforms now enable HW Stack Protection by
default in the board definition. So if some tests
need to run without stack protection, it is not
sufficient to disable TEST_HW_STACK_PROTECTION;
we need to explicitly disable HW_STACK_PROTECTION.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Move init_timer_data() out of k_usleep() tick alignment.
Compute rem_ticks just after busy_wait_ms() to avoid slew
due to 'now' and 'rem_ms' computations.
With a slow 32 MHz CPU: -2 ticks.
Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
Insert k_usleep(1) just before k_timer_start()
to guarantee tick alignment for the "test_timer_k_define" step.
Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
When doing test_thread_join with OTHER_ABORT_TIMEOUT, the interval
between two k_uptime_get() calls includes the two k_thread_create()
calls, which means the interval delta does not exactly count the time spent
in k_thread_join(). On x86_64 with userspace, time spent inside
k_thread_create() scales with memory size as it needs to create
a new page table for the thread. So to actually measure
the time spent in k_thread_join(), the locations where uptime is
obtained need to be moved.
Fixes #28549
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
nRF51 MCUs are Cortex-M0 running with a 16 MHz clock. The overhead of
work done in k_usleep() requires adding three more ticks (92 us) to the
expected loop iteration time. (Two ticks is enough on most boards, but
some require a little more time.)
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
- They all had the wrong prototype and hard-casts can sometimes
lead to problems
- Several were renamed to something more descriptive
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Align to ticks so the first iteration sleeps long enough
(k_timer_start() rounds its duration argument down, not up,
to a tick boundary)
Fixes #28319
Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>