Every va_start() currently triggers an FPU access trap if the FPU is not
already in use. This is due to the fact that va_start() must copy FPU
registers that are used for float argument passing into the va_list
object. Flushing the FPU context to its owner and granting access to
the current thread is wasteful if this is only for va_start(),
especially since in most cases there are simply no FP arguments
being passed by the caller.
This is made even worse in exception code (syscalls, IRQ handlers,
etc.), which has to be resumed with interrupts disabled upon FPU access,
as there is no provision for preserving an interrupted exception mode's
FPU context.
Fix those issues by simply simulating the sequence of STR instructions
that va_start() generates, without actually granting FPU access. For
now we limit this to exception context only, to keep changes to a
minimum.
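For context, here is a minimal sketch (the function name is made up for
illustration) of the kind of variadic call whose va_start() has to
capture the FP argument registers on AArch64:

    #include <stdarg.h>

    /* Illustrative only: FP varargs are passed in SIMD/FP registers,
     * which va_start() must spill into the va_list register save area. */
    static double sum_doubles(int count, ...)
    {
        va_list ap;
        double total = 0.0;

        va_start(ap, count);  /* compiler emits the FP register stores here */
        for (int i = 0; i < count; i++) {
            total += va_arg(ap, double);
        }
        va_end(ap);

        return total;
    }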
This also allows for reverting the ARM64 exception (special case) in the
nested IRQ test, as it now works properly even if FPU_SHARING is enabled.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add test to statically allocate a minimum-size heap, verify that it
works to allocate a single byte and that it doesn't overrun its memory
bounds.
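As a rough sketch of what the test exercises (the heap size and symbol
names here are placeholders, not the actual test code):

    #include <kernel.h>
    #include <ztest.h>

    /* Statically define a very small heap; 64 bytes is an assumed value,
     * not the real minimum used by the test. */
    K_HEAP_DEFINE(tiny_heap, 64);

    static void test_min_heap_alloc(void)
    {
        void *p = k_heap_alloc(&tiny_heap, 1, K_NO_WAIT);

        zassert_not_null(p, "single-byte allocation failed");
        k_heap_free(&tiny_heap, p);
    }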
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The interrupt offload testcases fail on some boards because the timing
of the delay is too short. Refine the testcases and make them not rely
on the delay timing.
Fixes #35097
Fixes #35241
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
IRQ 57 is reserved in the NXP LPC55S16 SoC. Thus, limit the number of
interrupts reported to the test, so that it does not try to use it.
Fixes #34915
Signed-off-by: Henrik Brix Andersen <henrik@brixandersen.dk>
work_q.c is not being built or used, it was replaced by user_work.c
which now has k_work_user_queue_start.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This test takes an interrupt lock and tries to call z_swap_unlocked()
while holding it. That's not legal (in the general case it means
you're breaking a caller's lock!), though in this particular case it
was safe because we never return to this context.
Regardless, there is a natural z_swap_irqlock() that releases the lock
atomically. Use that.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
It's not at all clear to me why this was set to 1cpu, it's a single
thread doing sequential things. (I tripped over it because the 1cpu
happened to tickle an unrelated arm64 bug with interrupt state. But
we might as well fix it here.)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This test case was taking a (traditional) irq_lock(), which masks
interrupts, and then calling k_mutex_lock() with a timeout of
K_FOREVER, which is a blocking call. That's not legal, because it
will obviously schedule other threads to run in a context where the
code was promised it would not. This used to be an uncaught error,
but now we have an assertion that catches this.
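Simplified, the problematic pattern looked something like this (a
sketch, not the verbatim test code):

    #include <kernel.h>

    K_MUTEX_DEFINE(smutex);  /* stand-in for the test's mutex */

    static void bad_pattern(void)
    {
        unsigned int key = irq_lock();  /* interrupts masked: no rescheduling expected */

        /* Blocking call: may schedule other threads, breaking the promise
         * made by irq_lock(). This is what the new assertion catches. */
        k_mutex_lock(&smutex, K_FOREVER);

        k_mutex_unlock(&smutex);
        irq_unlock(key);
    }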
It's not clear what this test case is supposed to be testing, as the
behavior is actually identical to the release_global_lock case except
for the (incorrect) addition of the irq_lock(). If this is needed for
code coverage we can work to figure out the real root cause of the
missing coverage later.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Kernel objects that contain embedded synchronization structures like
spinlocks can't be placed in the (cached/incoherent) stack memory on
coherence platforms like intel_adsp.
The normal fix in a test case is just to make the offending data
static, but that's painful here because SYS_BITARRAY_DEFINE declares
two objects (i.e. you can't put a "static" in front of it as with
similar macros) and it happens to be used in this case to define local
variables with colliding names, so I'd have to go in and rename
everything.
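For reference, this is roughly how the macro is used at file scope (a
sketch with hypothetical names; the macro defines both the storage and
the sys_bitarray_t object, which is why a plain "static" prefix doesn't
work with it):

    #include <sys/bitarray.h>

    SYS_BITARRAY_DEFINE(ba, 128);

    static void use_bitarray(void)
    {
        size_t offset;

        if (sys_bitarray_alloc(&ba, 4, &offset) == 0) {
            sys_bitarray_free(&ba, 4, offset);
        }
    }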
And there's little value anyway. Bitarrays are nearly-pure data
structures and extremely unlikely to show up platform-dependent
behavior.
Fixes #35242
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Two semaphore testcases failed on ADSP because the timeout value we
got back from the child thread was invalid. Put the variable in BSS
instead of on the stack to try to avoid this.
Fixes #34687
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add semaphores to ensure all services have started before the client
queries them; otherwise the client's service queries may fail.
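Conceptually, the synchronization looks something like this (symbol
names are illustrative, not the actual test code):

    #include <kernel.h>

    K_SEM_DEFINE(service_ready, 0, 1);

    static void service_thread(void *a, void *b, void *c)
    {
        /* ... register and start the service ... */
        k_sem_give(&service_ready);  /* announce the service is up */
    }

    static void client_thread(void *a, void *b, void *c)
    {
        k_sem_take(&service_ready, K_FOREVER);  /* wait for the service */
        /* ... now it is safe to query the service ... */
    }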
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
A few mistakes in recent changes to this test:
There was a "LOCK_NO" (i.e. no locking!) case being exercised in
test_inc_concurrency, where three threads would race against each
other incrementing and decrementing a single count without
synchronization. And... it failed on cAVS. Because there was no
synchronization. Just remove.
The LOCK_IRQ (irq_un/lock()) case of the same test was taking a pointer
to the integer that stored the irq_lock() result and casting the pointer
value to an integer instead of dereferencing it.
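In other words, the bug boiled down to something like this (simplified
sketch):

    #include <kernel.h>

    static void lock_unlock_via_pointer(void)
    {
        unsigned int key = irq_lock();
        void *saved = &key;  /* the test stashes a pointer to the key */

        /* Broken: casts the pointer value itself to an integer:
         *   irq_unlock((unsigned int)(uintptr_t)saved);
         * Correct: dereference the pointer to recover the saved key. */
        irq_unlock(*(unsigned int *)saved);
    }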
Also the workq test had a work item on the stack, which is forbidden
when KERNEL_COHERENCE=y.
Fixes #34152
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Do not use -1 as an input parameter for an unsigned int. Use zero
instead of -1 as the invalid interrupt number to fix the Coverity
warning.
Fixes #35146
CID: 235994
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Update and refine the testcase test_fatal_on_smp() and correct some
inappropriate usage such as an unnecessary irq_lock(). This prevents
errors from propagating to the testcases that execute later.
Fixes #35200
Fixes #35202
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
In the test case test_mheap_threadsafe, we create 3 threads using the
same thread handler tmheap_handler. Make thread_id a local variable,
otherwise tmheap_handler is not reentrant.
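The gist of the change, as a sketch (details of the real handler body
differ):

    #include <kernel.h>

    static void tmheap_handler(void *arg1, void *arg2, void *arg3)
    {
        /* Keep the per-thread index local so the handler stays reentrant
         * when several threads run it concurrently. */
        int thread_id = POINTER_TO_INT(arg1);

        ARG_UNUSED(arg2);
        ARG_UNUSED(arg3);

        /* ... allocate/free from the shared heap using thread_id ... */
    }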
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
The variable cnt is assigned twice in a row, so remove
the first one.
Coverity-CID: 235962
Fixes #35161
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Treat ARMV7_M_ARMV8_M_MAINLINE similarly to ARMV6_M_ARMV8_M_BASELINE
and add arch_nop() calls to test_nop function.
Additionally, add one arch_nop() call to match the comment, and update
comments on other archs where required.
Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
After enabling FPU context switching, one condvar testcase failed
because the order in which threads are spawned cannot be guaranteed.
Add a delay to make sure the thread that initializes the condvar runs
first.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
The newly added testcase test_nop failed the CI. Give RISCV more
arch_nop() instructions to achieve one cycle.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
The new test failed, which means we either missed something in CI or
the failing platform changed after CI was initially run. Skip it for
now while we investigate.
Do some minor cleanup in the metadata.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add test cases of direct interrupts for the x86 and posix arches.
We register two direct interrupts at build time, then trigger the
interrupts and check whether the ISR handlers have executed. We also
check that irq_enable() and irq_disable() work.
An extra compiler option, "-mgeneral-regs-only", is added to make this
work on x86, because there might be some existing x87 instructions
executing inside the interrupt context.
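A sketch of the direct-interrupt pattern being exercised (the IRQ line
number and how the interrupt is triggered are placeholders):

    #include <kernel.h>
    #include <stdbool.h>

    #define TEST_DIRECT_IRQ 17  /* placeholder IRQ line */

    static volatile bool direct_isr_ran;

    ISR_DIRECT_DECLARE(direct_isr)
    {
        direct_isr_ran = true;
        return 1;  /* ask for a rescheduling check on exit */
    }

    static void connect_direct_irq(void)
    {
        IRQ_DIRECT_CONNECT(TEST_DIRECT_IRQ, 0, direct_isr, 0);
        irq_enable(TEST_DIRECT_IRQ);
        /* the test then triggers the IRQ (arch-specific) and checks
         * direct_isr_ran, and repeats after irq_disable() */
    }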
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add test cases of regular interrupts for arch x86. This tests basic
functionality of IRQ_CONNECT(), irq_enable(), irq_disable(),
irq_lock() and irq_unlock().
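A condensed sketch of the pattern (the IRQ number and ISR name are
placeholders):

    #include <kernel.h>

    #define TEST_IRQ      18  /* placeholder IRQ line */
    #define TEST_IRQ_PRIO 3

    static void test_isr(const void *param)
    {
        ARG_UNUSED(param);
        /* record that the ISR ran */
    }

    static void connect_regular_irq(void)
    {
        IRQ_CONNECT(TEST_IRQ, TEST_IRQ_PRIO, test_isr, NULL, 0);
        irq_enable(TEST_IRQ);
        /* ... later the test checks irq_disable() and
         * irq_lock()/irq_unlock() behavior as well */
    }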
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add a test case to test the arch interface arch_nop(); the main focus
here is code coverage. arch_nop() has a per-architecture implementation
and behaves differently on different platforms. This also measures how
many cycles it takes on platforms that support it.
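A rough sketch of the measurement idea (the real test uses more nops
and per-arch adjustments):

    #include <kernel.h>
    #include <sys/printk.h>

    static void measure_nop_cycles(void)
    {
        uint32_t start = k_cycle_get_32();

        arch_nop();
        arch_nop();
        arch_nop();
        arch_nop();

        uint32_t delta = k_cycle_get_32() - start;

        printk("4 x arch_nop() took about %u cycles\n", delta);
    }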
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Remove this intrusive tracing feature in favor of the new object tracing
using the main tracing feature in zephyr. See #33603 for the new tracing
coverage for all objects.
This will allow for support in more tools and less reliance on GDB for
tracing objects.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Legacy k_work API has been marked deprecated, but it is still present
in tree and should be tested. Avoid CI warnings by disabling warnings
on use of deprecated API within the test source files.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Add 3 test cases to test offloading jobs from an ISR, including:
1. test_isr_offload_job_multiple()
Validate that the offloaded work executes immediately or not depending
on its priority, and that it offloads to different k_work items.
2. test_isr_offload_job_identi()
Validate that the offloaded work executes immediately or not depending
on its priority, and that it offloads to the identical k_work item.
3. test_isr_offload_job()
Use a dynamic interrupt instead of irq_offload() to verify the offloaded
work.
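Conceptually, each case follows a pattern like this (simplified; the
handler and work item names are placeholders):

    #include <kernel.h>
    #include <irq_offload.h>

    static struct k_work offload_work;

    static void work_handler(struct k_work *work)
    {
        /* runs in a workqueue thread */
    }

    static void offload_from_isr(const void *param)
    {
        /* runs in interrupt context via irq_offload() */
        k_work_submit(&offload_work);
    }

    static void run_offload_case(void)
    {
        k_work_init(&offload_work, work_handler);
        irq_offload(offload_from_isr, NULL);
        /* then check whether the work already ran, depending on the
         * relative priorities involved */
    }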
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Add a testcase. Create two preemptible threads with equal priority that
atomically access the same atomic value. Because these threads have
equal priority, enable time slicing to get them scheduled. The threads
execute for some time, and during that time the two sub-threads are
scheduled alternately according to the time slice.
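Roughly what the test does, as a sketch with placeholder names and
values:

    #include <kernel.h>

    static atomic_t shared_count = ATOMIC_INIT(0);

    static void atomic_handler(void *a, void *b, void *c)
    {
        for (int i = 0; i < 1000; i++) {
            atomic_inc(&shared_count);  /* safe without extra locking */
        }
    }

    static void enable_slicing(void)
    {
        /* Equal-priority preemptible threads only interleave when time
         * slicing is on; 1 ms slices for priority 0 and below is an
         * assumed value here. */
        k_sched_time_slice_set(1, 0);
    }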
Signed-off-by: Ying ming <mingx.ying@intel.com>
Calling k_thread_resume() when the thread is not suspended has no
effect. This change improves coverage of k_thread_resume() in sched.c.
Signed-off-by: Ying ming <mingx.ying@intel.com>
First, this test is a little suspect. It's assuming that the value
returned from k_cycle_get_32() represents the time since system
power-on. While that's an obvious implementation choice and surely
often true, it's definitely not the way we document this API to the
arch layer. It's perfectly legal for a platform to return any value
as long as the counter is increasing at the correct rate. Leaving for
now as there's no other way to test CONFIG_BOOT_DELAY, but this will
likely be coming back to confuse us at some point.
Regardless, that convention holds for x86 devices using any of the
existing drivers. But on an EFI PC using the TSC counter as the clock
source: (1) the counter is running at 1-2 GHz and (2) the time to get
through an EFI BIOS and into Zephyr is routinely 10+ seconds,
especially on reference hardware. The poor 32 bit API will roll over
several times, and effectively be a random number by the time it
reaches this test.
Just skip this test on platforms with such fast counters.
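For scale: at 2 GHz a 32-bit counter wraps in about 2^32 / 2e9, i.e.
roughly 2.1 seconds, so a 10+ second boot wraps it several times. The
guard could look roughly like this (the function name and threshold are
illustrative, not the actual values used):

    #include <ztest.h>

    static void test_boot_delay(void)
    {
        if (sys_clock_hw_cycles_per_sec() > 100000000U) {
            ztest_test_skip();  /* counter too fast for a 32-bit check */
            return;
        }
        /* ... original check against k_cycle_get_32() ... */
    }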
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add QEMU board with single core ARCv3 HS6x 64 bit CPU
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
This adds a test for z_phys_unmap() to make sure that memory
can be unmapped and is no longer accessible.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This waits a bit for NRU eviction algorithm (which is the default)
to work its magic to clear the access bit of physical frames.
This increases the number of clean pages which can be evicted,
to make sure the number of clean pages evicted is not zero, which
would cause an assertion.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The test itself is highly sensitive to the size of the kernel
image. When the kernel gets larger, the number of pages used by
the backing store needs to shrink. So here it is.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds some tests to make sure sys_bitarray_*() are
working correctly.
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This test fails consistently on qemu_arc_{em|hs} due to a bug in
qemu_arc; details:
https://github.com/foss-for-synopsys-dwc-arc-processors/qemu/issues/14.
To get clean results we need to disable this test until the bug is
fixed and the fix gets propagated to a new Zephyr SDK.
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
This PR adds 2 module test cases:
- test_smp_release_global_lock() and test_smp_release_global_lock_irq()
verify that z_smp_release_global_lock() works.
And 1 integration test case:
- test_inc_concurrency() verifies that parallel increment operations
will fail without synchronization on SMP.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Use `pm_device_*` prefix for the device runtime PM API. This adds the
API to the `pm` namespace, making it clear part of the PM subsystem.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds k_thread_join() to the thread being used in
test_sem_take_timeout_isr() to avoid a thread re-use error
in the test after this one.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The permission to use multiple_condvar is not granted to the test
test_condvar_multiple_threads_wait_wake, which results in a bunch of
permission error messages and means the condition variables are not
actually being tested. This grants the test threads permission to
those condition variables. Also, replace the k_yield() with k_msleep()
to allow all created threads time to run. A simple k_yield() might let
only a few run before the next batch of "waking" threads starts to
run, resulting in some condition variables not being initialized but
trying to wake.
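The fix amounts to something like this before starting each user-mode
thread (a sketch; names and the delay value are placeholders):

    #include <kernel.h>

    K_CONDVAR_DEFINE(multiple_condvar);  /* stand-in for the test's condvars */
    K_MUTEX_DEFINE(test_mutex);

    static void grant_and_start(k_tid_t tid)
    {
        /* User-mode threads need explicit access to kernel objects. */
        k_thread_access_grant(tid, &multiple_condvar, &test_mutex);
        k_thread_start(tid);
        k_msleep(10);  /* give the waiters time to initialize and block */
    }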
Fixes #34777
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds FPU sharing support with a lazy context switching algorithm.
Every thread is allowed to use FPU/SIMD registers. In fact, the compiler
may insert FPU reg accesses in any context to optimize even non-FP code
unless the -mgeneral-regs-only compiler flag is used, but Zephyr
currently doesn't support such a build.
It is therefore possible with this patch to do FP accesses in ISRs as
well, although IRQs are then disabled to prevent nested IRQs in such
cases.
Because the thread object grows in size, some tests have to be adjusted.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>