A fairly common idiom in our test code is to put test-local data
structures onto the stack, even when they are to be used from another
thread. But stacks are incoherent memory on some platforms, which
means that threads may not see a consistent view of such data.
Just make these things static. A few of these spots were causing test
failures on intel_adsp_cavs15. More were found by inspection while
hunting for mistakes.
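A minimal sketch of the kind of change (the semaphore and its use are
illustrative, not one of the exact spots fixed here):

    /* Was a stack local; another thread may see stale contents on
     * cache-incoherent stacks.  Static storage is coherent.
     */
    static struct k_sem sync_sem;

    static void test_cross_thread_data(void)
    {
        k_sem_init(&sync_sem, 0, 1);
        /* hand &sync_sem to another thread, then wait on it */
        k_sem_take(&sync_sem, K_FOREVER);
    }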
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Thread stack memory on coherence platforms needs to be linked into a
special section (so it can be cached).
Also, the test_idle_stack case just can't work with coherence. It's
measuring the CPU's idle stack's unused data, which was initialized at
boot from CPU0, and not necessarily the CPU on which the test is
running. In practice on intel_adsp_cavs15, our CPU has stale zeroes
in the cache for its unused stack area (presumably from a firmware
memory clear at boot or something?). Making this work would require a
cache invalidate on all CPUs at boot time before the idle threads
start; we can't do it here in the test because we don't know where the
idle stack pointer is.
Too much work for an esoteric stack size test, basically. Just
disable on these platforms.
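A minimal sketch of the disable (assuming CONFIG_KERNEL_COHERENCE is
the distinguishing option; the real change may instead filter these
platforms in the test configuration):

    static void test_idle_stack(void)
    {
    #ifdef CONFIG_KERNEL_COHERENCE
        ztest_test_skip();
    #else
        /* ... existing unused-idle-stack measurement ... */
    #endif
    }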
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
It was discovered that TLS data/bss in the stack need to be
aligned correctly or else incorrect variables would be
accessed. This makes the tdata and tbss sections have
odd sizes to make sure everything still works.
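A minimal sketch of the idea (variable names are illustrative):

    /* Deliberately odd-sized thread-local variables so tdata and
     * tbss get sizes that are not naturally aligned, exercising the
     * alignment handling in the TLS setup code.
     */
    __thread char tls_data_odd[3] = { 0x11, 0x22, 0x33 }; /* tdata */
    __thread char tls_bss_odd[5];                          /* tbss  */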
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The deadline scheduler as of commit ef626571b2 ("kernel/sched:
Optimize deadline comparison") got an optimization that requires that
the cycle difference between the deadline times of the "first" and
"last" runnable thread never be higher than 2^31.
The test code here was masking the generated deadlines down to their
bottom 31 bits, so it looked OK. But because the actual setting of the
deadline values takes time too, it was still possible to select values
that would roll over. The window was VERY small, but the RNG on one
platform (up_squared) hit it.
Shrink the selected deadlines to live in a 30 bit space for safety.
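A minimal sketch of the fix (the random source is illustrative):

    /* Constrain deadlines to 30 bits so the cycle difference between
     * the "first" and "last" deadline stays well below the 2^31
     * limit, even after the time spent setting them.
     */
    int deadline = sys_rand32_get() & 0x3fffffff;

    k_thread_deadline_set(tid, deadline);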
Fixes #31508
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This reverts commit 79d73063af.
Issue #31333 is fixed, so this can be reverted to
enable tests/kernel/context to run with demand paging enabled.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The inheritance test first creates a parent thread, and then
creates a child thread inside this parent thread. At the same
time, the main thread assigns a resource pool to the parent
thread. However, under SMP it is possible that the pool
assignment happens in between the parent and child threads
reading their pool pointers, since multiple threads are running
concurrently. The pool pointer comparison then finds a mismatch
(for example, the parent has a NULL pool pointer while the child
points to the actual pool), and the test fails. Fix this by
delaying the parent from running until after the pool assignment
is done.
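A minimal sketch of the fix (thread parameters and the pool
assignment call are assumptions; the test's own helpers may differ):

    /* Create the parent with a start delay so it cannot run yet. */
    k_tid_t parent = k_thread_create(&parent_thread, parent_stack,
                                     K_THREAD_STACK_SIZEOF(parent_stack),
                                     parent_entry, NULL, NULL, NULL,
                                     PARENT_PRIO, K_INHERIT_PERMS,
                                     K_FOREVER);

    /* Assign the resource pool first... */
    k_thread_heap_assign(parent, &test_pool);

    /* ...then let the parent (and in turn the child) run. */
    k_thread_start(parent);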
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When coverage is enabled on x86_64, GCC uses relative addressing
to increment the gcov counters. The generated code of the test
function assumes execution is in the same location where
the linker places the test function. This does not work with
the execution test as it copies the function into another part
of memory and tries to execute from there. When the copied
function starts to run, the instruction pointer is at the newly
copied function. So any addressing relative to the instruction
pointer is now invalid. Instead of accessing the gcov counter at
<generated code RIP + offset> as it should, the copied code now
tries to access it at <copied code RIP + offset>, which points to
an incorrect memory location (and possibly invalid/non-mapped
memory).
To fix this, we need to tell GCC not to use relative addressing.
This can be accomplished by telling GCC to use the large memory
model. This is only used for this test as this option increases
code size quite a bit, and should not be used in general.
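A minimal sketch of how the option might be applied in the test's
CMakeLists.txt (the exact guard condition is an assumption):

    # Coverage on x86_64 needs non-RIP-relative gcov counter
    # accesses for code that gets copied and executed elsewhere.
    if(CONFIG_COVERAGE AND CONFIG_X86_64)
      zephyr_compile_options(-mcmodel=large)
    endif()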
Fixes #30434
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This puts the transplanted_function into its own section so that
z_phys_map() can correctly map the whole range of memory used
by the function, in case someone decides to expand the function
to be bigger than a MMU page.
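A minimal sketch of the idea (the section name is illustrative):

    /* A dedicated section keeps the function contiguous so the full
     * range it occupies can be mapped, even past one MMU page.
     */
    __attribute__((__section__(".transplanted_fn")))
    static void transplanted_function(void)
    {
        /* body copied elsewhere and executed from the new mapping */
    }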
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The test_mem_domain_migration test creates a new thread with
different priority based on whether SMP is enabled. This causes
an issue when SMP=y and MP_NUM_CPUS=1, where spin_entry() would
spin forever (with k_busy_wait()) without ever yielding, since it
has cooperative priority. Fix this by using MP_NUM_CPUS to figure
out which priority to use, as SMP=y with MP_NUM_CPUS=1 is a valid
configuration.
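A minimal sketch of the priority selection (macro names are
illustrative):

    /* With a single CPU, a cooperative spinner would never be
     * preempted, so fall back to a preemptible priority there.
     */
    #if CONFIG_MP_NUM_CPUS > 1
    #define SPIN_PRIO K_PRIO_COOP(2)
    #else
    #define SPIN_PRIO K_PRIO_PREEMPT(2)
    #endif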
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Increase the thread counter before the assert; otherwise, in case of
failure, the output gives the impression that the same thread ran more
than once.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Adjusting the tick alignment of this test caused it to start breaking
on nRF5 platforms, which use a 32768 Hz clock that doesn't divide
evenly into the millisecond precision used by the test. The "half
slice" math ended up being wrong by a bit.
Convert to ticks first before computing the cycle delay needed.
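A minimal sketch of the conversion order (the slice constant is
illustrative):

    /* Round through ticks first so the delay matches what a 32768 Hz
     * timer can actually represent, then derive the cycle count.
     */
    uint32_t half_slice_ticks = k_ms_to_ticks_ceil32(SLICE_SIZE_MS) / 2;
    uint32_t delay_cycles = k_ticks_to_cyc_floor32(half_slice_ticks);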
Fixes #29705
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The spinlock validation implementation uses the two least significant
bits of a pointer union to store a CPU index, which only has space
for 4 CPUs, so MP_NUM_CPUS must be <= 4.
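A minimal sketch of why the limit exists (names are illustrative):

    /* Only the two low-order bits of a 4-byte-aligned pointer are
     * free, so the stored CPU index can only be 0..3.
     */
    uintptr_t bundle = (uintptr_t)thread_ptr | (cpu_id & 0x3);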
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
Clean up logging menuconfig by grouping configuration into
sections like: mode, processing configuration, backends.
Additionally, removed LOG_ENABLE_FANCY_OUTPUT_FORMATTING, which is no
longer in use.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Add some error-condition test cases to verify the robustness of the
API: for example, pass NULL to some APIs and check whether the
response matches the expected result.
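A minimal sketch of such a negative test (my_api_init() is a
hypothetical API, used purely for illustration):

    static void test_null_param(void)
    {
        /* A NULL argument should be rejected with the documented
         * error code rather than crashing or silently succeeding.
         */
        zassert_equal(my_api_init(NULL), -EINVAL,
                      "NULL argument was not rejected");
    }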
Signed-off-by: Jian Kang <jianx.kang@intel.com>
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.
Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Until #31333 is resolved, the periodic timer in the eviction
algorithm interacts with this test in such a way that the system
deadlocks.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.
The backing store now always reserves a free storage location
for actual page faults.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
More to be added, but for now show that we can map more
anonymous memory than we physically have, and that reading/
writing to it works as expected.
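A minimal sketch of what the test exercises:

    /* Map anonymous, zero-filled, writable memory and touch every
     * byte; demand paging lets the mapping exceed physical memory.
     */
    size_t size = 16 * CONFIG_MMU_PAGE_SIZE;
    uint8_t *buf = k_mem_map(size, K_MEM_PERM_RW);

    zassert_not_null(buf, "k_mem_map() failed");
    for (size_t i = 0; i < size; i++) {
        buf[i] = (uint8_t)i;
        zassert_equal(buf[i], (uint8_t)i, "readback mismatch");
    }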
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Show we can measure free memory properly and map a page of
anonymous memory, which has been zeroed and is writable.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add a conf file to make sure the kernel will use the simple
linked-list ready queue as its scheduling algorithm. This increases
the module's testcase coverage so that the z_priq_dumb_add() and
z_priq_dumb_remove() functions are called.
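A minimal sketch of the conf file contents (assuming the SCHED_DUMB
choice is what selects the linked-list ready queue):

    # Use the simple linked-list ready queue implementation.
    CONFIG_SCHED_DUMB=y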
Signed-off-by: Ying ming <mingx.ying@intel.com>
This reverts commit b98058ecd0.
With icount finally working in QEMU for ARC these tests start to
pass reliably, so no need to exclude them any longer.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
This reverts commit 6f4f5b1fe5.
With icount finally working in QEMU for ARC these tests start to
pass reliably, so no need to exclude them any longer.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
This reverts commit 27d42f060d.
With icount finally working in QEMU for ARC these tests start to
pass reliably, so no need to exclude them any longer.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Increase the heap memory pool size in prj_armv8m_mpu_stack_guard.conf
to match the value in the default configuration in prj.conf (and fix
an out-of-memory issue when allocating a kernel object).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Change the subsystem to use struct pm_state with a substate-id instead of
using only the power state category.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Migrate the whole pm subsystem to use the new power state information
from power_state.h and get states and residency properties from
device tree.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Upcoming changes from Andrew that add a global timeout to the kernel
broke because of some voodoo behavior in the kernel/context test. It
will use arch_irq_disable() on the timer interrupt directly to prevent
interrupts and measure timekeeping in their absence. But some
architectures[1] don't reliably deliver interrupts that arrive while
masked this way, which means that a timeout running across this
period will result in a corrupt timeout queue.
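A minimal sketch of what the test does (the IRQ number macro is
illustrative):

    /* Mask the timer interrupt at the architecture level, measure
     * timekeeping with no ticks arriving, then unmask.  No kernel
     * timeout may be pending across this window.
     */
    arch_irq_disable(TIMER_IRQ);
    /* ... sample k_cycle_get_32() in a busy loop ... */
    arch_irq_enable(TIMER_IRQ);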
Document that rule for architectures, move the offending test to the
end of the test suite (to minimize the chance of interacting with
other test code) and put a giant warning about the situation on it.
Long term, we may want to rework this test to do its job in other
ways.
[1] On x86, the interrupt disable happens at the IO-APIC level, while
interrupt latching and delivery are downstream in each CPU's Local
APIC. An IO-APIC masked interrupt is completely invisible to the APIC
and can never be delivered once the line goes low.
Fixes #31333
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>