Split ARM and ARM64 architectures.
Details:
- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
(arch/arm64 and include/arch/arm64)
- AArch64 boards and SoCs are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved into the
boards/bcm_vk/viper directory
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
We are setting CONFIG_GEN_PRIV_STACKS even though AArch64 actually
uses a statically allocated privileged stack.
This error was not captured by the tests because we only verify
whether a read/write to a privileged stack fails, but it can fail for
many reasons, including when the pointer to the privileged stack is
not initialized at all, as in this case.
With this patch we deselect CONFIG_GEN_PRIV_STACKS and we fix the
mem_protect/userspace test to correctly probe the privileged stack.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Add some error test cases for the userspace part of the memory
protection module. This increases the test code coverage.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
This reverts commit 9de70a78fe.
The tests have been updated so there is no need to skip tests
when the kernel is linked in virtual address space.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Replace *_thread_resource_pool_assign() in the reference with the new
k_thread_heap_assign() since both k_thread_resource_pool_assign() and
z_thread_resource_pool_assign() have been removed prior to v2.5 (by
commits c770cab1a3 and 3c2c1d85b0 respectively) along with the
k_mem_pool API removal.
For the resource pool inheritance test, the variables containing the
"res_pool" string have been renamed to "heap_mem" to align with the
documentation fix. No functionality has been changed.
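Below is a minimal usage sketch of the API the documentation now
refers to; the heap name, size, and helper are illustrative, not
taken from the test:

    #include <zephyr.h>

    K_HEAP_DEFINE(heap_mem, 1024);

    /* Make `child` satisfy its kernel-side allocations from heap_mem
     * instead of the system heap.
     */
    void assign_heap(k_tid_t child)
    {
            k_thread_heap_assign(child, &heap_mem);
    }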
Signed-off-by: Yasushi SHOJI <yashi@spacecubics.com>
This is the same problem as seen in #32053. Refer to that issue for
the details; this patch proposes a similar fix.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
When the kernel is linked in virtual address space, the data
structures needed by z_phys_map() no longer point to physical
addresses (which are required for z_phys_map() to work). So skip
these tests if CONFIG_KERNEL_LINK_IN_VIRT=y.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When the kernel is TICKLESS, timeouts are set as needed, and drivers
all have some minimum amount of time below which they cannot
reliably schedule an interrupt. When asked for less, drivers will
kick the requested interrupt out by one tick. This means that it's not
reliably possible to get a timeout set for "one tick in the
future"[1].
And attempting to do that is dangerous anyway. If the driver will
delay a one-tick interrupt, then code that repeatedly tries to
schedule an imminent interrupt may end up in a state where it is
constantly pushing the interrupt out into the future, and timer
interrupts stop arriving! The timeout layer actually has protection
against this case.
Finally getting to the point: in recent changes, the timeslice layer
lost its integration with the "imminent" test in the timeout code, so
it's now able to run into this situation: very rapidly context
switching code (or rapidly arriving interrupts) will have the effect
of infinitely[2] delaying timeouts and stalling the whole timeout
subsystem.
Don't try to be fancy. Just clamp timeslice duration such that a
slice is 2 ticks at minimum and we'll never hit the problem. Adjust
the two tests that were explicitly requesting very short slice rates.
[1] Of course, the tradeoff is that the tick rate can be 100x higher
or more, so on balance tickless is a huge win.
[2] Actually it only lasts until a 31 bit signed rollover in the HPET
cycle count in practice.
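A sketch of the clamping idea (the helper and constant names are
made up, not the actual kernel code):

    #include <zephyr.h>

    #define MIN_SLICE_TICKS 2

    /* Never arm a slice shorter than two ticks, so the slice timeout
     * can always be scheduled reliably without the timer driver
     * pushing it out.
     */
    static int32_t clamp_slice(int32_t slice_ticks)
    {
            return MAX(slice_ticks, MIN_SLICE_TICKS);
    }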
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The inheritance test first creates a parent thread, and then
creates a child thread inside this parent thread. At the same
time, inside the main thread, a resource pool is assigned to
the parent thread. However, under SMP, multiple threads run
concurrently, so the pool assignment may happen while the
parent and child threads are already reading their pool
pointers. The later pointer comparison then finds a mismatch
(for example, the parent holds a NULL pool pointer while the
child points to the actual pool), and the test fails. Fix this
by delaying the parent from running until after the pool
assignment is done.
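One way to express the required ordering (illustrative names, not
the literal test code): create the parent suspended, do the heap
assignment, then start it.

    #include <zephyr.h>

    #define STACK_SIZE 1024
    #define PRIO 5

    K_HEAP_DEFINE(heap_mem, 1024);
    K_THREAD_STACK_DEFINE(parent_stack, STACK_SIZE);
    static struct k_thread parent_thread;

    static void parent_entry(void *p1, void *p2, void *p3)
    {
            ARG_UNUSED(p1);
            ARG_UNUSED(p2);
            ARG_UNUSED(p3);
            /* Reads the assigned heap pointer and spawns the child. */
    }

    void start_parent(void)
    {
            k_tid_t tid = k_thread_create(&parent_thread, parent_stack,
                                          K_THREAD_STACK_SIZEOF(parent_stack),
                                          parent_entry, NULL, NULL, NULL,
                                          PRIO, 0, K_FOREVER);

            /* Assign the heap while the parent cannot run yet ... */
            k_thread_heap_assign(tid, &heap_mem);
            /* ... and only then let it start. */
            k_thread_start(tid);
    }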
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When coverage is enabled on x86_64, GCC uses relative addressing
to increment the gcov counters. The generated code of the test
function assumes execution is in the same location where
the linker places the test function. This does not work with
the execution test as it copies the function into another part
of memory and tries to execute from there. When the copied
function starts to run, the instruction pointer is at the newly
copied function. So any addressing relative to the instruction
pointer is now invalid. Instead of accessing the gcov counter at
<generated code RIP + offset> as it should, the copied code tries
to access it at <copied code RIP + offset>, which points to an
incorrect memory location (and possibly invalid/non-mapped memory).
To fix this, we need to tell GCC not to use relative addressing.
This can be accomplished by telling GCC to use the large memory
model. This is only used for this test as this option increases
code size quite a bit, and should not be used in general.
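A rough illustration of the failure mode (illustrative code, not the
test source; the actual fix is a compiler option selecting the large
memory model, not a source change):

    #include <stdint.h>
    #include <string.h>

    /* With coverage enabled, GCC instruments this function with
     * something like "addq $1, counter(%rip)".
     */
    static void instrumented(void) { }

    void run_from_copy(uint8_t *exec_buf, size_t size)
    {
            /* Copy the machine code elsewhere and run the copy: the
             * RIP-relative counter address now resolves against the
             * copy's location and lands in unrelated (possibly
             * unmapped) memory.
             */
            memcpy(exec_buf, (void *)instrumented, size);
            ((void (*)(void))exec_buf)();
    }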
Fixes #30434
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This puts the transplanted_function into its own section so that
z_phys_map() can correctly map the whole range of memory used
by the function, in case someone decides to expand the function
to be bigger than an MMU page.
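A minimal sketch of the idea (the section name is an assumption):
giving the function its own section lets the linker bound its full
extent, so the test can map the whole range with z_phys_map().

    #include <stdbool.h>

    __attribute__((__section__(".transplanted_func")))
    static void transplanted_function(bool *executed)
    {
            *executed = true;
    }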
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The test_mem_domain_migration test creates a new thread whose
priority depends on whether SMP is enabled. This causes an issue
with SMP=y and MP_NUM_CPUS=1, where spin_entry() would spin forever
(with k_busy_wait()) without ever yielding, since it runs at
cooperative priority. Fix this by using MP_NUM_CPUS to figure out
which priority to use, as SMP=y with MP_NUM_CPUS=1 is a valid
configuration.
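A sketch of the priority selection (the macro name and priority
level are illustrative):

    #if CONFIG_MP_NUM_CPUS > 1
    /* Another CPU is available to run the rest of the test, so the
     * spinner may use a cooperative priority.
     */
    #define SPIN_PRIO K_PRIO_COOP(5)
    #else
    /* Only one CPU: the spinner must be preemptible or it starves
     * the test, since k_busy_wait() never yields.
     */
    #define SPIN_PRIO K_PRIO_PREEMPT(5)
    #endif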
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.
Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.
The backing store now always reserves a free storage location
for actual page faults.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
More to be added, but for now show that we can map more
anonymous memory than we physically have, and that reading/
writing to it works as expected.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Show we can measure free memory properly and map a page of
anonymous memory, which has been zeroed and is writable.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Skip the memory mapping execution test case when code coverage is
enabled for the qemu_x86_64 platform. See issue #30434.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
POSIX mmap(), which might confuse people.
The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.
Parameter names to z_phys_map adjusted slightly to be more
consistent with names used in other memory mapping functions.
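A minimal usage sketch under the renamed API; the physical address,
region size, and header location are assumptions, not taken from the
test:

    #include <zephyr.h>
    #include <sys/mem_manage.h>

    #define DEVICE_PHYS_ADDR   0xFE000000UL  /* placeholder */
    #define DEVICE_REGION_SIZE 0x1000        /* placeholder */

    void map_device_region(void)
    {
            uint8_t *virt;

            /* Map the physical region and get back a pointer the
             * kernel can use.
             */
            z_phys_map(&virt, DEVICE_PHYS_ADDR, DEVICE_REGION_SIZE,
                       K_MEM_PERM_RW | K_MEM_CACHE_NONE);
            /* virt now points at the mapped region. */
    }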
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Modify the test .yaml file to allow the .gap_filling test
variant to execute only on Cortex-M33 platforms with a
sufficient number of MPU regions. The configuration is
copy-pasted from the mem_protect/userspace test.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Skip the test_disable_mmu_mpu test case for
Cortex-M non-secure builds, since the test
may trigger a BusFault, which is not banked
between security states, and the system
may hang.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Skip the scenario of accessing a faulty address
in test_string_nlen for Non-Secure Zephyr builds,
because accessing faulty addresses in this case
triggers a SecureFault that may hang the system
completely.
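A sketch of the guard; the Kconfig symbol is an assumption, and the
real change only skips the faulty-address scenario rather than the
whole test:

    #include <ztest.h>

    static void test_string_nlen(void)
    {
    #if defined(CONFIG_ARM_NONSECURE_FIRMWARE)
            /* The faulty-address probe would raise a SecureFault that
             * the Non-Secure image cannot recover from, so skip it.
             */
            ztest_test_skip();
            return;
    #endif
            /* ... faulty-address scenario runs on other configs ... */
    }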
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Use the core k_heap API pervasively within our tree instead of the
z_mem_pool wrapper that provided compatibility with the older mempool
implementation.
Almost all of this is straightforward swapping of one alloc/free call
for another. In a few cases where code was holding onto an old-style
"mem_block" a local compatibility struct with a single field has been
swapped in to keep the invasiveness of the changes down.
Note that not all the relevant changes in this patch have in-tree test
coverage, though I validated that it all builds.
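A sketch of the typical mechanical change in this patch; the heap
name and sizes are illustrative:

    #include <zephyr.h>

    K_HEAP_DEFINE(drv_heap, 2048);  /* replaces the old mem_pool define */

    void use_heap(void)
    {
            /* Was a z_mem_pool_alloc()/z_mem_pool_free() pair on a
             * mem_block; now a plain pointer alloc/free on the k_heap.
             */
            void *buf = k_heap_alloc(&drv_heap, 256, K_NO_WAIT);

            if (buf != NULL) {
                    k_heap_free(&drv_heap, buf);
            }
    }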
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>