This adds data cache manipulations, flushing and invalidation,
to the tests where buffer contents are written and compared.
These tests map different virtual pages to the same physical
pages and write through one of the mapped virtual addresses.
Some SoCs may cache each virtual address separately, so a write
through one virtual address is not reflected at the other,
thus failing the comparison. So we need to manually flush
the cache after writing to the buffer and invalidate the cache
before reading. Note that not all reads and writes need this
treatment, as some of them only test for access permissions,
not memory content.
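A minimal sketch of the pattern (names are illustrative; this
assumes the sys_cache API from <zephyr/cache.h>):

#include <string.h>

#include <zephyr/cache.h>
#include <zephyr/ztest.h>

/* virt_a and virt_b are hypothetical aliases of the same
 * physical page, as set up by the tests in question.
 */
static void check_aliased_write(uint8_t *virt_a, uint8_t *virt_b)
{
	memset(virt_a, 0xaa, CONFIG_MMU_PAGE_SIZE);

	/* Push the write out through virt_a's cache lines... */
	sys_cache_data_flush_range(virt_a, CONFIG_MMU_PAGE_SIZE);

	/* ...and drop any stale lines cached for virt_b before
	 * reading it back.
	 */
	sys_cache_data_invd_range(virt_b, CONFIG_MMU_PAGE_SIZE);

	zassert_equal(virt_b[0], 0xaa, "aliased write not visible");
}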
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
For CPUs with incoherent caches under SMP, the tests that
read/write a buffer (the majority of tests here) may not work
correctly if the test thread jumps between CPUs. So use the
test infrastructure to limit the test to one CPU.
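With the ztest API this can look like the following sketch
(assuming the stock ztest_simple_1cpu_before/after hooks):

#include <zephyr/ztest.h>

/* Pin every test in the suite to a single CPU so buffer
 * contents are not written and compared across incoherent
 * caches.
 */
ZTEST_SUITE(mem_map, NULL, NULL,
	    ztest_simple_1cpu_before, ztest_simple_1cpu_after, NULL);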
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
code out of the Z_ namespace and into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management functions
out of the z_ namespace and into their own namespace. Also
make documentation available via doxygen.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This renames z_phys_map() and z_phys_unmap() to
k_mem_map_phys_bare() and k_mem_unmap_phys_bare()
respectively. This is part of the series to move memory
management functions away from the z_ namespace.
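Call sites change in name only; the signature is unchanged.
A sketch (header location per the kernel-internal header split;
phys_addr and size are placeholders):

#include <zephyr/kernel.h>
#include <zephyr/kernel/internal/mm.h>

static uint8_t *map_phys_region(uintptr_t phys_addr, size_t size)
{
	uint8_t *virt;

	/* Formerly: z_phys_map(&virt, phys_addr, size, K_MEM_PERM_RW); */
	k_mem_map_phys_bare(&virt, phys_addr, size, K_MEM_PERM_RW);

	return virt;
}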
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Make `struct arch_esf` compulsory for all architectures by
declaring it in the `arch_interface.h` header.
After this commit, the named struct `z_arch_esf_t` is only used
internally to generate offsets, and is slated to be removed
from the `arch_interface.h` header in the future.
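For example, a generic fatal error hook can now spell the type
the same way on every architecture (sketch):

#include <zephyr/fatal.h>
#include <zephyr/kernel.h>

void k_sys_fatal_error_handler(unsigned int reason,
			       const struct arch_esf *esf)
{
	ARG_UNUSED(esf);

	printk("fatal error, reason %u\n", reason);
	k_fatal_halt(reason);
}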
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Use TC_END_REPORT() to report "PROJECT EXECUTION FAILED"
instead of a call to printk() for ZTEST-enabled tests, as
this prints the failure message as well as the RunID (and
whatever will be added in the future).
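Sketch of the change:

#include <zephyr/fatal.h>
#include <zephyr/tc_util.h>

static void report_failure_and_halt(unsigned int reason)
{
	/* Formerly: printk("PROJECT EXECUTION FAILED\n"); */
	TC_END_REPORT(TC_FAIL); /* prints the banner plus the RunID */
	k_fatal_halt(reason);
}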
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This test needs to include zephyr/kernel/mm/demand_paging.h, not
zephyr/kernel/mm.h, due to its use of k_mem_pin().
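i.e.:

/* k_mem_pin() is declared here, not in <zephyr/kernel/mm.h>: */
#include <zephyr/kernel/mm/demand_paging.h>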
Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.
Also made a quick change to doxygen to group sys_mem_* into
the OS Memory Management group so they appear in the docs.
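For users, the visible change is the include path (sketch,
paths as described above):

/* Formerly: #include <zephyr/sys/mem_manage.h> */
#include <zephyr/kernel/mm.h>          /* public k_mem_* APIs */
#include <zephyr/kernel/internal/mm.h> /* kernel-internal z_* */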
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
For some kernel tests, faults and exceptions are expected.
They are caught, and the test continues if the reasons for
the faults are as expected. However, when an unexpected
reason is encountered, the code simply prints a message
and calls k_fatal_halt(). When running under twister,
these messages are not the expected failure messages, so
twister will spin until timeout even though execution has
already halted. This adds another printk() before the halt
to signal twister that the test has failed, so it can bail
out early.
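Sketch of the added signal, inside the test's fatal error hook:

#include <zephyr/fatal.h>
#include <zephyr/sys/printk.h>

static void handle_unexpected_reason(unsigned int reason)
{
	printk("Unexpected fault reason %u\n", reason);

	/* Emit the string twister watches for, so it stops
	 * waiting instead of spinning until its timeout expires.
	 */
	printk("PROJECT EXECUTION FAILED\n");
	k_fatal_halt(reason);
}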
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a test to see if z_phys_unmap() can reclaim memory
correctly, so that the next z_phys_map() call with identical
input arguments reuses the same virtual address.
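The check itself is small (sketch; phys_addr and size are
placeholders, include path shown as of the <zephyr/...> prefix
migration):

#include <zephyr/sys/mem_manage.h>
#include <zephyr/ztest.h>

static void check_unmap_reclaims(uintptr_t phys_addr, size_t size)
{
	uint8_t *first, *second;

	z_phys_map(&first, phys_addr, size, K_MEM_PERM_RW);
	z_phys_unmap(first, size);

	/* With identical arguments, the freed virtual range
	 * should be handed back out by the next mapping.
	 */
	z_phys_map(&second, phys_addr, size, K_MEM_PERM_RW);
	zassert_equal(first, second, "unmap did not reclaim region");
	z_phys_unmap(second, size);
}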
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
After writing to mapped_rw, we should also check that the
backing buffer has the correct data. Otherwise, on systems
that need explicit cache controls, we could have a situation
where the newly updated mapped_rw is cached while the backing
buffer still contains old data. Comparing the backing buffer
to mapped_ro does not really help in that case, as their
contents would certainly match.
Also, this moves the mapping of mapped_ro earlier so that
both mapped_rw and mapped_ro are mapped before any data
manipulation, and so that we can verify the values of both
the backing and mapped buffers.
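A sketch of the strengthened check (names as in the test;
cache API assumed as in the cache commit above):

#include <string.h>

#include <zephyr/cache.h>
#include <zephyr/ztest.h>

static void check_backing_buffer(uint8_t *mapped_rw,
				 uint8_t *backing, size_t size)
{
	memset(mapped_rw, 0x5a, size);
	sys_cache_data_flush_range(mapped_rw, size);

	/* The write must have reached the backing buffer itself,
	 * not just cache lines of the mapped_rw alias.
	 */
	sys_cache_data_invd_range(backing, size);
	zassert_equal(backing[0], 0x5a, "backing buffer has stale data");
}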
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The test_page buffer assumes an MMU page size of 4KB, giving
an 8KB buffer for the read/write test. However, the page size
may not be 4KB on all architectures. We need to make sure
the test buffer is large enough for the read/write test.
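One way to size it without the 4KB assumption (sketch):

#include <zephyr/kernel.h>

/* Two pages of backing store regardless of the architecture's
 * MMU page size, aligned so it can be mapped page by page.
 */
static uint8_t __aligned(CONFIG_MMU_PAGE_SIZE)
	test_page[2 * CONFIG_MMU_PAGE_SIZE];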
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a bunch of missing "zephyr/" prefixes to #include statements in
various test and test framework files.
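For example:

/* Formerly: #include <ztest.h> */
#include <zephyr/ztest.h>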
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
The test currently exercises all the memory mapping parameters
but K_MEM_PERM_USER. Add a test case to cover that as well.
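Sketch of the added case (phys_addr and size are placeholders
for the mapped region; include path as of the <zephyr/...>
prefix migration):

#include <zephyr/sys/mem_manage.h>
#include <zephyr/ztest.h>

static void check_user_mapping(uintptr_t phys_addr, size_t size)
{
	uint8_t *mapped;

	/* Same mapping as the read/write case, but also
	 * accessible from user mode.
	 */
	z_phys_map(&mapped, phys_addr, size,
		   K_MEM_PERM_RW | K_MEM_PERM_USER);

	/* ... hand `mapped` to a user thread and verify access ... */
}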
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
In order to bring consistency in-tree, migrate all tests to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to #45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This pins test_page in memory for the tests about memory
mapping. This simply makes sure the whole array is in
physical memory for mapping, or else the mapping function
would fail due to having nothing to map.
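Sketch of the setup step (header per the current tree):

#include <zephyr/kernel.h>
#include <zephyr/kernel/mm/demand_paging.h>

/* test_page is the test's statically allocated backing array. */
static uint8_t test_page[2 * CONFIG_MMU_PAGE_SIZE];

static void *mem_map_env_setup(void)
{
	/* Make sure the whole array is resident before the tests
	 * map it; a paged-out page leaves nothing to map.
	 */
	k_mem_pin(test_page, sizeof(test_page));

	return NULL;
}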
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The variable cnt is assigned twice in a row, so remove
the first one.
Coverity-CID: 235962
Fixes #35161
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a test for z_phys_unmap() to make sure that memory
can be unmapped and is no longer accessible.
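Sketch of the access check (assuming the ztest fatal error
hook so the coming fault can be marked as expected):

#include <zephyr/sys/mem_manage.h>
#include <zephyr/ztest.h>
#include <zephyr/ztest_error_hook.h>

static void check_no_longer_accessible(uint8_t *mapped, size_t size)
{
	z_phys_unmap(mapped, size);

	/* The next access should fault; tell ztest it is expected. */
	ztest_set_fault_valid(true);
	(void)*(volatile uint8_t *)mapped; /* must not return */

	ztest_test_fail(); /* reached only if the unmap did not take */
}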
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This reverts commit 9de70a78fe.
The tests have been updated so there is no need to skip tests
when the kernel is linked in virtual address space.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is the same problem as seen in #32053. Refer to that
issue for the details; a similar fix is applied here.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
When the kernel links in virtual address space, the data
structures needed by z_phys_map() no longer point to physical
addresses (which are required for z_phys_map() to work). So
skip these tests if CONFIG_KERNEL_LINK_IN_VIRT=y.
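Sketch of the guard at the top of each affected test:

#include <zephyr/ztest.h>

static void test_z_phys_map(void)
{
	if (IS_ENABLED(CONFIG_KERNEL_LINK_IN_VIRT)) {
		/* The structures consulted below hold virtual
		 * addresses here, so z_phys_map() cannot be
		 * handed physical ones.
		 */
		ztest_test_skip();
		return;
	}

	/* ... the actual mapping checks ... */
}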
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When coverage is enabled on x86_64, GCC uses relative
addressing to increment the gcov counters. The generated code
of the test function assumes it executes at the location where
the linker placed the function. This does not work with the
execution test, as it copies the function into another part
of memory and executes it from there. When the copied function
starts to run, the instruction pointer points into the newly
copied function, so any addressing relative to the instruction
pointer is now invalid. Instead of accessing the gcov counter
at <generated code RIP + offset> as it should, the copied code
accesses it at <copied code RIP + offset>, which points to an
incorrect (and possibly invalid/unmapped) memory location.
To fix this, we need to tell GCC not to use relative
addressing. This can be accomplished by telling GCC to use
the large memory model. This is only done for this test, as
the option increases code size quite a bit and should not
be used in general.
Fixes#30434
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This puts the transplanted_function into its own section so
that z_phys_map() can correctly map the whole range of memory
used by the function, in case someone decides to expand the
function beyond one MMU page.
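Sketch (GCC section attribute; the section name is illustrative
and matched by a corresponding linker script snippet):

/* Keep the function in one contiguous, known section so the
 * test can compute its size and map the full range, even if
 * it later grows past one MMU page.
 */
static void __attribute__((__section__(".transplant_fn")))
transplanted_function(void)
{
	/* ... body copied elsewhere and executed from there ... */
}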
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Show we can measure free memory properly and map a page of
anonymous memory, which has been zeroed and is writable.
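Sketch of the core assertions (assuming one anonymous page via
k_mem_map(); header locations have moved over time):

#include <zephyr/kernel.h>
#include <zephyr/ztest.h>

static void check_anon_page(void)
{
	uint8_t *page = k_mem_map(CONFIG_MMU_PAGE_SIZE, K_MEM_PERM_RW);

	zassert_not_null(page, "anonymous mapping failed");

	/* Fresh anonymous memory must arrive zeroed... */
	zassert_equal(page[0], 0, "page not zeroed");

	/* ...and be writable. */
	page[0] = 0x42;
	zassert_equal(page[0], 0x42, "page not writable");
}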
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Skip the memory mapping execution test case when code coverage
is enabled for the qemu_x86_64 platform. See issue #30434.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to POSIX
mmap(), which might confuse people.
The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.
Parameter names to z_phys_map were adjusted slightly to be
more consistent with names used in other memory mapping
functions.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Memory mapping, for now, will be a private kernel API
and is not intended to be application-facing at this time.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.
We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.
CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, with a fixed size
specified by CONFIG_KERNEL_VM_SIZE.
Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>