Let eviction algorithms be notified when a given page frame:
- should be considered as a possible candidate
- should no longer be considered as a candidate
- has just been marked as "accessed"
The NRU algorithm is unchanged, so it implements those as empty stubs,
as sketched below.
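A minimal sketch of what such empty stubs can look like; the hook names
and signatures here are assumptions for illustration, not necessarily
the exact in-tree API:

void k_mem_paging_eviction_add(struct k_mem_page_frame *pf)
{
	/* NRU keeps no explicit candidate list */
	ARG_UNUSED(pf);
}

void k_mem_paging_eviction_remove(struct k_mem_page_frame *pf)
{
	ARG_UNUSED(pf);
}

void k_mem_paging_eviction_accessed(uintptr_t phys)
{
	/* NRU samples "accessed" bits periodically instead */
	ARG_UNUSED(phys);
}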
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
If a page is paged out, or paged in but inaccessible for the purpose of
tracking the "accessed" flag, then k_mem_unmap() may fail. Add the code
needed to support those cases.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Also any demand paging and page frame related bits are
renamed.
This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Renames:
Z_KERNEL_VIRT_START to K_MEM_KERNEL_VIRT_START
Z_KERNEL_VIRT_SIZE to K_MEM_KERNEL_VIRT_SIZE
Z_KERNEL_VIRT_END to K_MEM_KERNEL_VIRT_END
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Renames:
Z_VIRT_RAM_START to K_MEM_VIRT_RAM_START
Z_VIRT_RAM_SIZE to K_MEM_VIRT_RAM_SIZE
Z_VIRT_RAM_END to K_MEM_VIRT_RAM_END
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Renames:
Z_PHYS_RAM_START to K_MEM_PHYS_RAM_START
Z_PHYS_RAM_SIZE to K_MEM_PHYS_RAM_SIZE
Z_PHYS_RAM_END to K_MEM_PHYS_RAM_END
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Rename Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
K_MEM_BOOT_VIRT_TO_PHYS() and K_MEM_BOOT_PHYS_TO_VIRT()
respectively. This is part of a series to move memory management
functions away from the Z_ namespace and into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This renames z_phys_map() and z_phys_unmap() to
k_mem_map_phys_bare() and k_mem_unmap_phys_bare()
respectively. This is part of the series to move memory
management functions away from the z_ namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The internal functions k_mem_map_impl() and k_mem_unmap_impl()
are renamed to k_mem_map_phys_guard() and
k_mem_unmap_phys_guard() respectively to better clarify
their usage.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The struct z_page_frame is marked __packed to avoid extra padding, as
such padding may represent significant memory waste when lots of page
frames are used. However, this is a bad strategy.
The code contained this somewhat dubious comment and code in
free_page_frame_list_put():
/* The structure is packed, which ensures that this is true */
void *node = pf;
sys_slist_append(&free_page_frame_list, node);
This is bad for many reasons:
- type checking is completely bypassed;
- if the sys_snode_t node member is no longer located at the front of
struct z_page_frame then the code will still compile and possibly run
but be broken with memory corruption as a likely outcome;
- the sys_slist_append() code is completely unaware of the packed
attribute which breaks architectures with alignment restrictions.
Let's improve code efficiency as well as memory usage by removing the
packed attribute and manually packing the flags in the unused virtual
address bits. This way the page frame array remains naturally aligned,
data access becomes optimal and the actual array size gets even smaller.
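A rough sketch of the idea (the member and macro names below are
hypothetical, for illustration only): since mapped virtual addresses
are page aligned, the low bits of the address are always zero and can
carry the flags, so no separate flags member and no packing are needed:

#define PF_FLAGS_MASK ((uintptr_t)CONFIG_MMU_PAGE_SIZE - 1)

static inline void *pf_to_virt(const struct z_page_frame *pf)
{
	return (void *)(pf->va_and_flags & ~PF_FLAGS_MASK);
}

static inline void pf_flag_set(struct z_page_frame *pf, uintptr_t flag)
{
	/* flag must fit within the low PF_FLAGS_MASK bits */
	pf->va_and_flags |= flag;
}

static inline void pf_flag_clear(struct z_page_frame *pf, uintptr_t flag)
{
	pf->va_and_flags &= ~flag;
}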
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.
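Hypothetical usage, assuming the existing Z_PAGE_FRAME_PINNED flag:

/* mark a frame pinned via the accessors, not the flags member */
z_page_frame_set(pf, Z_PAGE_FRAME_PINNED);
void *va = z_page_frame_to_virt(pf);
z_page_frame_clear(pf, Z_PAGE_FRAME_PINNED);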
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Spell checking tools do not recognize "iff"; replace it with "if and only if".
See https://en.wikipedia.org/wiki/If_and_only_if
Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
This is similar to k_mem_map()/k_mem_unmap(). But instead of using
anonymous memory, the provided physical region is mapped
into the virtual address space. In addition to simply mapping
physical to virtual addresses, the mapping also adds two
guard pages, one before and one after the virtual region, to catch
buffer under-/over-flows.
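A hedged usage sketch, using the k_mem_map_phys_guard() and
k_mem_unmap_phys_guard() names from the rename above; the exact
signatures here are assumptions:

uintptr_t phys = 0x40000000UL; /* hypothetical physical base */
size_t size = 4 * CONFIG_MMU_PAGE_SIZE;

void *virt = k_mem_map_phys_guard(phys, size, K_MEM_PERM_RW, false);
if (virt != NULL) {
	/* [virt, virt + size) is usable; the pages just before and
	 * after it stay unmapped so under-/over-flows fault at once.
	 */
	k_mem_unmap_phys_guard(virt, size, false);
}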
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a closing comment to each endif naming the configuration
option to which the endif belongs,
to make the code clearer when the configs need adapting.
Signed-off-by: Simon Hein <Shein@baumer.com>
k_mem_map() additionally allocates two guard pages that are not mapped.
These pages are not accounted for when checking the provided size, so
when they are added an overflow can happen and the mapped memory is not
correct.
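A sketch of the kind of check that fixes this, accounting for the guard
pages with an overflow-checked addition (size_add_overflow() is from
<zephyr/sys/math_extras.h>):

size_t total_size;

if (size_add_overflow(size, 2 * CONFIG_MMU_PAGE_SIZE, &total_size)) {
	return NULL; /* size + guard pages would wrap around */
}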
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
z_free_page_count is only used in one file, so there is
no need to expose it, even to other parts of the kernel.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Instead of performing a set of relative address comparisons using
pointers of type 'uint8_t *', we leverage the existing IN_RANGE()
macro and perform the comparisons with 'uintptr_t'.
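For illustration, the check becomes something like this (the helper
name is hypothetical; IN_RANGE() is inclusive on both ends, hence
the -1):

bool is_in_virt_ram(void *ptr)
{
	uintptr_t addr = (uintptr_t)ptr;

	return IN_RANGE(addr, (uintptr_t)Z_VIRT_RAM_START,
			(uintptr_t)Z_VIRT_RAM_END - 1);
}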
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This moves the inclusion of demand_paging.h out of kernel/mm.h,
so that users of demand paging APIs must include the header
explicitly. Since the main user is the kernel itself, we can be
more disciplined about header inclusion.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Move the syscall_handler.h header, which is used internally only, to a
dedicated internal folder that should not be used outside of Zephyr.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
virt_page_phys_get() can be called with the phys parameter NULL when
the intention is just to check whether a virtual address is mapped.
This function is generally overridden by an arch API that checks if
phys is NULL before using it, but this default implementation doesn't.
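A sketch of the fix for the default implementation (lookup_phys() is a
hypothetical helper standing in for the actual translation lookup):

int virt_page_phys_get(void *virt, uintptr_t *phys)
{
	uintptr_t pa;
	int ret = lookup_phys(virt, &pa);

	if ((ret == 0) && (phys != NULL)) {
		/* only dereference when the caller wants the address */
		*phys = pa;
	}

	return ret;
}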
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
When CONFIG_KERNEL_DIRECT_MAP is enabled, the region to be mapped
or unmapped can be outside of the virtual memory space, wholly
within it, or overlap it partially. Additional processing is
needed to make sure we only manipulate the bits within
the bitmap, in other words, only the pages represented by
the bitmap.
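A sketch of the clamping, reusing the Z_VIRT_RAM_START/Z_VIRT_RAM_END
macros mentioned elsewhere in this series (the exact logic here is
illustrative):

uintptr_t start = MAX(POINTER_TO_UINT(virt),
		      (uintptr_t)Z_VIRT_RAM_START);
uintptr_t end = MIN(POINTER_TO_UINT(virt) + size,
		    (uintptr_t)Z_VIRT_RAM_END);

if (start < end) {
	/* touch only the bitmap bits for pages in [start, end) */
}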
Fixes #59549
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Many RTOS applications assume a 1:1 virtual-to-physical address
mapping, so add 1:1 mapping support in z_phys_map()
to make it easy to adapt these applications.
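A hedged usage sketch, assuming the K_MEM_DIRECT_MAP flag guarded by
CONFIG_KERNEL_DIRECT_MAP (the physical address is hypothetical):

uint8_t *virt;

z_phys_map(&virt, 0x40001000UL, CONFIG_MMU_PAGE_SIZE,
	   K_MEM_PERM_RW | K_MEM_DIRECT_MAP);
/* with the direct map in effect, virt == 0x40001000UL */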
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
In z_phys_unmap(), the call to virt_region_free() is not using the
aligned virtual address and size. This can result in freeing a
smaller region than was allocated, given that inputs to z_phys_unmap()
may not be aligned. So use the already calculated aligned
virtual address and size as inputs to virt_region_free().
Note that the assertion and if-block in virt_region_free(), which
check whether the to-be-unmapped region is within the virtual
memory region, need to be trimmed by one byte at the end:
they are checking against the region's end address,
but (start + size) is one byte past the end.
So subtract one.
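A sketch of the aligned computation (ROUND_DOWN()/ROUND_UP() are from
the util header; virt_region_free()'s signature is assumed here):

uintptr_t aligned_virt = ROUND_DOWN(POINTER_TO_UINT(virt),
				    CONFIG_MMU_PAGE_SIZE);
size_t aligned_size = ROUND_UP(POINTER_TO_UINT(virt) + size,
			       CONFIG_MMU_PAGE_SIZE) - aligned_virt;

virt_region_free(UINT_TO_POINTER(aligned_virt), aligned_size);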
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)
Use `bool' instead of `int' to represent Boolean values.
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.
This commit is a subset of the original commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a
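Illustrative before/after for the three transformations:

bool done = false;      /* was: int done = 0; */

do {
	/* ... */
} while (false);        /* was: while (0) */

if (count != 0) {       /* was: if (count) */
	/* ... */
}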
Signed-off-by: Simon Hein <SHein@baumer.com>
irq_lock() returns an unsigned integer key.
Generated by spatch using semantic patch
scripts/coccinelle/irq_lock.cocci
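The resulting pattern:

unsigned int key = irq_lock();

/* ... critical section ... */

irq_unlock(key);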
Signed-off-by: Johann Fischer <johann.fischer@nordicsemi.no>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.
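For example:

#include <zephyr/kernel.h> /* was: #include <kernel.h> */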
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds a LOG_DBG() line for z_phys_unmap(), which mirrors
what is in z_phys_map(). This also fixes a warning from
Clang about a variable being set but never used (addr_offset).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The following warning is triggered by GCC when
-Waddress-of-packed-member is enabled:
/home/carles/src/zephyr/zephyr/kernel/mmu.c: In function
'free_page_frame_list_put':
/home/carles/src/zephyr/zephyr/kernel/mmu.c:383:42: warning: taking
address of packed member of 'struct z_page_frame' may result in an
unaligned pointer value [-Waddress-of-packed-member]
383 | sys_slist_append(&free_page_frame_list, &pf->node);
This is due to the fact that sys_snode_t node is an unpacked structure
inside a packed z_page_frame structure, so the alignment of the
former cannot be ensured when placed inside the latter.
Given that alignment of z_page_frame is ensured by the code, silence the
compiler by going through an intermediate variable.
More info in #16587.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
The virtual region bitmap bitarray struct is only used within
the source file, so it can be declared static.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
page_frame_dump() and z_page_frames_dump() are used for
debug printing, so there is no need to include them in coverage.
The __weak function is also excluded, as every test overrides it.
Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
In the case where the aligned memory range is at the top of the
allocated memory range, freeing the zero-sized unused memory at the
top will trigger an assert in the virt_region_free() call, since vaddr
could be equal to Z_VIRT_REGION_END_ADDR.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
On ARM64 platforms, when mapping multiple memory zones with sizes
that are not a multiple of the L2 block size (2 MiB), all the
following mappings will probably use L3 tables,
and a huge mapping will consume all possible L3 tables.
In order to reduce the usage of L3 tables, this introduces a new
optional architecture-specific
arch_virt_region_align() call that can return a more optimal
virtual address alignment than the default MMU_PAGE_SIZE
(a sketch follows the list below).
This alignment is used in virt_region_alloc() by:
- requesting more pages in virt_region_bitmap to make sure we request
up to the possible aligned virtual address
- freeing the supplementary pages used for alignment
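A sketch of the optional hook with a weak default; the signature
follows the description above but is an assumption here:

__weak size_t arch_virt_region_align(uintptr_t phys, size_t size)
{
	ARG_UNUSED(phys);
	ARG_UNUSED(size);

	/* no better alignment known for this architecture */
	return CONFIG_MMU_PAGE_SIZE;
}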
Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
This adds a kconfig option to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Otherwise, it would be possible to exhaust all page frames via
anonymous memory mappings.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
During the boot process, the boot sections need to be pinned in
memory to prevent them from being paged out (to avoid
pages being paged out and immediately paged in again).
Once the boot process is completed (just before calling main()),
the boot sections can be unpinned so the memory can be
used by demand paging for paging in data pages.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If the BSS section is not present in memory at boot, it will not
have been cleared, as its data pages are not in physical memory,
and manipulating those pages would result in page faults.
In this scenario, zeroing BSS can only be done once the paging
mechanism has been initialized. So do it there.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The code at the beginning of do_page_fault() pins the page
in memory if it is already present in physical memory.
It is there so that, if a page is not present, the function can
proceed to page it in and then pin it. So the counting of
page faults needs to be moved after the pinning code so that
it actually counts page faults, and does not count pinning
operations when the page is already present.
Also clarify the comment on the goto statement, as it was not
correct.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In do_page_fault(), the incoming page fault address is not
aligned, and it was unconditionally assigned to the page
frame virtual address field. If the backing store simply
returns the virtual address without processing in
k_mem_paging_backing_store_location_get(), this unaligned
address will be passed to arch_mem_page_out(). On x86,
it is further passed to range_map() which asserts if
the physical address is not page aligned. So align
the address to the page size before assigning it to the page
frame's virtual address field.
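A sketch of the fix (the page frame's field name and type are assumed
here; ROUND_DOWN() and the pointer conversion macros are from the util
header):

pf->addr = UINT_TO_POINTER(ROUND_DOWN(POINTER_TO_UINT(addr),
				       CONFIG_MMU_PAGE_SIZE));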
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
These functions are those that need to be implemented by a backing
store outside the kernel. Promote them from z_* so they can be
included in the documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
These functions and data structures are those that need
to be implemented by an eviction algorithm and an application
outside the kernel. Promote them from z_* so they can be
included in the documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When marking the reserved region at the end of the virtual address
space, calling virt_to_bitmap_offset() is not needed as we already
know the offset. So remove the call.
Coverity-CID: 235930
Fixes #35160
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the necessary bits to linker scripts and source code
to specify which symbols need to be pinned in memory. This is
needed for demand paging, as some functions and data must reside
in memory at all times and cannot be paged out (e.g. the paging,
scheduler, and interrupt routines).
It is up to the arch/SoC/board to define the sections in
their linker scripts, as the pinned section may need special
alignment, which cannot be done in common script snippets.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This uses bitarrays for allocating and deallocating virtual
addresses with k_mem_map() and k_mem_unmap(). This will
allow us to reuse virtual addresses, as sketched below.
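A hedged sketch of the idea with the sys_bitarray API, one bit per
page (the bitmap name and size are hypothetical):

#include <zephyr/sys/bitarray.h>

SYS_BITARRAY_DEFINE(virt_region_bitmap, 1024);

size_t offset;
size_t num_pages = 4;

/* reserve a contiguous run of pages for k_mem_map() ... */
if (sys_bitarray_alloc(&virt_region_bitmap, num_pages, &offset) == 0) {
	/* ... and give them back on k_mem_unmap() for reuse */
	sys_bitarray_free(&virt_region_bitmap, num_pages, offset);
}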
Fixes#28900
Signed-off-by: Daniel Leung <daniel.leung@intel.com>