These functions are the ones that need to be implemented by the
backing store outside the kernel. Promote them from z_* so they
can be included in the documentation.
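For illustration only, a bare-bones RAM-pool backing store could look
roughly like the sketch below. The k_mem_paging_backing_store_* names and
signatures are assumptions modeled on the former z_backing_store_* hooks,
scratch_stub is a hypothetical stand-in for the kernel's scratch-page
mapping, and the header paths vary between Zephyr releases; the generated
documentation is the authority on the real prototypes.

    /* Minimal sketch of a RAM-pool backing store.  Hook names and
     * signatures are assumed from the former z_backing_store_* functions;
     * scratch_stub is a placeholder for the kernel's scratch-page mapping.
     */
    #include <zephyr/kernel.h>
    #include <errno.h>
    #include <string.h>

    #define SLOT_SIZE  CONFIG_MMU_PAGE_SIZE
    #define NUM_SLOTS  16

    static uint8_t pool[NUM_SLOTS][SLOT_SIZE] __aligned(SLOT_SIZE);
    static bool slot_used[NUM_SLOTS];
    static uint8_t scratch_stub[SLOT_SIZE];  /* placeholder only */

    struct z_page_frame;

    void k_mem_paging_backing_store_init(void)
    {
        memset(slot_used, 0, sizeof(slot_used));
    }

    int k_mem_paging_backing_store_location_get(struct z_page_frame *pf,
                                                uintptr_t *location,
                                                bool page_fault)
    {
        ARG_UNUSED(pf);
        /* The page_fault hint is ignored in this simple sketch. */
        ARG_UNUSED(page_fault);

        for (int i = 0; i < NUM_SLOTS; i++) {
            if (!slot_used[i]) {
                slot_used[i] = true;
                *location = (uintptr_t)pool[i];
                return 0;
            }
        }
        return -ENOMEM;
    }

    void k_mem_paging_backing_store_location_free(uintptr_t location)
    {
        slot_used[(location - (uintptr_t)pool[0]) / SLOT_SIZE] = false;
    }

    void k_mem_paging_backing_store_page_out(uintptr_t location)
    {
        /* A real implementation copies from the kernel's scratch mapping. */
        memcpy((void *)location, scratch_stub, SLOT_SIZE);
    }

    void k_mem_paging_backing_store_page_in(uintptr_t location)
    {
        memcpy(scratch_stub, (void *)location, SLOT_SIZE);
    }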
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
These functions and data structures are the ones that need
to be implemented by the eviction algorithm and the application
outside the kernel. Promote them from z_* so they
can be included in the documentation.
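For reference, the eviction-algorithm side of the promoted interface boils
down to hooks along the following lines. The names, the bool *dirty
out-parameter, and the contracts described in the comments are assumptions
based on the former z_eviction_* functions, not a copy of the in-tree
declarations.

    /* Assumed shape of the hooks an out-of-tree eviction algorithm
     * supplies; see the generated documentation for the authoritative
     * prototypes.
     */
    #include <zephyr/kernel.h>

    struct z_page_frame;

    /* One-time setup of the algorithm's bookkeeping, called while the
     * demand paging subsystem is being initialized.
     */
    void k_mem_paging_eviction_init(void);

    /* Choose a mapped page frame to evict and report through *dirty
     * whether its contents must be written back to the backing store
     * before the frame can be reused.
     */
    struct z_page_frame *k_mem_paging_eviction_select(bool *dirty);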
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This waits a bit for the NRU eviction algorithm (which is the default)
to work its magic and clear the access bit of physical frames.
This increases the number of clean pages that can be evicted,
making sure the number of clean pages evicted is not zero, which
would trigger an assertion failure.
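A sketch of how such a wait could be inserted into the test is below; the
helper name, the one-second figure, and the header path are illustrative
assumptions, not an excerpt of the actual test source.

    #include <zephyr/kernel.h>

    static void give_nru_time_to_sweep(void)
    {
        /* The NRU ("not recently used") algorithm clears page access bits
         * on a periodic timer.  Sleeping across a few of its periods lets
         * recently touched pages become "clean" eviction candidates, so
         * the clean-page count checked later cannot end up at zero.
         */
        k_sleep(K_SECONDS(1));
    }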
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The test itself is highly sensitive to the size of the kernel
image. When the kernel gets larger, the number of pages used by
the backing store needs to shrink, hence this change.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds bits to support using timing functions for displaying
paging histograms. Currently only qemu_x86_tiny is supported.
Also shorten the test names.
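A rough sketch of dumping one of the histograms with the timing API used
to convert cycle counts into nanoseconds is shown below. The
k_mem_paging_histogram_eviction_get() call, the bounds[]/counts[] field
names, the K_MEM_PAGING_HISTOGRAM_NUM_BINS bound, and the header paths are
assumptions here; timing_init()/timing_start()/timing_cycles_to_ns() are
the regular Zephyr timing functions.

    #include <zephyr/kernel.h>
    #include <zephyr/timing/timing.h>

    void dump_eviction_histogram(void)
    {
        struct k_mem_paging_histogram_t hist;

        /* Bring up the timing subsystem so cycle counts can be converted
         * into nanoseconds on platforms that support it.
         */
        timing_init();
        timing_start();

        k_mem_paging_histogram_eviction_get(&hist);

        for (int i = 0; i < K_MEM_PAGING_HISTOGRAM_NUM_BINS; i++) {
            printk("< %llu ns: %lu\n",
                   timing_cycles_to_ns(hist.bounds[i]),
                   hist.counts[i]);
        }
    }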
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds more bits to gather statistics on demand paging,
e.g. clean vs. dirty pages evicted, the number of page faults
with IRQs locked/unlocked, etc.
This is also extended to gather per-thread demand paging
statistics.
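As a sketch of the application-facing side, reading the statistics could
look like the following. The k_mem_paging_stats_get() and
k_mem_paging_thread_stats_get() names and the field layout shown are
assumptions about the promoted API, and the relevant Kconfig options must
be enabled.

    #include <zephyr/kernel.h>

    void report_paging_stats(struct k_thread *thread)
    {
        struct k_mem_paging_stats_t stats;

        /* System-wide counters. */
        k_mem_paging_stats_get(&stats);
        printk("faults: %lu (irq locked %lu, unlocked %lu), "
               "evicted clean %lu dirty %lu\n",
               stats.pagefaults.cnt, stats.pagefaults.irq_locked,
               stats.pagefaults.irq_unlocked,
               stats.eviction.clean, stats.eviction.dirty);

        /* Per-thread variant, gated by its own Kconfig option. */
        k_mem_paging_thread_stats_get(thread, &stats);
        printk("thread %p faults: %lu\n", thread, stats.pagefaults.cnt);
    }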
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.
The backing store now always reserves a free storage location
for actual page faults.
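The reservation policy in a backing store's location_get() hook could be
sketched as below: refuse to hand the last free slot to non-fault callers
so that a k_mem_map()/page-out storm cannot leave the fault handler with
nowhere to evict to. The function name, the page_fault parameter, and the
slot bookkeeping are assumptions for illustration, not the in-tree
implementation.

    #include <zephyr/kernel.h>
    #include <errno.h>

    #define NUM_SLOTS 16

    static bool slot_used[NUM_SLOTS];
    static unsigned int slots_free = NUM_SLOTS;

    struct z_page_frame;

    int k_mem_paging_backing_store_location_get(struct z_page_frame *pf,
                                                uintptr_t *location,
                                                bool page_fault)
    {
        ARG_UNUSED(pf);

        /* Non-fault callers (e.g. eviction driven by k_mem_map()) may not
         * take the last free slot; only a genuine page fault may use it.
         */
        if (slots_free == 0 || (!page_fault && slots_free == 1)) {
            return -ENOMEM;
        }

        for (unsigned int i = 0; i < NUM_SLOTS; i++) {
            if (!slot_used[i]) {
                slot_used[i] = true;
                slots_free--;
                *location = i;
                return 0;
            }
        }
        return -ENOMEM;
    }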
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
More to be added, but for now show that we can map more
anonymous memory than we physically have, and that reading/
writing to it works as expected.
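The rough shape of such a test is sketched below: map more anonymous
memory than there are free physical page frames and touch every page.
k_mem_map() and K_MEM_PERM_RW are the real API, but the size figure, the
test name, and the assertions are illustrative assumptions, and header
paths vary between Zephyr releases.

    #include <zephyr/kernel.h>
    #include <zephyr/ztest.h>

    void test_map_more_than_ram(void)
    {
        /* Deliberately larger than the available page frames, so paging
         * it all in forces evictions through the backing store.
         */
        size_t size = 4 * 1024 * 1024;
        uint8_t *buf = k_mem_map(size, K_MEM_PERM_RW);

        zassert_not_null(buf, "k_mem_map() failed");

        for (size_t i = 0; i < size; i++) {
            buf[i] = (uint8_t)i;
        }
        for (size_t i = 0; i < size; i++) {
            zassert_equal(buf[i], (uint8_t)i, "data mismatch");
        }
    }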
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>