Commit Graph

3049 Commits

Author SHA1 Message Date
Daniel Leung
54af5dda84 kernel: mm: rename z_page_frame_* to k_mem_page_frame_*
Any demand paging and page frame related bits are also
renamed.

This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
7715aa3341 kernel: mm: rename Z_SCRATCH_PAGE to K_MEM_SCRATCH_PAGE
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
01682756b6 kernel: mm: rename Z_VM_RESERVED to K_MEM_VM_RESERVED
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
a459cdf51e kernel: mm: rename Z_FREE_VM_START to K_MEM_VM_FREE_START
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
b2784c9145 kernel: mm: rename Z_KERNEL_VIRT_* to K_MEM_KERNEL_VIRT_*
Renames:
  Z_KERNEL_VIRT_START to K_MEM_KERNEL_VIRT_START
  Z_KERNEL_VIRT_SIZE to K_MEM_KERNEL_VIRT_SIZE
  Z_KERNEL_VIRT_END to K_MEM_KERNEL_VIRT_END

This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
03eded1ed6 kernel: mm: rename Z_VIRT_RAM_* to K_MEM_VIRT_*
Renames:
  Z_VIRT_RAM_START to K_MEM_VIRT_RAM_START
  Z_VIRT_RAM_SIZE to K_MEM_VIRT_RAM_SIZE
  Z_VIRT_RAM_END to K_MEM_VIRT_RAM_END

This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
3fd66de508 kernel: mm: rename Z_PHYS_RAM_* to K_MEM_PHYS_*
Renames:
  Z_PHYS_RAM_START to K_MEM_PHYS_RAM_START
  Z_PHYS_RAM_SIZE to K_MEM_PHYS_RAM_SIZE
  Z_PHYS_RAM_END to K_MEM_PHYS_RAM_END

This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
b6f951b99e kernel: mm: rename Z_VM_OFFSET to K_MEM_VM_OFFSET
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
def364ab08 kernel: mm: rename Z_BOOT_* to K_MEM_BOOT_*
Rename Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
K_MEM_BOOT_VIRT_TO_PHYS() and K_MEM_BOOT_PHYS_TO_VIRT()
respectively. This is part of a series to move memory management
functions away from the Z_ namespace and into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
db9d3134c5 kernel: mm: rename Z_MEM_PHYS/VIRT_ADDR to K_MEM_*
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace. Also
make documentation available via doxygen.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
552e29790d kernel: mm: rename z_phys_un/map to k_mem_*_phys_bare
This renames z_phys_map() and z_phys_unmap() to
k_mem_map_phys_bare() and k_mem_unmap_phys_bare()
respectively. This is part of the series to move memory
management functions away from the z_ namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
295254a96b kernel: mm: remove k_mem_phys_un/map()
These functions were introduced alongside the memory mapped
stack feature, and are currently only used there.
To avoid potential confusion with k_mem_un/map(), remove them
and use k_mem_map/unmap_phys_guard() directly instead.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
9f9dd264d8 kernel: mm: rename k_mem_un/map_impl to k_mem_*_phys_guard
The internal functions k_mem_map_impl() and k_mem_unmap_impl()
are renamed to k_mem_map_phys_guard() and
k_mem_unmap_phys_guard() respectively to better clarify
their usage.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Dong Wang
749566ac48 kernel/init: use zephyr's stack size macro for main/idle thread creation
As their stacks are defined by Zephyr's kernel/thread stack definition
macros, use Zephyr's kernel/thread stack size macros for their stack
sizes as well, ensuring consistency and preventing potential issues
related to stack size misconfiguration.

Signed-off-by: Dong Wang <dong.d.wang@intel.com>
2024-06-12 21:11:02 -04:00
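For illustration, a minimal sketch of the pattern this commit adopts
(names and values here are examples, not the exact kernel/init.c code):

```
#include <zephyr/kernel.h>

K_KERNEL_STACK_DEFINE(my_stack, CONFIG_MAIN_STACK_SIZE);
static struct k_thread my_thread;

static void start_my_thread(k_thread_entry_t entry)
{
        /* size the stack with the same macro family that defined it,
         * rather than with the raw Kconfig value
         */
        k_thread_create(&my_thread, my_stack,
                        K_KERNEL_STACK_SIZEOF(my_stack),
                        entry, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(0), 0, K_NO_WAIT);
}
```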
Anas Nashif
c20e798646 arch: call arch_smp_init() directly, do not use SYS_INIT
Move this to a call in the init process. arch_* calls are not services
and should be called consistently during initialization.

Place it between PRE_KERNEL_1 and PRE_KERNEL_2 as some drivers
initialized in PRE_KERNEL_2 might depend on SMP being set up.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-06-12 18:23:54 -04:00
Martí Bolívar
15c9d3711c kernel: add CONFIG_DEVICE_DT_METADATA and helpers
This option allows you to look up a struct device from any of the
node labels that were attached to the devicetree node used to create
the device.

This is helpful because node labels are a much more human-friendly set
of unique identifiers than the node names we are currently relying on
for use with device_get_binding(). Adding this infrastructure in the
device core allows anyone to make use of it without having to
replicate node label storage and search functions in various places in
the tree. The main use case, however, is for looking up devices by
node label in the shell.

Since there is a footprint penalty associated with storing the node
label metadata, leave this option disabled by default.

Signed-off-by: Martí Bolívar <mbolivar@amperecomputing.com>
2024-06-12 18:49:54 +02:00
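As a usage sketch (assuming the lookup helper is
device_get_by_dt_nodelabel(), and "my_uart" is a hypothetical node
label):

```
#include <zephyr/device.h>

/* requires CONFIG_DEVICE_DT_METADATA=y */
static const struct device *lookup_uart(void)
{
        const struct device *dev = device_get_by_dt_nodelabel("my_uart");

        return (dev != NULL && device_is_ready(dev)) ? dev : NULL;
}
```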
Anas Nashif
7b2ccf4dfe kernel: increase main stack size for ztests on nios2
ztest now needs a larger main stack size.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-06-12 12:47:18 -04:00
Fabio Baltieri
328365989f kernel: mem_slab: only define slab_ptr_is_good with assert enabled
Add a __ASSERT_ON guard around slab_ptr_is_good, as it is only used in
assertions, and defining it unconditionally generates a build warning
with some clang versions:

kernel/mem_slab.c:207:20: error: unused function 'slab_ptr_is_good'
  207 | static inline bool slab_ptr_is_good(struct k_mem_slab *slab,...
      |                    ^~~~~~~~~~~~~~~~

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2024-06-10 17:46:10 +01:00
Krzysztof Sychla
87946a8996 kernel: banner: fix disabling boot banner
When CONFIG_BOOT_BANNER is set to "n" but CONFIG_BOOT_DELAY is
enabled, a delay message is still printed at boot time. Suppress it in
that case, so that the whole boot banner can actually be disabled.

Signed-off-by: Krzysztof Sychla <ksychla@antmicro.com>
2024-06-10 00:59:10 -07:00
Nicolas Pitre
5f2620fece kernel: mem_slab: extend slab pointer validation
Abstract slab pointer validation and apply it to block dequeuing during
allocation, in addition to the existing block freeing. This should help
catch some buffer overflow induced corruptions.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-07 21:43:28 +02:00
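A sketch of the kind of checks such validation performs, wrapped in the
__ASSERT_ON guard from the commit above (field names are assumptions,
not the exact kernel/mem_slab.c code):

```
#include <zephyr/kernel.h>

#if __ASSERT_ON
static bool slab_ptr_ok(const struct k_mem_slab *slab, const char *ptr)
{
        ptrdiff_t offset = ptr - slab->buffer;

        /* within the buffer and aligned to a block boundary */
        return (offset >= 0) &&
               (offset < (ptrdiff_t)(slab->info.num_blocks * slab->info.block_size)) &&
               ((offset % slab->info.block_size) == 0);
}
#endif
```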
Nicolas Pitre
67706a1802 kernel: mem_slab: reverse free list initialization
As it is, blocks are allocated going backward within the buffer.
There is nothing fundamentally wrong with that, but it makes debugging
unnatural with the successively descending addresses. Create the free
list so pointers are oriented forward, at least initially.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-07 21:43:28 +02:00
frei tycho
4c2938a295 kernel: added missing parenthesis
- added missing parentheses around macro argument expansion

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-06-07 12:59:46 +02:00
Peter Mitsis
0bcdae2c62 kernel: Add CONFIG_ARCH_HAS_DIRECTED_IPIS
Platforms that support IPIs allow them to be broadcast via the
new arch_sched_broadcast_ipi() routine (replacing arch_sched_ipi()).
Those that also allow IPIs to be directed to specific CPUs may
use arch_sched_directed_ipi() to do so.

As the kernel has the capability to track which CPUs may need an IPI
(see CONFIG_IPI_OPTIMIZE), this commit updates the signalling of
tracked IPIs to use the directed version if supported; otherwise
they continue to use the broadcast version.

Platforms that allow directed IPIs may see a significant reduction
in the number of IPI related ISRs when CONFIG_IPI_OPTIMIZE is
enabled and the number of CPUs increases.  These platforms can be
identified by the Kconfig option CONFIG_ARCH_HAS_DIRECTED_IPIS.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 22:35:54 -04:00
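A sketch of the signalling decision described above (the function body
is an assumption; only the arch_* names come from the message):

```
/* kernel-internal sketch; _kernel.pending_ipi per CONFIG_IPI_OPTIMIZE */
static void signal_pending_ipi(void)
{
        uint32_t cpu_bitmap = (uint32_t)atomic_clear(&_kernel.pending_ipi);

        if (cpu_bitmap != 0) {
#ifdef CONFIG_ARCH_HAS_DIRECTED_IPIS
                arch_sched_directed_ipi(cpu_bitmap); /* poke only flagged CPUs */
#else
                arch_sched_broadcast_ipi();          /* fall back to everyone */
#endif
        }
}
```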
Peter Mitsis
d8a4c8a90c kernel: Add CONFIG_IPI_OPTIMIZE
The CONFIG_IPI_OPTIMIZE configuration option allows for the flagging
and subsequent signaling of IPIs to be optimized.

It does this by making each bit in the kernel's pending_ipi field
a flag that indicates whether the corresponding CPU might need an IPI
to trigger the scheduling of a new thread on that CPU.

When a new thread is made ready, we compare that thread against each
of the threads currently executing on the other CPUs. If there is a
chance that that thread should preempt the thread on the other CPU
then we flag that an IPI is needed for that CPU. That is, a clear bit
indicates that the CPU absolutely will not need to reschedule, while a
set bit indicates that the target CPU must make that determination for
itself.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 22:35:54 -04:00
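A sketch of the flagging side (the helper name and comparison are
assumptions; in Zephyr a lower prio value means higher priority):

```
/* kernel-internal sketch of flagging CPUs that may need an IPI */
static void flag_ipi_for(struct k_thread *ready)
{
        unsigned int num_cpus = arch_num_cpus();

        for (unsigned int i = 0; i < num_cpus; i++) {
                struct k_thread *cur = _kernel.cpus[i].current;

                /* set bit i = "CPU i must decide for itself" */
                if (cur != NULL && ready->base.prio < cur->base.prio) {
                        atomic_or(&_kernel.pending_ipi, (atomic_val_t)BIT(i));
                }
        }
}
```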
Peter Mitsis
9ff5221d23 kernel: Update IPI usage in k_thread_priority_set()
1. The flagging of IPIs is moved out of k_thread_priority_set() into
z_thread_prio_set(). This allows for an IPI to be done for a thread
that had its priority bumped due to the handling of priority
inheritance from a mutex.

2. k_thread_priority_set()'s check for sched_locked only applies to
non-SMP builds that are using the old arch_swap() framework to switch
between threads.

Incidentally, nearly all calls to flag_ipi() are now performed with
sched_spinlock being locked. The only exception is in slice_timeout().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 22:35:54 -04:00
Peter Mitsis
ed7a5f31c2 kernel: Update CONFIG_PIPES Kconfig description
Updates the CONFIG_PIPES Kconfig description to add a note that
enabling it will cause a slight increase in the size of the thread
structure. This mirrors a similar comment in CONFIG_EVENTS.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 19:10:56 -04:00
Yong Cong Sin
6a3cb93d88 arch: remove the use of z_arch_esf_t completely from internal
Created `GEN_OFFSET_STRUCT` & `GEN_NAMED_OFFSET_STRUCT`, which
work for `struct`, and removed the use of `z_arch_esf_t`
completely.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Yong Cong Sin
e54b27b967 arch: define struct arch_esf and deprecate z_arch_esf_t
Make `struct arch_esf` compulsory for all architectures by
declaring it in the `arch_interface.h` header.

After this commit, the named struct `z_arch_esf_t` is only used
internally to generate offsets, and is slated to be removed
from the `arch_interface.h` header in the future.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Flavio Ceolin
65fc5b7f17 device: Remove z_device_is_ready
This duplicates the functionality of device_is_ready.

Calls to z_device_is_ready are made in kernel mode, so it is
safe to call its implementation directly.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-31 08:06:44 +02:00
Yong Cong Sin
3570408db5 build: namespace syscall sources to zephyr/
Namespace `syscall_dispatch.c` & `syscall_export_llext.c`
under `zephyr/` as well.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-28 22:03:55 +02:00
Yong Cong Sin
bbe5e1e6eb build: namespace the generated headers with zephyr/
Namespaced the generated headers with `zephyr/` to prevent
potential conflicts with other headers.

Introduce a temporary Kconfig `LEGACY_GENERATED_INCLUDE_PATH`
that is enabled by default. This allows developers to
continue using the old include paths for the time being,
until the option is deprecated and eventually removed. The Kconfig
generates a build-time warning message, similar to
`CONFIG_TIMER_RANDOM_GENERATOR`.

Updated the include paths of in-tree sources accordingly.

Most of the changes here are scripted, check the PR for more
info.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-28 22:03:55 +02:00
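The effect on a source file, using one generated header as an example
(the exact header name is illustrative):

```
/* before -- still compiles while CONFIG_LEGACY_GENERATED_INCLUDE_PATH=y,
 * but emits a build-time warning:
 *
 *     #include <syscalls/kernel.h>
 *
 * after: */
#include <zephyr/syscalls/kernel.h>
```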
Fin Maaß
8c37f14b98 tracing: add k_realloc trace
Add the tracing feature for `k_realloc`.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2024-05-28 17:55:12 +02:00
Fin Maaß
09eaa8757f kernel: implement k_realloc
Implement k_realloc.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2024-05-28 17:55:12 +02:00
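A usage sketch, assuming the usual realloc contract (the original block
stays valid if growth fails):

```
#include <zephyr/kernel.h>

static void grow_buffer(void)
{
        char *buf = k_malloc(64);

        if (buf == NULL) {
                return;
        }

        char *bigger = k_realloc(buf, 128);

        if (bigger == NULL) {
                k_free(buf); /* growth failed; original block still valid */
                return;
        }
        /* ... use 128 bytes ... */
        k_free(bigger);
}
```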
Hess Nathan
20b55425d3 coding guidelines: comply with MISRA Rule 13.4
avoid the direct use of assignment expression
values for conditions

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-28 10:07:31 +02:00
Flavio Ceolin
4d85f3d91c pm: Deprecate z_pm_save_idle_exit
Deprecate z_pm_save_idle_exit and promote pm_system_resume.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-27 02:10:03 -07:00
Flavio Ceolin
f7437ac3b1 pm: Move z_pm_save_idle_exit to pm subsys
There is no need for this function to be defined inside the kernel,
since all places using it protect the call under ifdef PM guards.

This way we can also remove the ifdef condition inside the
implementation.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-27 02:10:03 -07:00
Nicolas Pitre
b34e94f362 kernel: demand_paging: fix arch_page_location_get() documentation
Symbols from enum arch_page_location are defined as
ARCH_PAGE_LOCATION_* and not ARCH_PAGE_FAULT_*.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-24 07:47:49 -04:00
Peter Mitsis
d082cd29af kernel: Relax loop in z_smp_global_lock()
Updates z_smp_global_lock() to follow the pattern used in spinlocks
to relax the loop between atomic_cas() attempts.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-05-22 21:35:06 -04:00
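The spinlock-style pattern referenced above, in sketch form:

```
#include <zephyr/kernel.h>
#include <zephyr/spinlock.h>

static atomic_t global_lock;

static void smp_lock(void)
{
        /* back off between failed CAS attempts instead of hammering the
         * cache line; arch_spin_relax() is the arch hook used by
         * spinlocks (a no-op or pause on most targets)
         */
        while (!atomic_cas(&global_lock, 0, 1)) {
                arch_spin_relax();
        }
}
```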
Flavio Ceolin
c12f0507b6 userspace: dynamic: Fix k_thread_stack_free verification
The k_thread_stack_free syscall was not checking if the caller
had permission to the given stack object.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-21 20:54:56 -04:00
Daniel Leung
e6abc035c8 kernel: mem_domain: new config for isolated stacks
This adds a new kconfig to indicate if architecture code
supports isolating thread stacks within the same domain,
and another new kconfig to selectively enable this
behavior.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-21 20:53:09 -04:00
Daniel Leung
169bc07e83 kernel: move memory domain kconfigs into its own file
This moves memory domain related kconfigs into their own file,
Kconfig.mem_domain.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-21 20:53:09 -04:00
Andy Ross
17a5beb341 kernel: Predicate _cpus_active on CONFIG_PM
This value isn't used outside of the PM subsystem, so don't build it.

More important than the four bytes of .bss was the use of an
atomic_inc().  Some platforms are forced to use
CONFIG_ATOMIC_OPERATIONS_C (but in almost all cases are single-core
devices that won't use atomics at runtime).  There, this turns into a
function call that pulls in the whole atomics implementation.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-21 15:42:50 -07:00
Daniel Leung
2ad265cb75 kernel: userspace: manipulate _thread_idx_map on per-byte basis
The sys_bitfield_(clear/set)_bit() functions work on pointer-sized
elements. However, _thread_idx_map[] is a byte array. On little endian
systems, the bitops should work fine. However, on big endian
systems, changing the lower bits may actually be manipulating
memory outside the array when CONFIG_MAX_THREAD_BYTES is not a
multiple of 4. So modify the code to perform bit ops on
a per-byte basis.

Fixes #72430

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-18 15:53:27 +03:00
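A sketch of the per-byte approach (helper names assumed; the array name
comes from the message):

```
#include <zephyr/sys/util.h>

static uint8_t thread_idx_map[CONFIG_MAX_THREAD_BYTES];

/* index the byte first, then the bit within it, so the result is
 * identical on little- and big-endian systems
 */
static inline void idx_map_set(int idx)
{
        thread_idx_map[idx / 8] |= (uint8_t)BIT(idx % 8);
}

static inline void idx_map_clear(int idx)
{
        thread_idx_map[idx / 8] &= (uint8_t)~BIT(idx % 8);
}
```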
Nicolas Pitre
e9a47d932c kernel: mmu: shrink and align struct z_page_frame
The struct z_page_frame is marked __packed to avoid extra padding, as
such padding may represent significant memory waste when lots of page
frames are used. However, this is a bad strategy.

The code contained this somewhat dubious comment and code in
free_page_frame_list_put():

	/* The structure is packed, which ensures that this is true */
	void *node = pf;
	sys_slist_append(&free_page_frame_list, node);

This is bad for many reasons:

- type checking is completely bypassed;

- if the sys_snode_t node member is no longer located at the front of
  struct z_page_frame then the code will still compile and possibly run
  but be broken with memory corruption as a likely outcome;

- the sys_slist_append() code is completely unaware of the packed
  attribute which breaks architectures with alignment restrictions.

Let's improve code efficiency as well as memory usage by removing the
packed attribute and manually packing the flags in the unused virtual
address bits. This way the page frame array remains naturally aligned,
data access becomes optimal and the actual array size gets even smaller.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
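A sketch of the packing idea, not the exact kernel layout (all names
are illustrative): a page-aligned virtual address leaves the low
log2(CONFIG_MMU_PAGE_SIZE) bits free to hold flags.

```
#include <zephyr/sys/util.h>

struct page_frame {
        uintptr_t va_and_flags; /* aligned VA in high bits, flags in low bits */
};

#define PF_PINNED BIT(0)
#define PF_BUSY   BIT(1)

static inline void *pf_virt(const struct page_frame *pf)
{
        return (void *)(pf->va_and_flags & ~(uintptr_t)(CONFIG_MMU_PAGE_SIZE - 1));
}

static inline bool pf_is_pinned(const struct page_frame *pf)
{
        return (pf->va_and_flags & PF_PINNED) != 0;
}
```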
Nicolas Pitre
57305971d1 kernel: mmu: abstract access to page frame flags and address
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
Daniel Apperloo
9fc26804fb linker: decouple KERNEL_WHOLE_ARCHIVE from LLEXT
Dynamic code execution applications not using LLEXT for "extension"
loading are subject to the same linker optimization symbol resolution
issue described in commit 321e395 (in summary, libkernel.a syscalls
not used directly by the application result in weak symbol resolution
of their z_mrsh_ wrapper).

To support use cases where an application is using alternative methods
to load and execute code calling syscalls (likely from userspace), or
is using a mechanism the linker may not be aware of, the configuration
option has been decoupled from CONFIG_LLEXT (which now selects it) and
renamed KERNEL_WHOLE_ARCHIVE.

Signed-off-by: Daniel Apperloo <daniel.apperloo@intel.com>
2024-05-13 14:23:38 +02:00
Hess Nathan
6d417d52c2 coding guidelines: comply with MISRA Rule 12.1.
added parentheses to remove ambiguities

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-12 13:37:27 -04:00
Hess Nathan
e05c4a8786 coding guidelines: comply with MISRA Rule 11.8
- modified parameter types to receive a const pointer when a
  non-const pointer is not needed

- avoided redundant casts

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-10 14:45:14 -05:00
Flavio Ceolin
68ea73aca2 kernel: sem: Remove constant expression
limit is an unsigned int and K_SEM_MAX_LIMIT is defined as UINT_MAX;
this means that limit can never be greater than K_SEM_MAX_LIMIT.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-09 12:39:46 -04:00
Pieter De Gendt
f147a5fec2 spelling: Replace occurrences of "iff" with "if and only if"
Spell checking tools do not recognize "iff", replace with "if and only if".
See https://en.wikipedia.org/wiki/If_and_only_if

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-05-06 14:58:08 +01:00
frei tycho
fe38c703b2 kernel: coding guidelines: cast unused arguments to void
- added missing ARG_UNUSED

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-06 14:56:24 +01:00
Alberto Escolar Piedras
2f5e93938b Revert "kernel: retrieve system timer clock frequency at runtime or static"
This reverts commit 7c03e5de7f.

https://github.com/zephyrproject-rtos/zephyr/pull/69705
Introduced a regression in main in which
tests/subsys/logging/log_timestamp
started failing. (See
https://github.com/zephyrproject-rtos/zephyr/issues/72344
for more info).
Let's revert the PR. It can be resubmitted once the issue is
fixed.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-06 14:52:29 +03:00
Najumon B.A
7c03e5de7f kernel: retrieve system timer clock frequency at runtime or static
Update the kernel timeout logic to retrieve the system timer clock
frequency either at runtime or statically, based on the Kconfig option
TIMER_READS_ITS_FREQUENCY_AT_RUNTIME.

Signed-off-by: Najumon B.A <najumon.ba@intel.com>
2024-05-04 13:24:12 +03:00
Andy Ross
dec022a848 kernel/sched: Fix edge^2 case in abort/join
The previous abort-lifecycle fix missed a case: other threads can
enter k_thread_join(), see that the thread is already dead, and then
need to call z_thread_switch_spin() to wait for a context switch.  But
the new "dummification" code was (by design!) terminating the thread
such that no context would be saved to it.  So switch_handle stayed
NULL and if you hit that timing case correctly[1] you'd deadlock
waiting for a switch that would never come.

Fix is just to set switch_handle when dummifying to any non-NULL
value.

Also add an assertion to catch the obvious case that a thread is
actually dead on the exit path of k_thread_abort(), to make sure the
variant path continues to set flags correctly.

[1] CI was doing it fairly reliably via tests/kernel/smp_abort on
    qemu_cortex_a53 only.  Only one of my dev systems could see it,
    and then only about 15% of the time.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
47ab66311d kernel/sched: Fix lockless ordering in halt_thread()
We've had threads spinning on the thread state bits, but weren't being
careful to ensure that those bits were the last things seen to change
in a halting thread. Move the state update to the end, and add a
barrier for correctness.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
fd340ebf31 sched: Optimize dummy thread usage on SMP
Nicolas Pitre points out that since these thread structs are just
dummies for the context switching, they can be presumed to be "write
only" and thus there's no point in having one per CPU, everyone can
share the same one.

The only gotcha is that we never really documented (nor really have a
place to document) that rule, so it's not theoretically impossible for
an architecture to read back what it might have written underneath
arch_switch().  Leave this in a separate commit for bisection
purposes, but the risk seems very low.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
f0fd54cb31 kernel/sched: Fix free-memory write when ISRs abort _current
After a k_thread_abort(), the resulting thread struct is documented as
unused/free memory that may be re-used (for example, to respawn a new
thread).

But in the special case of aborting the current thread from within an
ISR, that wasn't quite happening.  The scheduler cleanup would
complete, but the architecture layer would still try to context switch
away from the aborted thread on exit, and that can include writes to
the now-reused thread struct!  The specifics will depend on
architecture (some do a full context save on entry, most don't), but
in the case of USE_SWITCH=y it will at the very least write the
switch_handle field.

Fix this simply, with a per-cpu "switch dummy" thread struct for use
as a target for context switches like this.  There is some non-trivial
memory cost to that; thread structs on many architectures are large.

Pleasingly, this also addresses a known deadlock on SMP: because the
"spin in ISR" step now happens as the very last stage of
k_thread_abort() handling, the existing scheduler lock works to
serialize calls such that it's impossible for a cycle of threads to
independently decide to spin on each other: at least one will see
itself as "already aborting" and break the cycle.

Fixes #64646

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
fc56050e05 kernel/spinlock: Fix SPIN_VALIDATE in ISRs
Spinlocks taken in ISRs were storing the _current thread pointer of
the interrupted thread as the owner, which was never strictly correct
but was benign as the thread would never run until the lock was
released.

But now k_thread_abort(_current) in an ISR has been fixed to eliminate
all references to the (now aborted) thread struct, and _current points
to a dummy thread.  Handle that edge case in the validation framework.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
frei tycho
14cb7d5b03 kernel: coding guidelines: add explicit cast to void
- added explicit cast to void when the returned value is intentionally ignored

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-02 16:49:36 +02:00
Hess Nathan
7659cfd4dc coding guidelines: comply with MISRA Rule 2.2
- avoided dead stores

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-02 09:32:46 +01:00
Adrian Bonislawski
e44d2e65ee kernel: timeslicing: add time slice reset in slice per thread api
This will reset the time slice in k_thread_time_slice_set()
when the slice-per-thread API is used.

Currently it is reset only in the standard slice_set.

Signed-off-by: Adrian Bonislawski <adrian.bonislawski@intel.com>
2024-05-01 22:55:50 +01:00
Hess Nathan
527e712448 coding guidelines: comply with MISRA Rule 20.9
- avoid using undefined macros in #if expressions

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 19:48:19 +01:00
Hess Nathan
32af724fbb coding guidelines: comply with MISRA C:2012 Rule 11.2
avoided converting pointers to incomplete types via the pointer to the first item

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:53:20 -04:00
Hess Nathan
c30a9c4c97 coding guidelines: comply with MISRA Rule 21.15
- made the copied data type explicit

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:52:43 -04:00
Peter Mitsis
a3c7152f92 kernel: Update thread cpu in z_get_next_switch_handle()
Updates z_get_next_switch_handle() to set the new thread's base.cpu
value as is done in do_swap(). This helps ensure that the record of
the last CPU on which the thread executed stays current.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-04-29 17:40:28 +01:00
Eric Johnson
69c5c6d511 kernel: Remove duplicate execution_cycles write and improve docstring
There is a duplicate write in `z_sched_thread_usage()` that can be
removed. Also modified the docstrings of `k_thread_runtime_stats` to
better describe the differences between execution_cycles and
total_cycles when getting stats for the CPU or a thread.

Signed-off-by: Eric Johnson <eric@memfault.com>
2024-04-28 13:04:20 -04:00
Alberto Escolar Piedras
c05cba682b Revert "kernel/spinlock: Fix SPIN_VALIDATE in ISRs"
This reverts commit 93dc7e7438.

This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-26 10:10:24 +00:00
Alberto Escolar Piedras
ea26bcf8d3 Revert "kernel/sched: Fix free-memory write when ISRs abort _current"
This reverts commit 61c70626a5.

This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-26 10:10:24 +00:00
Alberto Escolar Piedras
c9ec937d71 Revert "sched: Optimize dummy thread usage on SMP"
This reverts commit 20611f13ca.

This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-26 10:10:24 +00:00
Alberto Escolar Piedras
c60d4c2589 Revert "kernel/sched: Fix lockless ordering in halt_thread()"
This reverts commit 02b24911f7.

This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-26 10:10:24 +00:00
Mohamed ElShahawi
7084662cc8 kernel: system_work_q: Mark queue thread as essential
Mark sysworkq as essential, so that when it fails the system halts
instead of continuing to run while dependent components stay
in a broken state.

Signed-off-by: Mohamed ElShahawi <ExtremeGTX@hotmail.com>
2024-04-25 21:40:24 +02:00
Mohamed ElShahawi
9ba4653243 kernel: work_queue: make thread essential flag configurable
Allow the creator of a work_queue instance to choose whether
the work_queue thread should be marked as ESSENTIAL or not.

Signed-off-by: Mohamed ElShahawi <ExtremeGTX@hotmail.com>
2024-04-25 21:40:24 +02:00
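A sketch of opting in at queue creation (the .essential field is the
knob described above; other values are examples):

```
#include <zephyr/kernel.h>

K_THREAD_STACK_DEFINE(my_wq_stack, 1024);
static struct k_work_q my_wq;

static void start_wq(void)
{
        struct k_work_queue_config cfg = {
                .name = "my_wq",
                .essential = true, /* abort of this thread halts the system */
        };

        k_work_queue_start(&my_wq, my_wq_stack,
                           K_THREAD_STACK_SIZEOF(my_wq_stack),
                           K_PRIO_PREEMPT(8), &cfg);
}
```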
Andy Ross
02b24911f7 kernel/sched: Fix lockless ordering in halt_thread()
We've had threads spinning on the thread state bits, but weren't being
careful to ensure that those bits were the last things seen to change
in a halting thread. Move the state update to the end, and add a
barrier for correctness.

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Andy Ross
20611f13ca sched: Optimize dummy thread usage on SMP
Nicolas Pitre points out that since these thread structs are just
dummies for the context switching, they can be presumed to be "write
only" and thus there's no point in having one per CPU, everyone can
share the same one.

The only gotcha is that we never really documented (nor really have a
place to document) that rule, so it's not theoretically impossible for
an architecture to read back what it might have written underneath
arch_switch().  Leave this in a separate commit for bisection
purposes, but the risk seems very low.

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Andy Ross
61c70626a5 kernel/sched: Fix free-memory write when ISRs abort _current
After a k_thread_abort(), the resulting thread struct is documented as
unused/free memory that may be re-used (for example, to respawn a new
thread).

But in the special case of aborting the current thread from within an
ISR, that wasn't quite happening.  The scheduler cleanup would
complete, but the architecture layer would still try to context switch
away from the aborted thread on exit, and that can include writes to
the now-reused thread struct!  The specifics will depend on
architecture (some do a full context save on entry, most don't), but
in the case of USE_SWITCH=y it will at the very least write the
switch_handle field.

Fix this simply, with a per-cpu "switch dummy" thread struct for use
as a target for context switches like this.  There is some non-trivial
memory cost to that; thread structs on many architectures are large.

Pleasingly, this also addresses a known deadlock on SMP: because the
"spin in ISR" step now happens as the very last stage of
k_thread_abort() handling, the existing scheduler lock works to
serialize calls such that it's impossible for a cycle of threads to
independently decide to spin on each other: at least one will see
itself as "already aborting" and break the cycle.

Fixes #64646

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Andy Ross
93dc7e7438 kernel/spinlock: Fix SPIN_VALIDATE in ISRs
Spinlocks taken in ISRs were storing the _current thread pointer of
the interrupted thread as the owner, which was never strictly correct
but was benign as the thread would never run until the lock was
released.

But now k_thread_abort(_current) in an ISR has been fixed to eliminate
all references to the (now aborted) thread struct, and _current points
to a dummy thread.  Handle that edge case in the validation framework.

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Andy Ross
5fa2b6f377 kernel/sched: Refactor/cleanup z_thread_halt()
The big change is to factor out a thread_halt_spin() utility to manage
the core complexity of this code: the situation where an ISR is asked
to abort a thread already running on another SMP CPU.

With that gone, things can be cleaned up quite a bit. Remove early
returns; most of the "#if CONFIG_SMP" usage was superfluous and will
optimize out; unify and clean up the comments; etc...

No behavioral changes (hopefully), just refactoring.

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Anas Nashif
297ad5186d kernel: increase main stack size for ztest on ARC
The stack has been on the edge when using ztest; use 1024 for ARC as well.

Fixes #71797

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-24 10:49:05 +02:00
Anas Nashif
4593f0d71c kernel: priority queues: declare as static inlines
After the move to C files we saw some performance drop when
running latency_measure. This patch declares the priority queue
functions as static inlines with minor optimizations.

The result for one metric (on qemu):

3.6 and before the anything was changed:

  Get data from LIFO (w/ ctx switch): 13087 ns

after original change (46484da502):

  Get data from LIFO (w/ ctx switch): 13663 ns

with this change:

  Get data from LIFO (w/ ctx switch): 12543 ns

So overall, a net gain of ~ 500ns that can be seen across the board on many
of the metrics.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-22 16:40:11 -04:00
Flavio Ceolin
11b85ee510 kernel: stack: Check possible overflow
Check for possible overflow in the k_stack data struct. An overflow
can happen, resulting in a much smaller memory allocation than intended.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-04-22 15:20:39 -04:00
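A sketch of the guard (where exactly it lands in the k_stack code is an
assumption):

```
#include <errno.h>
#include <zephyr/kernel.h>
#include <zephyr/sys/math_extras.h>

static int stack_alloc_checked(struct k_stack *stack, uint32_t num_entries)
{
        size_t total;

        /* reject entry counts whose byte size would wrap around and
         * silently under-allocate
         */
        if (size_mul_overflow(num_entries, sizeof(stack_data_t), &total)) {
                return -EINVAL;
        }

        stack_data_t *buffer = k_malloc(total);

        if (buffer == NULL) {
                return -ENOMEM;
        }
        k_stack_init(stack, buffer, num_entries);
        return 0;
}
```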
Ederson de Souza
eeebb4d911 kernel: Device deferred initialization
Currently, all devices are initialized at boot time (following their
level and priority order). This patch introduces deferred
initialization: by setting the property `zephyr,deferred-init` on a
device in the devicetree, Zephyr will not initialize the device.

To initialize such devices, one has to call `device_init()`.

Deferred initialization is done by grouping all deferred devices in a
different ELF section. In this way, there's no need to consume more
memory to keep track of deferred devices. When `device_init()` is
called, Zephyr will scan the deferred devices section and call the
initialization function for the matching device. As this scanning is
done only during deferred device initialization, its cost should be
bearable.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2024-04-11 15:50:44 -04:00
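A runtime sketch for a node carrying the `zephyr,deferred-init`
property ("my_sensor" is a hypothetical node label):

```
#include <zephyr/device.h>

static int init_sensor_later(void)
{
        const struct device *dev = DEVICE_DT_GET(DT_NODELABEL(my_sensor));

        /* skipped at boot because of zephyr,deferred-init;
         * run its init function now
         */
        return device_init(dev);
}
```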
Daniel Leung
d0a90a0b33 kernel: add the ability to memory map thread stacks
This introduces support for memory mapped thread stacks,
where each thread stack is mapped into virtual memory
address space with two guard pages to catch
under-/over-flowing the stack. This is just on the kernel
side. Additional architecture code is required to fully
support this feature.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
04c5632bd4 kernel: mm: introduce k_mem_phys_map()/_unmap()
This is similar to k_mem_map()/_unmap(), but instead of using
anonymous memory, the provided physical region is mapped
into the virtual address space. In addition to simply mapping
physical to virtual addresses, the mapping also adds two
guard pages before and after the virtual region to catch
buffer under-/over-flow.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
378131c266 kernel: add options to cleanup after aborting current thread
This adds a mechanism to do cleanup after k_thread_abort()
is called on the current thread. This is mainly used for
cleaning up things when the thread can no longer be running, e.g.,
cleaning up the thread stack.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Krzysztof Chruściński
aee8dd1d33 kernel: timeout: Optimize setting next alarm
The next timeout was set unconditionally at the end of
sys_clock_announce. However, if one of the currently expired timeouts
set a new timeout which is the first to execute, then the system clock
was configured twice. Let's configure the system clock only once in
the ISR, at the end of sys_clock_announce.

If timeouts are frequent this optimization can reduce CPU load. In
many cases setting the new sys_clock timeout is the most time
consuming operation in the sys_clock ISR handler. As an example,
on the target I used, setting the new sys_clock timeout takes 6 us of
the 9 us spent in the ISR, and it takes 16 us with the redundant call.

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
2024-04-09 13:55:07 -04:00
Simon Hein
ebdb07a05c kernel: Clean up mailbox async msg configuration
Remove confusing and duplicate async message configuration
switches for mailboxes.

Signed-off-by: Simon Hein <Shein@baumer.com>
2024-04-09 11:05:55 +02:00
Anas Nashif
20b2c98add kernel: move nothread support to own file
Do not build threading support when CONFIG_MULTITHREADING=n is set, and
move the needed calls to a new file with the required changes, instead
of the ifdef party in sched.c.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-06 14:22:08 +03:00
Yong Cong Sin
fbaf7dfdc1 kernel: banner: use BUILD_VERSION only if not empty
`BUILD_VERSION` can be defined but empty when built
without git, causing the version to be missing from the banner:

```
*** Booting Zephyr OS build  ***
Hello World! qemu_riscv64
```

Let's check if it is empty before using it, so that
`KERNEL_VERSION_STRING`, which is generated independently
by cmake, can be used as a fallback:

```
*** Booting Zephyr OS build 3.5.0 ***
Hello World! qemu_riscv64
```

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-04-04 23:47:33 +02:00
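A sketch of the fallback logic (treating IS_EMPTY() as the emptiness
test is an assumption):

```
#include <zephyr/kernel.h>
#include <zephyr/sys/util_macro.h>
#include <zephyr/version.h>

#if defined(BUILD_VERSION) && !IS_EMPTY(BUILD_VERSION)
#define BANNER_VERSION STRINGIFY(BUILD_VERSION)
#else
#define BANNER_VERSION KERNEL_VERSION_STRING /* cmake-generated fallback */
#endif

static void print_banner(void)
{
        printk("*** Booting Zephyr OS build %s ***\n", BANNER_VERSION);
}
```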
Anas Nashif
8a88cd4805 kernel: thread: move thread states to header
Move state string defines into thread header.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
f5435b3df7 kernel: thread: move k_thread_priority_get
Move to thread.c alongside all other thread calls.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
5c170c7046 kernel: thread: rename is_preempt
Trivial rename to thread_is_preempt.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
6754cbd1b5 kernel: thread: move k_is_preempt_thread to thread.c
This belongs in thread.c.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
17c874f4fc kernel: thread: rename is_metairq
Trivial rename of is_metairq to thread_is_metairq.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
0b473ce925 kernel: rename sliceable -> thread_is_sliceable
Trivial rename of sliceable function.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
37df485463 kernel: split timeslicing/ipi code out of sched.c
Move both timeslicing and IPI code to own files.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
31bc210bbc kernel: sched: remove unused prototype: z_is_thread_time_slicing
A prototype not used anywhere.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
ebb503ff7b kernel: move thread related helper functions to kthread.h
Move some helper functions to the internal kthread.h, to offload the
crowded sched.c.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Daniel Leung
d34351d994 kernel: align thread stack size declaration
When a thread stack is defined as an array, K_THREAD_STACK_LEN()
is used to calculate the size for each stack in the array.
However, a standalone thread stack has its size calculated by
Z_THREAD_STACK_SIZE_ADJUST() instead. Depending on the arch
alignment requirement, they may not be the same... which
could cause some confusion. So align them both to use
K_THREAD_STACK_LEN().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
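A sketch of the invariant this change establishes (assuming both forms
now expand through K_THREAD_STACK_LEN()):

```
#include <zephyr/kernel.h>

K_THREAD_STACK_DEFINE(single_stack, 1024);
K_THREAD_STACK_ARRAY_DEFINE(stack_pool, 4, 1024);

/* standalone and array stacks should now be sized identically */
BUILD_ASSERT(sizeof(single_stack) == sizeof(stack_pool[0]),
             "stack sizing macros disagree");
```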
Daniel Leung
6cd7936f57 kernel: align kernel stack size declaration
When a kernel stack is defined as an array, K_KERNEL_STACK_LEN()
is used to calculate the size for each stack in the array.
However, a standalone kernel stack has its size calculated by
Z_KERNEL_STACK_SIZE_ADJUST() instead. Depending on the arch
alignment requirement, they may not be the same... which
could cause some confusion. So align them both to use
K_KERNEL_STACK_LEN().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
Daniel Leung
efe30749de kernel: rename Z_THREAD_STACK_BUFFER to K_THREAD_STACK_BUFFER
Simple rename to align with the kernel naming scheme. This macro is
being used throughout the tree, especially in the architecture code.
As this is not a private API internal to the kernel, prefix it
appropriately with K_.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00