Commit Graph

494 Commits

Author SHA1 Message Date
Anas Nashif
5c170c7046 kernel: thread: rename is_preempt
Trivial rename to thread_is_preempt.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
17c874f4fc kernel: thread: rename is_metairq
Trivial rename of is_metairq to thread_is_metairq.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
37df485463 kernel: split timeslicing/ipi code out of sched.c
Move both the timeslicing and IPI code to their own files.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
31bc210bbc kernel: sched: remove unused prototype: z_is_thread_time_slicing
This prototype is not used anywhere.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
ebb503ff7b kernel: move thread related helper functions to kthread.h
Move some helper functions to the internal kthread.h to offload the
crowded sched.c.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Simon Hein
bcd1d19322 kernel: add closing comments to config endifs
Add a closing comment to each endif, naming the configuration
option to which the endif belongs, to make the code clearer
when the configs need adaptation.

Signed-off-by: Simon Hein <Shein@baumer.com>
2024-03-25 18:03:31 -04:00
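
In sketch form, the convention being applied (hypothetical option and
prototype for illustration):

    #ifdef CONFIG_SCHED_CPU_MASK
    void validate_cpu_mask(void);
    #endif /* CONFIG_SCHED_CPU_MASK */
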
Daniel Leung
3664ed64c3 arch: move arch_interface.h under zephyr/arch
arch_interface.h is for the architecture interface and should not be
under sys/, so move it under include/zephyr/arch/.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-25 09:58:35 +00:00
TaiJu Wu
1f5f0cf838 sched: Remove multi-level queue priority limit
Change the bitmask to a bitmask array so the multilevel queue is no
longer limited to 32 priorities.

We scan the bitmask array to find which queue has a ready thread.

Only the number of queues is needed as the priority limit because the
priority is checked on create_thread.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2024-03-12 19:37:40 -04:00
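
A sketch of the scan described above; the names are illustrative, not
the actual scheduler code:

    #include <zephyr/sys/util.h>
    #include <stdint.h>

    #define NUM_QUEUES 64 /* hypothetical: one queue per priority */

    /* Bit n of the array marks queue n as having a ready thread. */
    static uint32_t ready_bitmask[DIV_ROUND_UP(NUM_QUEUES, 32)];

    /* Return the index of the first non-empty queue, or -1 if none. */
    static int find_first_ready(void)
    {
        for (unsigned int i = 0; i < ARRAY_SIZE(ready_bitmask); i++) {
            if (ready_bitmask[i] != 0U) {
                return (int)(i * 32 + __builtin_ctz(ready_bitmask[i]));
            }
        }
        return -1;
    }
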
Peter Mitsis
9f7695dda0 kernel: Remove unused z_pend_curr_irqlock()
The routine z_pend_curr_irqlock() is no longer used anywhere.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-03-07 11:51:06 -05:00
Anas Nashif
0d8da5ff93 kernel: rename scheduler spinlock variable and make it private
Rename sched_spinlock to _sched_spinlock to maintain its privacy and
to avoid any misuse.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
868f099d61 kernel: sched: z_set_prio -> z_thread_prio_set
Rename private function to make it clear what priority we are setting
and to be consistent across the code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
6c003bdbcf kernel: remove unused code in headers
Remove functions that are defined in headers but not used anywhere.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
9e83413542 kernel: split thread monitor
Move thread monitor related functions, which are not enabled in most
cases, out of thread.c and clean up headers.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
5e591c38f1 kernel: do not export z_thread_priority_set
This function is only used by a test, so instead of reimplementing
a syscall in the test, provide a Kconfig option that exposes the
functionality for tests only, and remove some of the duplication and
extra code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
3ca50f5060 kernel: move z_init_static_threads to where it is being used
Move it out of thread.c and put it directly in init.c where it is used.
Also remove the declaration from kernel.h; this is an internal function
and should not be in a public header.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
8791012ed1 kernel: move essential flag related routines out
The functions to manipulate the essential flag indeed operate on
threads, but they are misplaced in the thread implementation file. Put
them alongside the other routines that set thread flags, and clean up
headers a bit.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
46484da502 kernel: move priority queue handling to own file/header
Clean up headers under include/ and move priority queue handling to
its own file/header. The header include/zephyr/kernel/internal/sched_priq.h
is no longer needed. Move the relevant structures to where they are
used, in kernel_structs.h.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-02 15:06:45 +01:00
Peter Mitsis
ee9c44fee6 linker: Round TLS sizes up in linker script
Instead of rounding up both __tdata_size and __tbss_size at runtime,
perform the calculation when the image is built.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-02-25 22:28:56 -05:00
Nguyen Minh Thien
8188be57d3 kernel: fix spelling errors
Fix spelling errors found in comments in the kernel source code.

Signed-off-by: Nguyen Minh Thien <nguyenmthien@live.com>
2024-02-25 20:53:37 -05:00
Krzysztof Chruściński
25173f71cd pm: device_runtime: Extend with synchronous runtime PM
In many cases suspending or resuming a device is limited to
just a few register writes. The current solution assumes that those
operations may be blocking, asynchronous, and may take a long time.
Due to this assumption the runtime PM API cannot be used effectively
from interrupt context. Zephyr has a few driver APIs which can be
used from an interrupt context, and use of runtime PM is currently
limited in those cases.

The patch introduces a new type of PM device: synchronous PM. If a
device is specified as capable of synchronous PM operations, then
device runtime get and put are executed in a critical section. In that
case the runtime API can be used from an interrupt context.
Additionally, this approach reduces the RAM needed per PM device
(104 -> 20 bytes of RAM on ARM Cortex-M).

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
2024-02-01 15:03:42 +01:00
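
For a device declared as capable of synchronous PM, the standard
runtime calls become usable from interrupt context; a sketch (device
declaration and IRQ wiring omitted):

    #include <zephyr/device.h>
    #include <zephyr/pm/device_runtime.h>

    void my_isr(const void *arg)
    {
        const struct device *dev = arg;

        /* For a synchronous-PM device this runs in a critical
         * section instead of blocking, so it is ISR-safe.
         */
        (void)pm_device_runtime_get(dev);
        /* ... access the device ... */
        (void)pm_device_runtime_put(dev);
    }
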
Daniel Leung
2cdd44801e kernel: move z_init_cpu to private kernel headers
z_init_cpu() is a private kernel API so move it under
kernel/include.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-01-17 11:57:20 -05:00
Daniel Leung
a7dccc4475 kernel: mmu: mitigate range check overflow issue
It is possible that address + size will overflow the available
address space so that the pointer wraps around back to zero. Some
of these have been fixed in previous commits. This fixes
the remaining ones with regard to Z_PHYS_RAM_START/_END,
and Z_VIRT_RAM_START/_END.

Fixes #65542

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-20 11:37:17 -05:00
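
The general shape of a wrap-around-safe range check, as a generic
sketch (not the literal kernel code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* True if [addr, addr + size) lies within [start, end), without
     * computing addr + size, which can wrap around the address space.
     */
    static bool range_within(uintptr_t addr, size_t size,
                             uintptr_t start, uintptr_t end)
    {
        /* Checking addr <= end first keeps end - addr from underflowing. */
        return (addr >= start) && (addr <= end) &&
               (size <= (size_t)(end - addr));
    }
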
Johan Hedberg
3fbf12487c kernel: Introduce a way to specify minimum system heap size
There are several subsystems and boards which require a relatively large
system heap (used by k_malloc()) to function properly. This became even
more notable with the recent introduction of the ACPICA library, which
causes ACPI-using boards to require a system heap of up to several
megabytes in size.

Until now, subsystems and boards have tried to solve this by having
Kconfig overlays which modify the default value of HEAP_MEM_POOL_SIZE.
This works ok, except when applications start explicitly setting values
in their prj.conf files:

$ git grep CONFIG_HEAP_MEM_POOL_SIZE= tests samples|wc -l
     157

The vast majority of values set by current sample or test applications
are much too small for subsystems like ACPI, which results in the
application not being able to run on such boards.

To solve this situation, we introduce support for subsystems to specify
their own custom system heap size requirement. Subsystems do
this by defining Kconfig options with the prefix HEAP_MEM_POOL_ADD_SIZE_.
The final value of the system heap is the sum of the custom
minimum requirements, or the value of the existing HEAP_MEM_POOL_SIZE
option, whichever is greater.

We also introduce a new HEAP_MEM_POOL_IGNORE_MIN Kconfig option which
applications can use to force a lower value than what subsystems have
specified; however, this behavior is disabled by default.

Whenever the minimum is greater than the requested value a CMake warning
will be issued in the build output.

This patch ends up modifying several places outside of kernel code,
since the presence of the system heap is no longer detected using a
non-zero CONFIG_HEAP_MEM_POOL_SIZE value; rather, it's now detected
using a new K_HEAP_MEM_POOL_SIZE value that's evaluated at build time.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2023-12-20 11:01:42 +01:00
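
Under this scheme a subsystem's contribution might look like the
following Kconfig fragment (option name invented for illustration):

    config HEAP_MEM_POOL_ADD_SIZE_MYSUBSYS
        int
        default 4096
        help
          Minimum system heap required by the (hypothetical) MYSUBSYS
          subsystem.
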
Daniel Leung
fa561ccd59 kernel: mmu: no need to expose z_free_page_count
z_free_page_count is only used in one file, so there is
no need to expose it, even to other parts of the kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-12 18:46:21 +00:00
Daniel Leung
34c6b17680 doc: fixed already renamed _Cstart to z_cstart
The function _Cstart has already been renamed to z_cstart,
so update the remaining references to it in various docs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-30 21:01:47 -05:00
Daniel Leung
c972ef1a0f kernel: mm: move kernel mm functions under kernel includes
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are public kernel APIs. The z_*
functions are further separated into the kernel internal
header directory.

Also made a quick change to doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the docs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-20 09:19:14 +01:00
Flavio Ceolin
f9c7a5e6fb kernel: random: Rename early random get function
Rename z_early_boot_rand_get to z_early_rand_get to be consistent
with other early functions.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-13 10:03:53 +03:00
Daniel Leung
2c2d53c7e5 kernel: remove deprecation wording on arch_kernel_init()
The wording deprecating arch_kernel_init() in favor of prep_c()
has never materialized. Various architectures are using it to
perform initialization. So remove the wording.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-09 10:15:49 +02:00
Daniel Leung
dd6a7eb77d kernel: demand_paging: add doc to enum arch_page_location
This adds doc to enum arch_page_location.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-09 10:15:49 +02:00
Daniel Leung
77dc74136c kernel: tls: fix doc on arch_tls_stack_setup()
There was a typo there so fix it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-09 10:15:49 +02:00
Anas Nashif
a1c7bfbc63 kernel: remove unused z_init_thread_base from kernel.h
This API is internal and not used in any way in kernel.h, so move it
back to where it is needed.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-30 18:43:28 +02:00
Anas Nashif
e19f21cb27 kernel: move z_is_thread_essential out of public kernel header
This is a private API to the kernel, so move it out of kernel.h.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-30 18:43:28 +02:00
Anas Nashif
f0c7fbf0f1 kernel: move sched_priq.h to internal/ folder
This header is internal to the kernel and shall not be included directly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-30 18:43:28 +02:00
Peter Mitsis
e6f1090553 kernel: Integrate object core statistics
Integrates object core statistics framework into the following
kernel objects:
  sys_mem_blocks, k_mem_slab
  threads, _cpu, z_kernel

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-09-30 08:04:14 +03:00
Anas Nashif
8dc2746c0e kernel: z_handle_obj_poll_events is internal not kernel.h material
This internal kernel API is misplaced in a public kernel header. Just
make it available to the code using it in the kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-17 07:59:30 -04:00
Peter Mitsis
c7255cf374 kernel: Remove references to _EXPIRED
The _EXPIRED macro is no longer necessary. It is a relic of an older
timeout processing algorithm from several years ago.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-09-15 19:16:20 -04:00
Anas Nashif
fcf50ed6e7 kernel: move timeout_q.h to kernel/include
This is a private kernel header with private kernel APIs, it should not
be exposed in the public zephyr include directory.

One sample remains to be fixed (metairq_dispatch); it currently uses
private APIs from that header, which should not be the case.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-12 12:55:36 -04:00
Anas Nashif
8634c3b444 kernel: move wait_q.h header to be internal
This header does not expose any public APIs, so move it under
kernel/include and change files including it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-12 12:55:36 -04:00
Peter Mitsis
bd5839ec9e kernel: Fix wrap-around check in kernel/mmu.h
Fixes the buffer wrap-around check so that it will not be ignored
by the GNU C compiler.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-08-01 09:51:33 +02:00
Florian Grandel
e256b7d244 kernel: spinlock: LOCKED -> K_SPINLOCK
Let the kernel use the new K_SPINLOCK macro and remove the alias.

Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
2023-07-10 09:27:21 +02:00
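
A minimal usage sketch of K_SPINLOCK:

    #include <zephyr/spinlock.h>

    static struct k_spinlock lock;
    static int counter;

    void safe_increment(void)
    {
        K_SPINLOCK(&lock) {
            /* Lock is held here and released when the block exits. */
            counter++;
        }
    }
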
Florian Grandel
816d09c453 kernel: spinlock: expose LOCKED macro as public API
While the LOCKED pattern is universally useful it can be misused. This
change therefore exposes the LOCKED pattern with extensive usage
documentation to reduce the risk of abuse or unintended deadlock.

Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
2023-07-10 09:27:21 +02:00
Evgeniy Paltsev
16b8191be0 SMP: fix build failure if SMP=y and SYS_CLOCK_EXISTS=n
Fix build failure for the SMP configurations without sysclock.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2023-06-22 06:17:27 -04:00
Gerard Marull-Paretas
48b201cc53 device: make device dependencies optional
Device dependencies are not always required, so make them optional via
CONFIG_DEVICE_DEPS. When enabled, the gen_device_deps script will run so
that dependencies are collected and part of the final image. Related
APIs will be also made available. Since device dependencies are used in
just a few places (power domains), disable the feature by default. When
not enabled, a second linking pass will not be required.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-06-21 09:32:05 +02:00
Gerard Marull-Paretas
5982d83e2a device: s/struct device.handles/struct device.deps
Rename struct device `handles` member to `deps`, in line with previous
renamings in the device API.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-06-21 09:32:05 +02:00
Andy Ross
c3046f417a kernel/sched: Use new barrier and spin APIs
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation.  We have an API for
that now, so put them in:

* The code immediately before arch_switch() needs a write barrier to
  ensure that thread state written by the scheduler is seen to happen
  before the outgoing thread is flagged with a valid switch handle.

* The loop in z_sched_switch_spin() needs a read barrier at the end,
  to make sure the calling context doesn't load state from before the
  other CPU stored the switch handle.

Also, that same spot in switch_spin was spinning with interrupts held,
which means it needs a call to arch_spin_relax() to avoid a FPU state
deadlock on some architectures.

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
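
Reduced to a generic flag/payload pair, the pattern uses the new
barrier API like so (illustrative only, not the scheduler code):

    #include <zephyr/sys/barrier.h>

    static int payload;
    static volatile int ready;

    void producer(void)
    {
        payload = 42;
        barrier_dmem_fence_full(); /* order the write before the flag */
        ready = 1;
    }

    int consumer(void)
    {
        while (ready == 0) {
            /* spin; the real scheduler loop also calls arch_spin_relax() */
        }
        barrier_dmem_fence_full(); /* order loads after seeing the flag */
        return payload;
    }
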
Andy Ross
b89e427bd6 kernel/sched: Rename/redocument wait_for_switch() -> z_sched_switch_spin()
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.

This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
Flavio Ceolin
df4be07b26 kernel: mmu: Fix Xtensa memory alignment issue
z_page_frame can't be packed on Xtensa due to memory alignment
constraints. When this struct is packed it is 5 bytes long, which
will cause a memory alignment problem on Xtensa.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-23 08:54:29 +02:00
Peter Mitsis
e1aa1396d8 kernel: Remove unused absolute symbols
Removes unused absolute symbols that are defined via the
GEN_ABSOLUTE_SYM() macro in the kernel directory.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-04-18 10:51:28 -04:00
Aastha Grover
5537776898 kernel: Add z_sched_wake_thread API
This API wakes up a given thread and is also called from
z_thread_timeout().

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2023-03-09 09:22:21 +01:00
Andy Ross
f3afd5a4c9 kernel/sched: Use kernel timeouts for timeslice expirations
Rework the fragile and ad-hoc computation of timeslice expirations
into per-CPU struct _timeout objects with regular callbacks.  The
expiration callbacks themselves simply set a per-cpu flag (they might
run on any CPU), which gets checked at the end of the timer ISR on
every CPU.

This simplifies logic and removes a bunch of code.  It also fixes at
least three bugs:

1. As @npitre discovered: On SMP, the number of ticks announced on any
given CPU is going to be a subset of all expired ticks.  This broke
the accounting of timeslice ticks, and effectively meant that
timeslicing only worked on SMP on systems where one CPU could hog all
the announcements, and only on that CPU.

2. The bootstrap path to arm the timer driver after setting the first
timeout in an empty list couldn't take into account
sys_clock_elapsed() ticks, as it didn't know whether it was being
called underneath an existing announce loop.  Now this code is no
longer responsible for knowing anything about time slicing at all.

3. Also on SMP, there was a case where two CPUs timeslicing
simultaneously could stomp on each others' timeouts in
z_set_timeout_expiry(), as neither had a way of knowing what the
other's state was.  CPUs could miss their own expiration and have to
wait for the slice expiration on the other CPU.  Now, timeouts are
global objects with simple expiration times, and there's no need for
that function at all.

Signed-off-by: Andy Ross <andyross@google.com>
2023-03-09 09:21:12 +01:00
Peter Mitsis
a06f9ac418 kernel: Remove unused derived offset symbols
Some of the offset symbols that are derived from the macro
GEN_OFFSET_SYM() are not used anywhere in the Zephyr codebase.
Remove them as part of a cleanup effort.

Instances of an associated GEN_OFFSET_SYM() have also been
removed when the resulting macro is no longer referenced.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-22 16:01:32 +01:00
Peter Mitsis
fa23eb49b3 kernel: Remove unused generated offset symbols
Some of the offset symbols generated via the macro GEN_OFFSET_SYM()
are not used anywhere in the Zephyr codebase. Remove them as part of
a cleanup effort.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-22 16:01:32 +01:00
Peter Mitsis
ca58339e16 kernel: Add routine to walk a wait queue
Adds a routine to safely walk a specified wait queue and invoke a
custom callback function on each waiting thread.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-11 06:45:13 +09:00
Kumar Gala
c778eb2a56 smp: Move arrays to use CONFIG_MP_MAX_NUM_CPUS
Move to use CONFIG_MP_MAX_NUM_CPUS for array size declarations instead
of CONFIG_MP_NUM_CPUS.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-17 14:40:12 +09:00
Nicolas Pitre
1f362a81f1 riscv: fix crash resulting from touching the initial stack's guard area
The interrupt stack is used as the system stack during kernel
initialization while IRQs are not yet enabled. The sp register is
set to z_interrupt_stacks + CONFIG_ISR_STACK_SIZE.

CONFIG_ISR_STACK_SIZE only represents the desired usable stack size.
This does not take into account the added guard area. Result is a stack
whose pointer is much closer to the trigger zone than expected when
CONFIG_PMP_STACK_GUARD=y, and the SMP configuration in particular pushes
it over the edge during many CI test cases.

Worse: during early init we're not quite ready to handle exceptions
yet and complete havoc ensues with no meaningful debugging output.

Make sure the early assembly code locates the actual top of the stack
by generating a constant with its true size.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-09-28 07:53:56 +00:00
Hu Zhenyu
57487622f5 kernel: Init the base.slice_ticks for dummy_thread
Fixes #46324
Set dummy_thread->base.slice_ticks to 0 when
CONFIG_TIMESLICE_PER_THREAD is set, to avoid
_current_cpu->slice_ticks becoming a big number.

Signed-off-by: Hu Zhenyu <zhenyu.hu@intel.com>
2022-08-04 19:44:24 -04:00
Simon Hein
02cfbfea51 kernel: comply to coding guidelines MISRA C:2012 Rule 14.4
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)

Use `bool' instead of `int' to represent Boolean values.
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.

This commit is a subset of the original commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a

Signed-off-by: Simon Hein <SHein@baumer.com>
2022-07-21 06:16:16 -04:00
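
The kind of change involved, in miniature:

    #include <stdbool.h>

    void process(int count, bool enabled)
    {
        if (count != 0) {   /* compare with zero instead of `if (count)` */
            /* ... */
        }

        if (enabled) {      /* already essentially Boolean */
            /* ... */
        }

        do {
            /* ... */
        } while (false);    /* `false` instead of `0` */
    }
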
Stephanos Ioannidis
360f810704 kernel: Migrate to K_KERNEL_PINNED_STACK_ARRAY_DECLARE
This commit updates all deprecated `K_KERNEL_PINNED_STACK_ARRAY_EXTERN`
macro usages to use the `K_KERNEL_PINNED_STACK_ARRAY_DECLARE` macro
instead.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-06-20 10:25:52 +02:00
Gerard Marull-Paretas
cffefc818d kernel: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 09:26:20 +02:00
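
The conversion amounts to a prefix change, e.g.:

    /* before: #include <kernel.h> */
    /* after:  */
    #include <zephyr/kernel.h>
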
Bradley Bolen
88ba97fea4 arch: arm: aarch32: cortex_a_r: Add shared FPU support
This adds lazy floating point context switching.  On svc/irq entrance,
the VFP is disabled and a pointer to the exception stack frame is saved
away.  If the esf pointer is still valid on exception exit, then no
other context used the VFP so the context is still valid and nothing
needs to be restored.  If the esf pointer is NULL on exception exit,
then some other context used the VFP and the floating point context is
restored from the esf.

The undefined instruction handler is responsible for saving away the
floating point context if needed.  If the handler is in the first
irq/svc context and the current thread uses the VFP, then the float
context needs to be saved.  Also, if the handler is in a nested context
and the previous context was using the VFP, save the float context.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Flavio Ceolin
d02a1e9879 pm: Only resize power domains
Instead of resizing all device handles, we just resize devices that are
power domains. This means that a power domain has to be declared as
compatible with "power-domain" in its devicetree node.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-04-18 17:25:01 -07:00
Nazar Kazakov
f483b1bc4c everywhere: fix typos
Fix a lot of typos

Signed-off-by: Nazar Kazakov <nazar.kazakov.work@gmail.com>
2022-03-18 13:24:08 -04:00
Andy Ross
3e696896bf kernel: Add "per thread" timeslice mechanism
Zephyr's timeslice implementation has always been somewhat primitive.
You get a global timeslice that applies broadly to the whole bottom of
the priority space, with no ability (beyond that one priority
threshold) to tune it to work on certain threads, etc...

This adds an (optionally configurable) API that allows timeslicing to
be controlled on a per-thread basis: any thread at any priority can be
set to timeslice, for a configurable per-thread slice time, and at the
end of its slice a callback can be provided that can take action.
This allows the application to implement things like responsiveness
heuristics, "fair" scheduling algorithms, etc... without requiring
that facility in the core kernel.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-03-09 13:49:44 -05:00
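
A usage sketch, assuming the per-thread API takes a slice length in
ticks plus an expiry callback and is gated by
CONFIG_TIMESLICE_PER_THREAD:

    #include <zephyr/kernel.h>

    /* Invoked from the timer ISR when the thread's slice expires. */
    static void slice_expired(struct k_thread *thread, void *data)
    {
        ARG_UNUSED(thread);
        ARG_UNUSED(data);
        /* e.g. rotate a fairness queue, collect statistics, ... */
    }

    void enable_slicing(struct k_thread *t)
    {
        k_thread_time_slice_set(t, k_ms_to_ticks_ceil32(5),
                                slice_expired, NULL);
    }
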
Nicolas Pitre
678b76e4b0 kernel/init.c: allow for memset/memcpy alternatives during early boot
Zeroing the BSS and copying data to RAM with regular memset/memcpy may
cause problems when those functions are assuming a fully initialized
system for their optimizations to work e.g. some instructions require
an active MMU, but turning the MMU on needs the .bss section to be
cleared first, etc.

Commit c5b898743a ("aarch64: Fix alignment fault on z_bss_zero()")
provides a detailed explanation of such a case.

Replacing z_bss_zero() with an architecture specific one is problematic
as the former may see new sections added to it that would be missed by
the latter. The same reasoning goes for z_data_copy().

Let's make maintenance much easier by providing weak versions of
memset/memcpy that can be overridden by architecture-specific safe
versions when needed.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-21 21:00:12 -05:00
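
An architecture can then override the weak default; a sketch, assuming
the weak symbol is named z_early_memset() (the commit message does not
name it):

    #include <stddef.h>

    /* Arch-specific replacement for the weak kernel default: a plain
     * byte loop that is safe before the MMU and caches are set up.
     */
    void z_early_memset(void *dst, int c, size_t n)
    {
        unsigned char *d = dst;

        while (n > 0) {
            *d++ = (unsigned char)c;
            n--;
        }
    }
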
Krzysztof Chruscinski
1da97e1374 kernel: Add function for calculating stack usage
Extract the stack usage calculation from k_thread_stack_space_get into
z_stack_space_get so it can also be used for the interrupt stack.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2022-02-21 20:57:17 -05:00
Daniel Leung
88cfd3343d kernel: arch: no need for #ifdef MMU in header
There is no need to use conditional compilation for the function
prototypes in the kernel architecture header file, so remove it.
An added bonus is that these functions can appear in documentation
without being explicitly enabled in predefines during the doc build.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-18 19:18:30 -05:00
Peter Mitsis
4eb1dd02cc kernel: extend CPU runtime stats
Extends the CPU usage runtime stats to track current, total, peak
and average usage (as bounded by the scheduling of the idle thread).
This permits a developer to obtain more system information if desired
to tune the system.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
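
The stats are read back through the public API, for example:

    #include <zephyr/kernel.h>
    #include <zephyr/sys/printk.h>

    void show_usage(k_tid_t tid)
    {
        k_thread_runtime_stats_t stats;

        if (k_thread_runtime_stats_get(tid, &stats) == 0) {
            /* The extended fields (current/peak/average) depend on
             * the runtime-stats Kconfig options being enabled.
             */
            printk("cycles: %llu\n",
                   (unsigned long long)stats.execution_cycles);
        }
    }
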
Peter Mitsis
5deaffb2ee kernel: update z_sched_thread_usage()
This commit does two things to the z_sched_thread_usage(). First,
it updates the API so that it accepts a pointer to the runtime
stats instead of simply returning the usage cycles. This gives it
the flexibility to retrieve additional statistics in the future.

Second, the runtime stats are only updated if the specified thread
is the current thread running on the current core.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Berend Ozceri
b208e5811e kernel/swap: Initialize dummy thread's resource pool
The resource pool of the short-lived dummy thread "stub" may be
inherited by other threads created during system initialization. This
commit initializes this resource pool to NULL or the system pool to
ensure that a well-defined resource pool propagates to other threads
that inherit it from the dummy thread.

Fixes #41482.

Signed-off-by: Berend Ozceri <berend@recogni.com>
2022-01-06 11:57:18 -05:00
Daniel Leung
650a629b08 debug: gdbstub: remove start argument from z_gdb_main_loop()
Storing the state where this is the first GDB break can be done
in the main GDB stub code. There is no need to store the state
in the architecture layer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Flavio Ceolin
623ed5ae29 pm: Remove invalid comments
Remove comments referencing an old function / behavior.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-29 19:37:55 -05:00
Krzysztof Chruscinski
c0808e3f59 logging: Minimal mode configuration cleanup
Remove LOG_MINIMAL kconfig option which was confusing
since LOG_MODE_MINIMAL existed. LOG_MINIMAL was used to
force minimal mode but because of invalid dependencies
it was leading to issues.

Refactored code to use LOG_MODE_MINIMAL everywhere and
renamed LOG_MINIMAL to LOG_DEFAULT_MINIMAL, which affects the
default logging mode (which can still be changed later in the conf
file or in menuconfig).

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-11-20 11:58:40 -05:00
Andy Ross
f169c5bc13 kernel: Swap RUNTIME_STATS implementation
Clean up RUNTIME_STATS to separate the API from the individual data
backends.  Use the SCHED_THREAD_USAGE tracking instead of the original
for execution_cycles.  Move the kconfig for that into the runtime
stats menu, since it's part of the family now.

Also remove a lot of needless #if's around the declarations.  Unused
structs and uncalled functions don't need to be explicitly hidden.  An
attempt to access a non-existent field (e.g. "execution_cycles" if
that isn't configured) provides all the build time validation we need.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
40d12c142d kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:

* Correctly synchronized: you can't race against a running thread
  (potentially on another CPU!) while querying its usage.

* Realtime results: you get the right answer always, up to timer
  precision, even if a thread has been running for a while
  uninterrupted and hasn't updated its total.

* Portable, no need for per-architecture code at all for the simple
  case. (It leverages the USE_SWITCH layer to do this, so won't work
  on older architectures)

* Faster/smaller: minimizes use of 64 bit math; lower overhead in
  thread struct (keeps the scratch "started" time in the CPU struct
  instead).  One 64 bit counter per thread and a 32 bit scratch
  register in the CPU struct.

* Standalone.  It's a core (but optional) scheduler feature, no
  dependence on para-kernel configuration like the tracing
  infrastructure.

* More precise: allows architectures to optionally call a trivial
  zero-argument/no-result cdecl function out of interrupt entry to
  avoid accounting for ISR runtime in thread totals.  No configuration
  needed here, if it's called then you get proper ISR accounting, and
  if not you don't.

For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Flavio Ceolin
9444480c7b pm: Better return type for pm_system_suspend
Instead of returning PM_STATE_ACTIVE when the cpu didn't enter a
low power state, and a different state when it entered one but has
already left it and is active again, this changes
pm_system_suspend to return true when the cpu has entered a low power
state and false otherwise.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-06 10:21:53 -04:00
Andy Ross
7dee7a6139 kernel/sched: Fix race with thread return values
There was a brief (but seen in practice on real apps on real
hardware!) race with the switch-based z_swap() implementation.  The
thread return value was being initialized to -EAGAIN after the
enclosing lock had been released.  But that lock is supposed to be
atomic with the thread suspend.

This opened a window for another racing thread to come by and "wake
up" our pending thread (which is fine on its own), set its return
value (e.g. to 0 for success) and then have that value clobbered by
the thread continuing to suspend itself outside the lock.

Melodramatic aside: I continue to hate this
arch_thread_return_value_set() API; it needs to die.  At best it's a
mild optimization on a handful of architectures (e.g. x86 implements
it by writing to the EAX register save slot in the context block).
Asynchronous APIs are almost always worse than synchronous ones, and
in this case it's an async operation that races against literal
context switch code that can't use traditional locking strategies.

Fixes #39575

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-10-25 12:31:06 +02:00
Andy Ross
b11e796c36 kernel/sched: Add CONFIG_CPU_MASK_PIN_ONLY
Some SMP applications have threading designs where every thread
created is always assigned to a specific CPU, and never want to
schedule them symmetrically across CPUs under any circumstance.

In this situation, it's possible to optimize the run queue design a
bit to put a separate queue in each CPU struct instead of having a
single global one.  This is probably good for a few cycles per
scheduling event (maybe a bit more on architectures where cache
locality can be exploited) in circumstances where there is more than
one runnable thread.  It's a mild optimization, but a basically simple
one.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
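
With this option, each thread is pinned to a single CPU via the
existing CPU mask API before it starts, e.g.:

    #include <zephyr/kernel.h>

    void pin_and_start(k_tid_t tid, int cpu)
    {
        /* Requires CONFIG_SCHED_CPU_MASK; with CONFIG_CPU_MASK_PIN_ONLY
         * the mask must select exactly one CPU before the thread runs.
         */
        k_thread_cpu_mask_clear(tid);
        k_thread_cpu_mask_enable(tid, cpu);
        k_thread_start(tid);
    }
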
Daniel Leung
049e3bac73 kernel: add -ENOTSUP doc to arch_float_en-/dis-able()
Some architectures already return -ENOTSUP when these functions
are called, so add this return value to the API doc.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-09-03 10:00:02 -04:00
Andy Ross
c6d077e1bc soc: intel_adsp/cavs_v25: Add CPU halt and relaunch APIs
Add a SOC API to allow for application control over deep idle power
states.  Note that the hardware idle entry happens out of the WAITI
instruction, so the application has to be responsible for ensuring
that the CPU to be halted actually reaches idle deterministically.  Lots
of warnings in the docs to this effect.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-03 07:19:34 -04:00
Daniel Leung
e88afd2c37 kernel: mmu: pin/unpin boot sections during boot process
During boot process, the boot sections need to be pinned in
memory to prevent them from being paged out (to avoid
pages being paged out and immediately paged in again).
Once the boot process is completed (just before calling main()),
the boot sections can be unpinned so the memory can be
used for demand paging for paging in data pages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Guennadi Liakhovetski
45b70e1500 smp: limit the scope of some SMP-only functions
z_smp_init() is only available if CONFIG_SMP is defined, and
smp_timer_init() also depends on two Kconfig parameters. Make it
conditional in cavs_timer.c as well, and clarify some SMP-related
comments there.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-07-23 16:06:54 -04:00
Daniel Leung
dbc0be487f kernel: use proper macro to declare extern interrupt stacks
The z_interrupt_stacks was declared extern in the kernel internal
header file using the same macro which defines the same stack
array but with an added "extern" in front. This macro adds
alignment and section attribute which are actually not the same
as the actual stack array defined in kernel/init.c. The section
name used in the section attribute contains the file name where
the stack array is defined or extern declared. So the same
symbol, in this case z_interrupt_stacks, has different
attributes in two places, and GCC 11 starts to complain about
this. So use the newly introduced macro to extern declare
the stack array without adding/replacing any symbol attributes.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-22 07:24:11 -05:00
Daniel Leung
dfa4b7e375 kernel: mmu: z_backing_store* to k_mem_paging_backing_store*
These functions are those that need to be implemented by a backing
store outside the kernel. Promote them from z_* so they can be
included in documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Daniel Leung
31c362d966 kernel: mmu: rename z_eviction* to k_mem_paging_eviction*
These functions and data structures are those that need to be
implemented by the eviction algorithm and application outside the
kernel. Promote them from z_* so they can be included in
documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Andy Ross
851d14afc8 kernel/sched: Remove "cooperative scheduling only" special cases
The scheduler has historically had an API where an application can
inform the kernel that it will never create a thread that can be
preempted, and the kernel and architecture layer would use that as an
optimization hint to eliminate some code paths.

Those optimizations have dwindled to almost nothing at this point, and
they're now objectively a smaller impact than the special casing that
was required to handle the idle thread (which, obviously, must always
be preemptible).

Fix this by eliminating the idea of "cooperative only" and ensuring
that there will always be at least one preemptible priority with value
>=0.  CONFIG_NUM_PREEMPT_PRIORITIES now specifies the number of
user-accessible priorities other than the idle thread.

The only remaining workaround is that some older architectures (and
also SPARC) use the CONFIG_PREEMPT_ENABLED=n state as a hint to skip
thread switching on interrupt exit.  So detect exactly those platforms
and implement a minimal workaround in the idle loop (basically "just
call swap()") instead, with a big explanation.

Note that this also fixes a bug in one of the philosophers samples,
where it would ask for 6 cooperative priorities but then use values -7
through -2.  It was assuming the kernel would magically create a
cooperative priority for its idle thread, which wasn't correct even
before.

Fixes #34584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-24 23:38:16 -04:00
Andy Ross
bd077561d0 kernel/swap: Add assertion to catch lock-breaking context switches
Our z_swap() API takes a key returned from arch_irq_lock() and
releases it atomically with the context switch.  Make sure that the
action of the unlocking is to unmask interrupts globally.  If
interrupts would still be masked then that means there is an OUTER
interrupt lock still held, and the code that locked it surely doesn't
expect the thread to be suspended and interrupts unmasked while it's
held!

Unfortunately, this kind of mistake is very easy to make.  We should
catch that with a simple assertion.  This is essentially a crude
Zephyr equivalent of the extremely common "BUG: scheduling while
atomic" error in Linux drivers (just google it).

The one exception made is the circumstance where a thread has already
aborted itself.  At that stage, whatever upthread lock state might
have existed will have already been messed up, so there's no value in
our asserting here.  We can't catch all bugs, and this can actually
happen in error handling and/or test frameworks.

Fixes #33319

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-17 15:27:37 -04:00
Daniel Leung
1310ad6b0e linker: add bits for pinned regions
This adds the necessary bits for linker scripts and source code
to specify which symbols need to be pinned in memory. This is
needed for demand paging, as some functions and data must reside
in memory at all times and cannot be paged out (e.g. the paging,
scheduler, and interrupt routines).

This is up to the arch/SoC/board to define the sections in
their linker scripts as the pinned section may need special
alignment which cannot be done in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
d812728ec4 linker: add bits for boot regions
This adds the necessary bits for linker scripts and source code
to specify which symbols are needed for boot process so they
can be grouped together.

One use of this is to group boot related code and data so these
won't be interleaved with the rest of the kernel and application,
for better caching.

This is a must for demand paging as some functions and data
must be available during the boot process and before the memory
manager is initialized. During this time, paging cannot be used
so symbols linked in virtual memory space are unavailable.

This is up to the arch/SoC/board to define the sections in
their linker scripts as section may need special alignment
which cannot be done in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Torbjörn Leksell
f17144349b Tracing: Thread tracing
Add thread tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Daniel Leung
085d3768e1 kernel: mmu: introduce arch_page_phys_get()
This adds a new function prototype for arch_page_phys_get()
which will be used to translate mapped virtual addresses back
to physical memory addresses. This is needed for the future
k_mem_unmap() function which requires this to find
the corresponding page frame. It is faster to look through
the page tables instead of doing linear search of the page
frame array.

A weak function is provided in case arch_page_phys_get()
is not implemented at the arch level. This simply goes
through all the page frame and find the one which has
mapped to the virtual address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
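
A sketch of the intended use; the include path for the prototype is
assumed (it lives in the arch interface header):

    #include <zephyr/arch/arch_interface.h> /* assumed location */
    #include <stdint.h>

    int lookup_phys(void *virt, uintptr_t *phys)
    {
        /* Returns 0 and fills *phys when virt is mapped, or an error
         * otherwise; the weak fallback scans the page frames instead.
         */
        return arch_page_phys_get(virt, phys);
    }
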
Jennifer Williams
ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the config BOOT_TIME_MEASUREMENT and corresponding #ifdef'd code
throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...) which
holds the extern hooks for z_timestamp_main and z_timestamp_idle in the
removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00
Daniel Leung
1117169980 kernel: generate placeholders for kobj tables before final build
Due to the use of gperf to generate hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example, pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Due to uncertainty on the size of these blocks, more space is
being reserved which could result in wasted space. Though, this
retains the use of hash table for faster lookup.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-27 13:32:00 -04:00
Nicolas Pitre
f97d12936e kernel: add an architecture specific structs header
Add the ability to define architecture specific structures, notably
the ability to extend struct _cpu with per-CPU arch-specific stuff that
can be accessed with _current_cpu->arch.* similarly to _current->arch.*
for per-thread architecture data.

This is opt-in for architectures that want to benefit from this,
otherwise empty defaults are provided. A placeholder for ARM64 is
included to show the pattern.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-21 09:03:47 -04:00
Carlo Caione
64dfa69681 aarch64: Remove useless _curr_cpu struct
Currently _curr_cpu is only used by the get_cpu macro to quickly access
the cpu struct. This is not really necessary because we can access the
struct by directly referencing &(_kernel.cpus[cpu_num]) in assembly.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:10:10 -04:00
Daniel Leung
8eea5119d7 kernel: mmu: demand paging execution time histogram
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Anas Nashif
25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif
bbbc38ba8f kernel: Make both operands of operators of same essential type category
Add a 'U' suffix to values when computing and comparing against
unsigned variables and other related fixes of the same MISRA rule (10.4)

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif
d8f698703b kernel: idle/z_sched_prio_cmp: match implementation to prototype
The identifiers used in the declaration and definition of a function
shall be identical [MISRAC2012-RULE_8_3-b]

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-29 07:52:42 -04:00
Katsuhiro Suzuki
59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU for a thread, the
counterpart of the existing k_float_disable() API. It also adds an
empty arch_float_enable() to each architecture that has
arch_float_disable(). The arc and riscv architectures already
implement arch_float_enable(), so those implementations are untouched.

Motivation: the current Zephyr implementation does not allow using the
FPU on main and other system threads such as the work queue. Users
need to create another thread with K_FP_REGS for floating point
programs. Users can use the FPU more easily if they can enable it on
running threads.

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
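
Usage sketch of the new API (architectures without runtime FPU
enabling return -ENOTSUP, per the doc-fix commit earlier in this log):

    #include <zephyr/kernel.h>
    #include <errno.h>

    void enable_fpu_on(struct k_thread *thread)
    {
        int ret = k_float_enable(thread, 0);

        if (ret == -ENOTSUP) {
            /* This architecture cannot enable the FPU at runtime. */
        }
    }
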
Enjia Mai
4aed856d7f kernel: smp: Remove unused internal API z_smp_reacquire_global_lock()
The internal function z_smp_reacquire_global_lock() is not used
anywhere inside the zephyr code, so remove it.

Fixes #33273.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2021-03-14 18:32:26 -04:00