- Remove SYS_ prefix
- Shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE
and use PM_ as the prefix for all PM-related Kconfigs
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Use the GEN_OFFSET_SYM macro to generate absolute symbols for the
_callee_saved struct and use these new symbols in the assembly code.
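For illustration, an offsets.c entry using this macro might look roughly
as follows (the member name shown is illustrative, not necessarily one
from the actual port):

    /* offsets.c -- sketch only */
    #include <gen_offset.h>
    #include <kernel_structs.h>

    GEN_ABS_SYM_BEGIN(_OffsetAbsSyms)

    /* Emits an absolute symbol such as ___callee_saved_t_sp_OFFSET,
     * which assembly code can use instead of a hard-coded offset.
     */
    GEN_OFFSET_SYM(_callee_saved_t, sp);

    GEN_ABS_SYM_END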
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Since the tracing of threads being switched in/out has the same
instrumentation points, we can roll the tracing function calls
into the thread stats gathering functions.
This avoids duplicating code just to call another function.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the bits to gather the first thread runtime statistic:
thread execution time. It provides a rough idea of how much time
a thread spends in active execution. Currently it is not being
used, pending the following commits where it is combined with the trace
points on context switch, as they instrument the same locations.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the common struct fields and functions to support
the implementation of thread local storage in individual
architectures. This uses the thread stack to store TLS data.
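Once an architecture wires this up and thread local storage is enabled
(the Kconfig name CONFIG_THREAD_LOCAL_STORAGE is assumed here), ordinary
C thread-local variables work per thread; a minimal sketch:

    #include <kernel.h>

    /* Each thread gets its own copy, carved out of the thread's stack. */
    static __thread int per_thread_counter;

    static void worker(void *p1, void *p2, void *p3)
    {
            per_thread_counter++;   /* touches only this thread's copy */
    }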
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The core kernel does not use this yet, but it will be later used
as part of infrastructure for memory-mapping stacks, as detailed
in #28899.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. A naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
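A sketch of how the new attribute might be used; the buffer and its size
are hypothetical:

    #include <stdint.h>

    /* CPU-local scratch data that is safe to keep in cached (incoherent)
     * memory; all other writable data defaults to the coherent region.
     */
    static __incoherent uint8_t scratch_buf[256];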
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Strictly speaking, any access to a mem domain or the
partitions it contains should be serialized on this lock.
Architecture code may need to grab this lock if it is
using this data during, for example, context switches,
especially if they support SMP as locking interrupts
is not enough.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
When threads exited we were leaving dangling references to
them in the domain's mem_domain_q.
z_thread_single_abort() now calls into the memory domain
code via z_mem_domain_exit_thread() to take the thread off.
The thread setup code now invokes z_mem_domain_init_thread(),
avoiding extra checks in k_mem_domain_add_thread(), since we know
the object isn't currently a member of a domain.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Both operands of an operator in which the usual arithmetic conversions
are performed shall have the same essential type category.
The changes convert integer constants to unsigned integer
constants.
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Fixes races where threads on another CPU are joining the
exiting thread, since it could still be running when
the joiners wake up on a different CPU.
Fixes problems where the thread object is still being
used by the kernel when the fn_abort() function is called,
preventing the thread object from being recycled or
freed back to a slab pool.
Fixes a race where a thread is aborted from one CPU while
it self-aborts on another CPU, which was previously worked
around with a busy-wait.
Precedent for doing this comes from FreeRTOS, which also
performs final thread cleanup in the idle thread.
Some logic in z_thread_single_abort() has been rearranged such that
when we release sched_spinlock, the thread object pointer
is never dereferenced by the kernel again; join waiters
or fn_abort() logic may free it immediately.
An assertion has been added to z_thread_single_abort() to ensure
it never gets called with thread == _current outside of an ISR.
Some logic has been added to ensure z_thread_single_abort()
tasks don't run more than once.
Fixes: #26486
Related to: #23063 #23062
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This code had one purpose only: feeding timing information into a test,
and was not used by anything else. The custom trace points unfortunately
were not accurate, and this test was delivering information that conflicted
with other tests we have due to the placement of such trace points in the
architecture and kernel code.
For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.
Furthermore, much of the assembly code used had issues.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
* Move switched_in into the arch context switch assembly code,
which will correctly record the switched_in information.
* Add switched_in/switched_out for context switch in irq exit.
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
Memory mapping, for now, will be a private kernel API
and is not intended to be application-facing at this time.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
It implements the GDB remote protocol to talk with a host GDB during the
debug session. The implementation is divided into three layers:
1 - The top layer that is responsible for the GDB remote protocol.
2 - An architecture-specific layer responsible for reading/writing
registers, setting breakpoints, handling exceptions, ...
3 - A transport layer used to communicate with the host.
Communication with GDB on the host is synchronous: the system
stops execution, waits for instructions, and resumes execution after
a "continue" or "step" command. The one exception in the protocol is
when the host sends a packet to cause an interruption, usually triggered
by a Ctrl-C. This implementation ignores that instruction, though.
This initial work supports only x86, using a UART as the backend.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Now that the device_api attribute is unmodified at runtime, as well as all
the other attributes, it is possible to switch all device driver
instances to be constant.
A coccinelle rule is used for this:
@r_const_dev_1
disable optional_qualifier
@
@@
-struct device *
+const struct device *
@r_const_dev_2
disable optional_qualifier
@
@@
-struct device * const
+const struct device *
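For application code the visible change is simply that device pointers
become const; for example (binding name hypothetical):

    #include <device.h>
    #include <drivers/uart.h>

    void say_hello(void)
    {
            const struct device *uart = device_get_binding("UART_0");

            if (uart != NULL) {
                    uart_poll_out(uart, '!');
            }
    }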
Fixes #27399
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
We make a policy change here: all threads are members of a
memory domain, never NULL. We introduce a default memory domain
for threads that haven't been assigned to or inherited another one.
Primary motivation for this change is better MMU support, as
one common configuration will be to maintain page tables at
the memory domain level.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This adds a very primitive coredump mechanism under subsys/debug
where, during a fatal error, register and memory content can be
dumped to a coredump backend. One such backend, utilizing the log
module for output, is included. Once the coredump log is converted
to a binary file, it can be used together with the ELF output file as
inputs to an overly simplified implementation of a GDB server.
This GDB server can be attached via the target remote command of
GDB and will be serving register and memory content. This allows
using GDB to examine stack and memory where the fatal error
occurred.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Move tracing switched_in and switched_out to the architecture code and
remove duplications. This changes swap tracing for x86, xtensa.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.
Two new arch defines are introduced:
- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN
New public declaration macros:
- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF
If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.
Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
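A minimal sketch of the new macros for a supervisor-only thread (names,
size and priority are illustrative):

    #include <kernel.h>

    K_KERNEL_STACK_DEFINE(worker_stack, 1024);
    static struct k_thread worker_thread;

    static void worker_entry(void *p1, void *p2, void *p3)
    {
            /* supervisor-only work */
    }

    void start_worker(void)
    {
            k_thread_create(&worker_thread, worker_stack,
                            K_KERNEL_STACK_SIZEOF(worker_stack),
                            worker_entry, NULL, NULL, NULL,
                            K_PRIO_PREEMPT(5), 0, K_NO_WAIT);
    }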
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.
Based on #24467 authored by Flavio Ceolin.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations;
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.
thread->stack_info is now set before arch_new_thread()
is invoked, and z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.
thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.
CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.
thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.
Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- simplify dummy thread initialization to a kswap.h
inline function
- use the same inline function for both early boot and
SMP setup
- add a note on necessity of the dummy thread even if
a custom swap to main is implemented
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This commit renames the Kconfig `FP_SHARING` symbol to `FPU_SHARING`,
since this symbol specifically refers to the hardware FPU sharing
support by means of FPU context preservation, and the "FP" prefix is
not fully descriptive of that, leaving room for ambiguity.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit renames the Kconfig `FLOAT` symbol to `FPU`, since this
symbol only indicates that the hardware Floating Point Unit (FPU) is
used and does not imply and/or indicate the general availability of
toolchain-level floating point support (i.e. this symbol is not
selected when building for an FPU-less platform that supports floating
point operations through the toolchain-provided software floating point
library).
Moreover, given that the symbol that indicates the availability of FPU
is named `CPU_HAS_FPU`, it only makes sense to use "FPU" in the name of
the symbol that enables the FPU.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
The core kernel z_setup_new_thread() calls into arch_new_thread(),
which calls back into the core kernel via z_new_thread_init().
Move everything that doesn't have to be in z_new_thread_init() to
z_setup_new_thread() and convert to an inline function.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions. These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, when POLL was
enabled, a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure. But k_poll() does not fail
spuriously, so the loop was removed.
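A short sketch of what the new API looks like from application code (the
semaphore and the timeout argument are assumed for the example):

    #include <kernel.h>

    K_SEM_DEFINE(my_sem, 0, 1);

    void consumer(k_timeout_t timeout)
    {
            /* Timeouts are constructed at the call site; no implicit ms units */
            if (k_sem_take(&my_sem, K_MSEC(100)) != 0) {
                    /* timed out */
            }

            k_sleep(K_SECONDS(1));   /* k_sleep() now takes a k_timeout_t */
            k_msleep(50);            /* millisecond variant kept for convenience */

            /* K_FOREVER is no longer a plain integer; compare with the predicate */
            if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
                    k_sem_take(&my_sem, K_FOREVER);
            }
    }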
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This never needed to be put in a separate gperf table.
Privilege mode stacks can be generated by the main
gen_kobject_list.py logic, which we do here.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.
There is now a centralized declaration in kernel_internal.h.
On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.
The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.
The extern definition of the main thread stack is now removed,
this doesn't need to be in a header.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Use of the _current_cpu pointer cannot be done safely in a preemptible
context. If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.
Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (most important being
a few uses of _current outside of locks, and the arch_is_in_isr()
implementation).
Note that the resulting _current expression now requires locking and
is going to be somewhat slower. Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.
Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These arch_timing_ defines get used in certain timer
drivers and need to be in the public include space,
and not the private kernel headers.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Similar to the suspend refactoring earlier, this really needs to be
done in an atomic block. There were two confirmable races here,
though it's not completely clear either was being hit in practice:
1. The bit operations in z_mark_thread_as_started() aren't atomic, so
they need to be protected.
2. The intermediate state in z_ready_thread() could result in a dead
or suspended thread being added to the ready queue if another
context tried a simultaneous abort or suspend.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Fixes an issue where calling z_thread_malloc() would
borrow the resource pool of whatever thread happened
to be interrupted at the time.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
On SMP, there is an inherent race when swapping: the old thread adds
itself back to the run queue before calling into the arch layer to do
the context switch. The former is properly synchronized under the
scheduler lock, and the latter operates with interrupts locally
disabled. But until somewhere in the middle of arch_switch(), the old
thread (that is in the run queue!) does not have complete saved state
that can be restored.
So it's possible for another CPU to grab a thread before it is saved
and try to restore its unsaved register contents (which are garbage --
typically whatever state it had at the last interrupt).
Fix this by leveraging the "swapped_from" pointer already passed to
arch_switch() as a synchronization primitive. When the switch
implementation writes the new handle value, we know the switch is
complete. Then we can wait for that in z_swap() and at interrupt
exit.
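A rough sketch of the wait described above; the helper name and field
access are illustrative rather than the exact kernel code:

    static inline void wait_for_switch(struct k_thread *thread)
    {
            volatile void **handle = (volatile void **)&thread->switch_handle;

            /* Spin until the outgoing CPU's arch_switch() publishes the new
             * handle, signalling that the register state is fully saved.
             */
            while (*handle == NULL) {
            }
    }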
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original intent was that the output handle be written through the
pointer in the second argument, though not all architectures used that
scheme. As it turns out, that write is becoming a synchronization
signal, so it's no longer optional.
Clarify the documentation in arch_switch() about this requirement, and
add an instruction to the x86_64 context switch to implement it as
originally envisioned.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Improve positioning of tracing calls. Avoid multiple calls and missing
events because of complex logic. Trace events where things actually
happen.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
We have been using thread, th and t for thread variables, making the code
less readable, especially when we use t for timeouts and other time-
related variables. Just use thread where possible and keep things
consistent.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Builds of docs with doxygen 1.8.16 have a number of warnings of the form:
'warning: unbalanced grouping commands'. Fix those warnings by either
balancing the grouping command or removing it.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
On some platforms the size of size_t can be different from 4 bytes. Use
sys_rand_get() to properly fill this variable.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.
This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This commit modifies the z_new_thread_init function, which was
previously declared as ALWAYS_INLINE, to be a normal function.
The z_new_thread_init function is only called by the z_arch_new_thread
function and, since this is not a performance-critical function, there
is no good justification for inlining it.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.
The refactoring strategy used in this commit is detailed in the issue
This commit introduces the following major changes:
1. Establish a clear boundary between private and public headers by
removing "kernel/include" and "arch/*/include" from the global
include paths. Ideally, only kernel/ and arch/*/ source files should
reference the headers in these directories. If these headers must be
used by a component, these include paths shall be manually added to
the CMakeLists.txt file of the component. This is intended to
discourage applications from including private kernel and arch
headers either knowingly or unknowingly.
- kernel/include/ (PRIVATE)
This directory contains the private headers that provide private
kernel definitions which should not be visible outside the kernel
and arch source code. All public kernel definitions must be added
to an appropriate header located under include/.
- arch/*/include/ (PRIVATE)
This directory contains the private headers that provide private
architecture-specific definitions which should not be visible
outside the arch and kernel source code. All public architecture-
specific definitions must be added to an appropriate header located
under include/arch/*/.
- include/ AND include/sys/ (PUBLIC)
This directory contains the public headers that provide public
kernel definitions which can be referenced by both kernel and
application code.
- include/arch/*/ (PUBLIC)
This directory contains the public headers that provide public
architecture-specific definitions which can be referenced by both
kernel and application code.
2. Split arch_interface.h into "kernel-to-arch interface" and "public
arch interface" divisions.
- kernel/include/kernel_arch_interface.h
* provides private "kernel-to-arch interface" definition.
* includes arch/*/include/kernel_arch_func.h to ensure that the
interface function implementations are always available.
* includes sys/arch_interface.h so that public arch interface
definitions are automatically included when including this file.
- arch/*/include/kernel_arch_func.h
* provides architecture-specific "kernel-to-arch interface"
implementation.
* only the functions that will be used in kernel and arch source
files are defined here.
- include/sys/arch_interface.h
* provides "public arch interface" definition.
* includes include/arch/arch_inlines.h to ensure that the
architecture-specific public inline interface function
implementations are always available.
- include/arch/arch_inlines.h
* includes architecture-specific arch_inlines.h in
include/arch/*/arch_inline.h.
- include/arch/*/arch_inline.h
* provides architecture-specific "public arch interface" inline
function implementation.
* supersedes include/sys/arch_inline.h.
3. Refactor kernel and the existing architecture implementations.
- Remove circular dependency of kernel and arch headers. The
following general rules should be observed:
* Never include any private headers from public headers
* Never include kernel_internal.h in kernel_arch_data.h
* Always include kernel_arch_data.h from kernel_arch_func.h
* Never include kernel.h from kernel_struct.h either directly or
indirectly. Only add the kernel structures that must be referenced
from public arch headers in this file.
- Relocate syscall_handler.h to include/ so it can be used in
public code. This is necessary because a lot of user-mode public code
references the functions defined in this header.
- Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
necessary to provide architecture-specific thread definition for
'struct k_thread' in kernel.h.
- Remove any private header dependencies from public headers using
the following methods:
* If dependency is not required, simply omit
* If dependency is required,
- Relocate a portion of the required dependencies from the
private header to an appropriate public header OR
- Relocate the required private header to make it public.
This commit supersedes #20047, addresses #19666, and fixes #3056.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
Duplicate definitions elsewhere have been removed.
A couple functions which are defined by the arch interface
to be non-inline, but were implemented inline by native_posix
and intel64, have been moved to non-inline.
Some missing conditional compilation for z_arch_irq_offload()
has been fixed, as this is an optional feature.
Some massaging of native_posix headers to get everything
in the right scope.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The intel64 switch implementation doesn't actually use a switch handle
per se, just the raw thread struct pointers which get stored into the
handle field. This works fine for normally initialized threads, but
when switching out of a dummy thread at initialization, nothing has
initialized that field and the code was dumping registers into the
bottom of memory through the resulting NULL pointer.
Fix this by skipping the load of the field value and just using an
offset instead to get the struct address, which is actually slightly
faster anyway (a SUB immediate instruction vs. the load).
Actually for extra credit we could even move the switch_handle field
to the top of the thread struct and eliminate the instruction
entirely, though if we did that it's probably worth adding some
conditional code to make the switch_handle field disappear entirely.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
include/sys/arch_inlines.h will contain all architecture APIs
that are used by public inline functions and macros,
with implementations deriving from include/arch/cpu.h.
kernel/include/arch_interface.h will contain everything
else, with implementations deriving from
arch/*/include/kernel_arch_func.h.
Instances of duplicate documentation for these APIs have been
removed; implementation details have been left in place.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We now define z_is_idle_thread_object() in ksched.h,
and the repeated definitions of a function that does
the same thing have been changed to just use the common
definition.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This takes an entry point and not a thread as argument.
Rename to z_is_idle_thread_entry() to make this clearer.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The main and idle threads, and their associated stacks,
were being referenced in various parts of the kernel
with no central definition. Expose these in kernel_internal.h
and namespace with z_ appropriately.
The main and idle threads were being defined statically,
with another variable exposed to contain their pointer
value. This wastes a bit of memory and isn't accessible
to user threads anyway; just expose the actual thread
objects.
The redundant MAIN_STACK_SIZE and IDLE_STACK_SIZE defines
in init.c have been removed; just use the Kconfigs they derive
from.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These are renamed to z_timestamp_main and z_timestamp_idle,
and now specified in kernel_internal.h.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.
z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().
References from test cases changed to k_is_in_isr().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Global variables related to timing information have been
renamed to be prefixed with z_arch, with naming arranged
in increasing order of specificity.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The callback function has been ignored in z_timeout_init() since the
timer rework in fall 2018. Passing real handlers to it in code is
distracting when they will be overridden by whatever callback is
provided in z_add_timeout().
As this function is an internal API deprecation is not necessary.
Remove the parameter and change all call sites to drop the argument.
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
In uniprocessor mode, the kernel knows when a context switch "is
coming" because of the cache optimization and can use that to do
things like update time slice state. But on SMP the scheduler state
may be updated on the other CPU at any time, so we don't know that a
switch is going to happen until the last minute.
Expose reset_time_slice() as a public function and call it when needed
out of z_swap().
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
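For reference, a hand-written verification function under the new scheme
looks roughly like this (using k_sem_take as a familiar example; exact
qualifiers and the generated include name may differ):

    static inline int z_vrfy_k_sem_take(struct k_sem *sem, s32_t timeout)
    {
            /* Validate the untrusted object pointer, then call the impl */
            Z_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM));
            return z_impl_k_sem_take(sem, timeout);
    }
    #include <syscalls/k_sem_take_mrsh.c>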
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The mutex locking was written to use k_sched_lock(), which doesn't
work as a synchronization primitive if there is another CPU running
(it prevents the current CPU from preempting the thread; it says
nothing about what the others are doing).
Use the pre-existing spinlock for all synchronization. One wrinkle is
that the priority code needed to call z_thread_priority_set(),
which is a rescheduling call that cannot be made with a lock held.
So that got split out with a low level utility that can update the
schedule state but allow the caller to defer yielding until later.
Fixes #17584
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The current API was assuming too much, in that it expected that
arch-specific memory domain configuration is only maintained
in some global area, and updates to domains that are not currently
active have no effect.
This was true when all memory domain state was tracked in page
tables or MPU registers, but no longer works when arch-specific
memory management information is stored in thread-specific areas.
This is needed for: #13441 #13074 #15135
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
* z_NanoFatalErrorHandler() is now moved to common kernel code
and renamed z_fatal_error(). Arches dump arch-specific info
before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
and renamed k_sys_fatal_error_handler(). It is now much simpler;
the default policy is simply to lock interrupts and halt the system.
If an implementation of this function returns, then the currently
running thread is aborted.
* New arch-specific APIs introduced:
- z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Hitting -Wundef warnings in kernel_structs.h; switching to match other
instances where #ifdef is used instead of #if.
Signed-off-by: Nicholas Lowell <nlowell@lexmark.com>
System call arguments are indexed from 1 to 6, so arg0
is corrected to arg1 in two places. In addition, the
ARM function for system calls is now called z_arm_do_syscall,
so we update the inline comment in the __svc handler.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This mechanism had multiple problems:
- Missing parameter documentation strings.
- Multiple calls to k_thread_name_set() from user
mode would leak memory, since the copied string was never
freed
- k_thread_name_get() returns memory to user mode
with no guarantees on whether user mode can actually
read it; in the case where the string was in thread
resource pool memory (which happens when k_thread_name_set()
is called from user mode) it would never be readable.
- There was no test case coverage for these functions
from user mode.
To properly fix this, thread objects now have a buffer region
reserved specifically for the thread name. Setting the thread
name copies the string into the buffer. Getting the thread name
with k_thread_name_get() still returns a pointer, but the
system call has been removed. A new API k_thread_name_copy()
is introduced to copy the thread name into a destination buffer,
and a system call has been provided for that instead.
We now have full test case coverage for these APIs in both user
and supervisor mode.
Some of the code has been cleaned up to place system call
handler functions in proximity with their implementations.
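Typical usage of the reworked API might look like this (sketch):

    #include <kernel.h>

    void report_name(void)
    {
            char name[CONFIG_THREAD_MAX_NAME_LEN];

            k_thread_name_set(k_current_get(), "worker");

            if (k_thread_name_copy(k_current_get(), name, sizeof(name)) == 0) {
                    printk("running as %s\n", name);
            }
    }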
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
move misc/util.h to sys/util.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
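In practice this and the following moves only change the include path;
for example:

    /* old path (still works through the shim, possibly with a warning) */
    #include <misc/util.h>

    /* new path */
    #include <sys/util.h>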
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/rb.h to sys/rb.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/math_extras.h to sys/math_extras.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/dlist.h to sys/dlist.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move atomic.h to sys/atomic.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
We introduce a k_float_disable() system call, to allow threads to
disable floating point context preservation. The system call is
to be used in FP Sharing Registers mode (CONFIG_FP_SHARING=y).
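A thread that has finished its floating point work can release FP context
preservation like so (sketch):

    int ret = k_float_disable(k_current_get());

    if (ret != 0) {
            /* not supported in this configuration, or the call failed */
    }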
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This name collides with one in the bt subsystem, and wasn't named in
proper zephyrese anyway.
Fixes #16604
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Use the new math_extras functions instead of calling builtins directly.
Change a few local variables to size_t after checking that all uses of
the variable actually expect a size_t.
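For reference, the math_extras helpers are used roughly like this (a
sketch, not a specific call site from the patch; the header path shown
assumes the sys/ location used after the include reorganization):

    #include <kernel.h>
    #include <sys/math_extras.h>

    void *alloc_array(size_t count, size_t elem_size)
    {
            size_t total;

            if (size_mul_overflow(count, elem_size, &total)) {
                    return NULL;   /* count * elem_size would overflow size_t */
            }

            return k_malloc(total);
    }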
Signed-off-by: Jakob Olesen <jolesen@fb.com>
When some headers are included into a C++ source file, this kind of
compilation error is generated:
error: invalid conversion from 'void*'
to 'u32_t*' {aka 'unsigned int*'} [-fpermissive]
Signed-off-by: Benoit Leforestier <benoit.leforestier@gmail.com>
Update the name of the mem-domain API function that adds a partition
so that it complies with the 'z_' prefix convention. Correct
the function documentation.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The struct _caller_saved is not used. Most architectures automatically
push the registers onto the stack; in other architectures the
exception code does it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The struct _kernel_arch exists only because ARC's port needed it; in
all other ports this was defined as an empty struct. It turns out that
this struct is not required even for ARC anymore; it is legacy
code from nanokernel times.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Controlling expression of if and iteration statements must have a
boolean type.
MISRA-C rule 14.4
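The typical shape of such a change (illustrative, not a specific hunk
from the patch):

    /* before: controlling expression has an integer type */
    if (thread->base.thread_state & _THREAD_QUEUED) {
            /* ... */
    }

    /* after: explicit comparison yields an essentially boolean result */
    if ((thread->base.thread_state & _THREAD_QUEUED) != 0U) {
            /* ... */
    }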
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This function was returning an essentially boolean value. Just changing
the signature to return a bool.
MISRA-C rule 14.4
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This is throwing errors in static analysis, complaining that
comparing that a priority is both higher and lower is impossible. That
is wrong to my eyes (I swear I think it might be cueing off the names
of the functions, which invert "higher" and "lower" to match our
reversed priority numbers).
But frankly this was never a very readable macro to begin with.
Refactor to put the bounds into the term, so the static analyzer can
prove it locally, and add a build assertion to catch any errors (there
are none currently) where the low<->high priority range is invalid.
Long term, we should probably remove this macro, it doesn't provide
much value. But removing it in response to a static analysis failure
is... not very responsible as a development practice.
Fixes #14816
Fixes #14820
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Corrections in the documentation of arguments in
Z_SYSCALL_MEMORY, Z_SYSCALL_MEMORY_READ, and
Z_SYSCALL_MEMORY_WRITE macros.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This commit enhances the documentation of z_arch_buffer_validate
describing the cases where the validation is performed
successfully, as well as the cases where the result is
undefined.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Currently thread abort doesn't work if a thread is currently scheduled
on a different CPU, because we have no way of delivering an interrupt
to the other CPU to force the issue. This patch adds a simple
framework for an architecture to provide such an IPI, implements it
for x86_64, and uses it to implement a spin loop in abort for the case
where a thread is currently scheduled elsewhere.
On SMP architectures (xtensa) where no such IPI is implemented, we
fall back to waiting on an arbitrary interrupt to occur. This "works"
for typical code (and all current tests), but of course it cannot be
guaranteed on such an architecture that k_thread_abort() will return
in finite time (e.g. the other thread on the other CPU might have
taken a spinlock and entered an infinite loop, so it will never
receive an interrupt to terminate itself)!
On non-SMP architectures this patch changes no code paths at all.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Some init tasks may use some bss app memory areas and
expect them to be zeroed out. Do this much earlier
in the boot process, before any of the init tasks
run.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
On x86, if a supervisor thread belonging to a memory domain
adds a new partition to that domain, subsequent context switches
to another thread in the same domain, or dropping itself to user
mode, do not have the correct setup in the page tables.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We can just implement this as a macro and not needlessly
run afoul of MISRA-C rule 17.1.
Fixes: #10012
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Upon hard/soft irq or exception entry/exit, handle transitions
off or onto the trampoline stack, which is the only stack that
can be used on the kernel side when the shadow page table
is active. We swap page tables when on this stack.
Adjustments to page tables are now as follows:
- Any adjustments for stack memory access now are always done
to the user page tables
- Any adjustments for memory domains are now always done to
the user page tables
- With KPTI, resetting a page now clears the present bit
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These functions, for good design reasons, take a locking key to
atomically release along with the context switch. But there's still a
common pattern in code to do a switch unconditionally by passing
irq_lock() directly. On SMP that's a little hurtful as it spams the
global lock. Provide an _unlocked() variant for
_Swap/_reschedule/_pend_curr for simplicity and efficiency.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch. The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.
Just refactoring. No logic changes.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock. The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments. The former will be going away once
existing users (not that many! Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.
Obviously on uniprocessor setups, these produce identical code. But
SMP requires that the correct API be used to maintain the global lock.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
* Newlib now defines a special z_newlib_partition containing
all globals relevant to newlib. Most of these are in libc.a
with a heap tracking variable in newlib's hooks.
* Both C libraries now expose a k_mem_partition containing the
bounds of the malloc heap arena. Threads that want to use
libc malloc() will need to add this to their memory domain.
* z_newlib_get_heap_bounds has been removed, in favor of the
memory partition for the heap arena
* ztest now includes the C library partitions in its memory
domain.
* The mem_alloc test now runs in user mode to prove that this
all works for both C libraries.
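Roughly, a user thread that wants to call malloc() now does something
like this (the heap-arena partition symbol is the one described above;
its exact name, and the domain setup, are assumptions for the sketch):

    #include <kernel.h>

    static struct k_mem_domain app_domain;
    static struct k_mem_partition *app_parts[] = { &z_malloc_partition };

    void app_setup(void)
    {
            k_mem_domain_init(&app_domain, ARRAY_SIZE(app_parts), app_parts);
            k_mem_domain_add_thread(&app_domain, k_current_get());
    }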
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
GCC 6.2.0 is making frustratingly poor inlining decisions with some of
these routines, resulting in an awful lot of runtime calls for code
that is only ever expanded once or twice within the file.
Treat with targeted ALWAYS_INLINEs to force the issue. The
scheduler code is a hot path.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state. This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.
Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.
Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.
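The linked-state check mentioned above is just the standard dlist helper;
roughly (member name assumed):

    #include <kernel.h>

    /* a timeout is inactive iff its node is not linked into the queue */
    static bool timeout_is_inactive(struct _timeout *t)
    {
            return !sys_dnode_is_linked(&t->node);
    }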
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
The call to _arch_switch is a giant screaming sign inviting optimizer
bugs. The code that appears before is what happened long ago when we
were switched out, but the version that EXECUTED just now is actually
in a different thread. So the assignment to _current before the
switch actually assigned OUR thread (the "new_thread" of the old
context!) to _current.
But obviously the optimizer looks at that code and assumes that the
_current which got assigned to the thread we were switching to long
ago is still correct, and used it when retrieving the swap return
value.
Obviously the real bug here is that the _arch_switch() in question
lacked a memory clobber (and it's getting one).
But we can remove two lines, remove code from inside the interrupt
lock and make the implementation more robust by moving the read to
after the irq_unlock() (which generally also has a memory clobber).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In general driver system calls are implemented at a subsystem
layer. However, some drivers may have capabilities specific to
the hardware not covered by the subsystem API. Such drivers may
want to define their own system calls.
This macro makes it simple to validate in the driver-specific
system call handlers that not only does the untrusted device
pointer correspond to the expected subsystem, initialization
state, and caller permissions, but also that the device object
is an instance of a specific driver (and not just any driver in
that subsystem).
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The function that checks if an object is valid is essentially a boolean
function. Just changing its return type to reflect it.
MISRA-C rule 14.4
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The system must not set the clock expiry via a backdoor, as it may
result in unbounded time drift of all scheduled timeouts.
Fixes: #11502
Signed-off-by: Pawel Dunaj <pawel.dunaj@nordicsemi.no>
If we just had the kernel's implementation, we could
just move this to lib/, but possible arch-specific
implementations dictate that we just make this a
syscall.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
There was a struct and a variable called _kernel. This is error prone
and a MISRA-C violation. This change gives the struct a unique
identifier.
MISRA-C rule 5.8
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There is a struct and a macro called _ready_q; this is error
prone. Just removing it.
MISRA-C rule 5.4
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
In _pend_current_thread the argument key is always an unsigned
integer type, and this function forces it to become a signed
integer. This is dangerous behavior and can't be trusted to
work as expected.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
This API shouldn't take an int type; instead it should take
u32_t. This argument has to be similar to irq_lock() and
irq_unlock().
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
I was pretty careful, but these snuck in. Most of them are due to
overbroad string replacements in comments. The pull request is very
large, and I'm too lazy to find exactly where to back-merge all of
these.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version. The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed. Advantages:
* Properly spinlocked and SMP-aware. The earlier timer implementation
relied on only CPU 0 doing timeout work, and on an irq_lock() being
taken before entry (something that was violated in a few spots).
Now any CPU can wake up for an event (or all of them) and everything
works correctly.
* The *_thread_timeout() API is now expressible as a clean wrapping
  (just one-liners) around the lower-level interface based on function
pointer callbacks. As a result the timeout objects no longer need
to store backpointers to the thread and wait_q and have shrunk by
33%.
* MUCH smaller, to the tune of hundreds of lines of code removed.
* Future proof, in that all operations on the queue are now fronted by
just two entry points (_add_timeout() and z_clock_announce()) which
can easily be augmented with fancier data structures.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).
Move it to where it belongs. Also have it return ticks instead of ms
to conform to scheme in the rest of the timeout API. And rename it to
a more standard zephyr name.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add the callback parameter to add_timeout(), and remove the thread
argument. Now the "low level" timeout API can be expressed without
reference to threads.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that this is known to be an unused value, remove it from the API.
Note that this caught a few spots where we were passing values (a
non-NULL wait_q with a NULL thread handle) that were always being
ignored before.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The timeout_q.h scheme, where it declared the real functions but the
stubs for when there was no clock were in wait_q.h, was senselessly
weird. Put them in the same file.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Instead of checking every time we hit the low-level context switch
path to see if the new thread has a "partner" with which it needs to
share time, just run the slice timer always and reset it from the
scheduler at the points where it has already decided a switch needs to
happen. In TICKLESS_KERNEL situations, we pay the cost of extra timer
interrupts at ~10Hz or whatever, which is low (note also that this
kind of regular wakeup architecture is required on SMP anyway so the
scheduler can "notice" threads scheduled by other CPUs). Advantages:
1. Much simpler logic. Significantly smaller code. No variance or
dependence on tickless modes or timer driver (beyond setting a
simple timeout).
2. No arch-specific assembly integration with _Swap() needed
3. Better performance on many workloads, as the accounting now happens
at most once per timer interrupt (~5 Hz) and at true rescheduling
points, not on every unrelated context switch and interrupt return.
4. It's SMP-safe. The previous scheme kept the slice ticks as a
global variable, which was an unnoticed bug.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This got broken. Add some #ifery to handle the case. Not clean, will
clean up in a future pass once the API is final.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was no good reason to have these rather large functions in a
header. Put them into sys_clock.c for now, pending rework to the
system.
Now the API is clearly visible in a small header.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The tickless driver had a bunch of "hairy" APIs which forced the timer
drivers to do needless low-level accounting for the benefit of the
kernel, all of which then proceeded to implement them via cut and
paste. Specifically the "program_time" calls forced the driver to
expose to the kernel exactly when the next interrupt was due and how
much time had elapsed, in a parallel API to the existing "what time is
it" and "announce a tick" interrupts that carry the same information.
Remove these from the kernel, replacing them with synthesized logic
written in terms of the simpler APIs.
In some cases there will be a performance impact due to the use of the
64 bit uptime call, but that will go away soon.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing API had two almost identical functions: _set_time() and
_timer_idle_enter(). Both simply instruct the timer driver to set the
next timer interrupt expiration appropriately so that the call to
z_clock_announce() will be made at the requested number of ticks. On
most/all hardware, these should be implementable identically.
Unfortunately because they are specified differently, existing drivers
have implemented them in parallel.
Specify a new, unified, z_clock_set_timeout(). Document it clearly
for implementors. And provide a shim layer for legacy drivers that
will continue to use the old functions.
Note that this patch fixes an existing bug found by inspection: the
old call to _set_time() out of z_clock_announce() failed to test for
the "wait forever" case in the situation where clock_always_on is
true, meaning that a system that reached this point and then never set
another timeout would freeze its uptime clock incorrectly.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The builtin function __builtin_umul_overflow returns a boolean and
should not be checked as an integer.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
According to C99, the first 31 characters of an identifier must be unique.
Shorten the namespace of the generated objects to achieve this.
C99 - 5.2.4.1
MISRA-C rule 5.1
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Change APIs that essentially return a boolean expression - 0 for
false and 1 for true - to return a bool.
MISRA-C rule 14.4
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Added k_thread_name_set() and enable thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add ifdef guard to the z_reset_timeslice() to fix compilation
errors when CONFIG_TIMESLICING is disabled.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
MISRA C requires that every controlling expression of an if or while
statement have a boolean type.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Rename _DEVICE_STRUCT_SIZE to _DEVICE_STRUCT_SIZEOF. This causes it to
be picked up by the script 'gen_offset_header.py' and inserted into the
header file 'include/generated/offsets.h'.
Renaming from x_SIZE to x_SIZEOF will align its name with the other
symbols that denote a struct's size, like K_THREAD_SIZEOF.
Furthermore, it will allow the symbol to be accessed through a header
file define, instead of only as an extern symbol. This is more
flexible, and more aligned with the other symbols in offsets.
Finally, if we are able to move all of offsets.c's symbols into the
offsets.h header file we will be able to remove offsets.o from the link
and thereby simplify the linking process.
Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
Any word starting with an underscore followed by an uppercase letter or a
second underscore is a reserved word according to C99.
We have *many* violations in Zephyr's code; this commit is tackling
only the violations caused by header guards. It also takes the
opportunity to normalize them using the filename in uppercase and
replacing the dot with an underscore, e.g. file.h -> FILE_H.
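An illustrative before/after for a guard (the file name is just an
example):
/* before: reserved identifier */
#ifndef _KERNEL_STRUCTS__H_
#define _KERNEL_STRUCTS__H_
/* ... header contents ... */
#endif

/* after: derived from the file name */
#ifndef KERNEL_STRUCTS_H
#define KERNEL_STRUCTS_H
/* ... header contents ... */
#endif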
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The __swap function was returning -EAGAIN in some cases, though its return
value was declared as unsigned int.
This commit changes this function to return int since it can return a
negative value and its return value was already being propagated as int.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
When adding a new runnable thread in tickless mode, we need to detect
whether it will timeslice with the running thread and reset the timer,
otherwise it won't get any CPU time until the next interrupt fires at
some indeterminate time in the future.
This fixes the specific bug discussed in #7193, but the broader
problem of tickless and timeslicing interacting badly remains. The
code as it exists needs some rework to avoid all the #ifdef mess.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When adding a new runnable thread in tickless mode, we need to detect
whether it will timeslice with the running thread and reset the
timer, otherwise it won't get any CPU time until the next interrupt
fires at some indeterminate time in the future.
This fixes the specific bug discussed in #7193, but the broader
problem of tickless and timeslicing interacting badly remains. The
code as it exists needs some rework to avoid all the #ifdef mess.
Note that the patch also moves _ready_thread() from a ksched.h inline
to sched.c.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This fixes #8669, and is distressingly subtle for a one-line patch:
The list iteration code in _handle_expired_timeouts() would remove the
timeout from our (temporary -- the dlist header is on the stack of our
calling function) list of expired timeouts before invoking the
handler. But sys_dlist_remove() only fixes up the containing list
pointers, leaving garbage in the node. If the action of that handler
is to re-add the timeout (which is very common!) then that will then
try to remove it AGAIN from the same list.
Even then, the common case is that the expired list contains only one
item, so the result is a perfectly valid empty list that affects
nothing. But if you have more than one, you get a corrupt cycle in
the iteration list and things get weird.
As it happens, there's no value in trying to remove this timeout from
the temporary list at all. Just iterate over it naturally.
Really, this design is fragile: we shouldn't be reusing the list nodes
in struct _timeout for this purpose and should figure out some other
mechanism. But this fix should be good for now.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This patch provides support needed to get timing related
information from xtensa based SOC.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Bitwise operators should be used only with unsigned integer operands
because the result of bitwise operations on signed integers is
implementation-defined.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There exist two symbols that became equivalent when PR #9383 was
merged; _SYSCALL_LIMIT and K_SYSCALL_LIMIT. This patch deprecates the
redundant _SYSCALL_LIMIT symbol.
_SYSCALL_LIMIT was initially introduced because before PR #9383 was
merged K_SYSCALL_LIMIT was an enum, which couldn't be included into
assembly files. PR #9383 converted it into a define, which can be
included into assembly files, making _SYSCALL_LIMIT redundant.
Likewise for _SYSCALL_BAD.
Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
We now have functions for handling all the details of copying
data to/from user mode, including C strings and copying data
into resource pool allocations.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Zephyr 1.12 removed the old scheduler and replaced it with the choice
of a "dumb" list or a balanced tree. But the old multi-queue
algorithm is still useful in the space between these two (applications
with large-ish numbers of runnable threads, but that don't need fancy
features like EDF or SMP affinity). So add it as a
CONFIG_SCHED_MULTIQ option.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Make these "choice" items instead of a single boolean that implies the
element unset.
Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (it's constant factor overhead is
bigger than a list's!)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state". It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.
Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll. The disadvantage is the 4
bytes of thread space needed. Advantages:
+ Cleaner API, it's now internal to poll instead of being globally
visible.
+ The thread_state bit space is just one byte, and was almost full
already.
+ Smaller code to write/test a full word and not a bitfield
+ Words are atomic, so there is no need for one of the irq lock/unlock pairs.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.
This is problematic:
- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.
Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Recent changes post-scheduler-rewrite broke scheduling on SMP:
The "preempt_ok" feature added to isolate preemption points wasn't
honored in SMP mode. Fix this by adding a "swap_ok" field to the CPU
record (not the thread) which is set at the same time out of
update_cache().
The "queued" flag wasn't being maintained correctly when swapping away
from _current (it was added back to the queue, but the flag wasn't
set).
Abstract out a "should_preempt()" predicate so SMP and uniprocessor
paths share the same logic, which is distressingly subtle.
There were two places where _Swap() was predicated on
_get_next_ready_thread() != _current. That's no longer a benign
optimization in SMP, where the former function REMOVES the next thread
from the queue. Just call _Swap() directly in SMP, which has a
unified C implementation that does this test already. Don't change
other architectures in case it exposes bugs with _Swap() switching
back to the same thread (it should work, I just don't want to break
anything).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Very simple implementation of deadline scheduling. Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
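Roughly, the comparison only breaks ties between equal priorities (the
field name below is illustrative):
static bool deadline_before(struct k_thread *a, struct k_thread *b)
{
        /* signed delta handles wraparound of the cycle counter */
        return (s32_t)(a->base.prio_deadline - b->base.prio_deadline) < 0;
}
On the application side the deadline is set as a delta from "now", e.g.
k_thread_deadline_set(thread, 1000) for a deadline 1000 cycles out.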
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one. Behavior as to thread
selection does not change. New features:
+ Unifies SMP and uniprocessing selection code (with the sole
exception of the "cache" trick not being possible in SMP).
+ The old static multi-queue implementation is gone and has been
replaced with a build-time choice of either a "dumb" list
implementation (faster and significantly smaller for apps with only
a few threads) or a balanced tree queue which scales well to
arbitrary numbers of threads and priority levels. This is
controlled via the CONFIG_SCHED_DUMB kconfig variable.
+ The balanced tree implementation is usable symmetrically for the
wait_q abstraction, fixing a scalability glitch Zephyr had when many
threads were waiting on a single object. This can be selected via
CONFIG_WAITQ_FAST.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The SMP testing missed the case where _Swap() decides to return back
into the _current. Obviously there is no valid switch handle for the
running thread into which we can restore, and everything blows up.
(What happened is that the new scheduler code opened up a spot where
k_thread_priority_set() does a _reschedule() unconditionally and
doesn't check to see whether or not it's needed like the old code).
But that isn't incorrect! It's entirely possible that _Swap() may
find that no thread is runnable except _current (due, for example, to
another CPU racing the other thread you expected off to sleep or
something). Don't blow up, check and return a noop.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs. Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages. Now replacement of wait_q with a different
data structure is much cleaner.
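The refactor amounts to something like the following simplified sketch:
typedef struct {
        sys_dlist_t waitq;
} _wait_q_t;

#define _WAIT_Q_INIT(wait_q)  { SYS_DLIST_STATIC_INIT(&(wait_q)->waitq) }

static inline struct k_thread *_waitq_head(_wait_q_t *w)
{
        /* assumes the wait node is the first member of k_thread,
         * as in the dlist-based implementation
         */
        return (struct k_thread *)sys_dlist_peek_head(&w->waitq);
}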
Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro. While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Refactoring. Mempool wants to unpend all threads at once. It's
cleaner to do this in the scheduler instead of the IPC code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The various macros to do checks in system call handlers would all
implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:
* System call handlers that acquire resources in the handler
have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
to the caller instead of just killing the calling thread,
even though the base API doesn't do these checks.
These macros now all return a value, if nonzero is returned
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.
At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.
The macros now use the Z_ notation for private APIs.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.
Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.
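A usage sketch (the pool geometry and thread object are placeholders):
K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

void app_setup(struct k_thread *app_thread)
{
        /* kernel allocations made on behalf of this thread now come
         * from app_pool rather than the system heap
         */
        k_thread_resource_pool_assign(app_thread, &app_pool);
}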
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.
MPU devices that don't have these restrictions allocate
the heap as normal.
In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.
Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.
On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.
Fixes: #6814
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This was wrong in two ways, one subtle and one awful.
The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it). So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.
And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock. Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do. The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention. That this didn't break more things is amazing to me.
The rework is actually simpler than the original, thankfully. Though
there are some further subtleties:
* The lock state implied by irq_lock() allows the lock to be
implicitly released on context switch (i.e. you can _Swap() with the
lock held at a recursion level higher than 1, which needs to allow
other processes to run). So return paths into threads from _Swap()
and interrupt/exception exit need to check and restore the global
lock state, spinning as needed.
* The idle loop design specifies a k_cpu_idle() function that is on
common architectures expected to enable interrupts (for obvious
reasons), but there is no place to put non-arch code to wire it into
the global lock accounting. So on SMP, even CPU0 needs to use the
"dumb" spinning idle loop.
Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.
Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.
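The checks look roughly like this (a simplified illustration using a
UART operation, not the actual handler code):
static int handle_uart_err_check(struct device *dev)
{
        const struct uart_driver_api *api = dev->driver_api;

        /* optional operation: fail gracefully instead of calling NULL */
        if (api->err_check == NULL) {
                return -ENOTSUP;
        }

        return api->err_check(dev);
}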
Fixes #6907.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
There was a ton of junk in this header. Pare it down to just the
stuff actually used by code outside of sched.c, move the needed
internal stuff into sched.c itself, and drop everything else.
Note that (other than the tiny inlines that remain here in the header)
the scheduler interface exposed to the rest of the system is now
composed of just 12 functions.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The POSIX layer had a simple ready_one_thread() utility. Move this to
the scheduler API (with a prepended underscore -- it's an internal
API) so that it can be synchronized along with the rest of the
scheduler.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons. The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.
So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout(). (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.
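In effect the default helper now folds both steps together, roughly:
void _unpend_thread(struct k_thread *thread)
{
        _unpend_thread_no_timeout(thread);
        _abort_thread_timeout(thread);
}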
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps. Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
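The combined helper looks roughly like this sketch:
/* pend _current on wait_q, then swap away in one step */
static inline int _pend_current_thread(int key, _wait_q_t *wait_q,
                                       s32_t timeout)
{
        _pend_thread(_current, wait_q, timeout);
        return _Swap(key);
}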
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might effect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
A priority value cannot be simultaneously higher than the maximum
possible value and smaller than the minimum value. Rewrite the
_VALID_PRIO() macro as a function so that if either of these
invariants is violated, the priority is considered invalid.
Coverity-CID: 182584
Coverity-CID: 182585
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
The result of left shifting a bit into the sign-bit is undefined
behavior. This makes the offending shift operation unsigned.
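For example:
/* before (undefined behavior): shifting into the sign bit of an int
 *     reg = 1 << 31;
 * after (well-defined): shift an unsigned operand
 */
u32_t reg = 1U << 31;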
Signed-off-by: Kristian Klomsten Skordal <kristian.skordal@nordicsemi.no>
The scheduler exposed two APIs to do the same thing:
_add_thread_to_ready_q() was a low level primitive that in most cases
was wrapped by _ready_thread(), which also (1) checks that the thread
_is_ready() or exits, (2) flags the thread as "started" to handle the
case of a thread running for the first time out of a waitq timeout,
and (3) signals a logger event.
As it turns out, all existing usage was already checking case #1.
Case #2 can be better handled in the timeout resume path instead of on
every call. And case #3 was probably wrong to have been skipping
anyway (there were paths that could make a thread runnable without
logging).
Now _add_thread_to_ready_q() is an internal scheduler API, as it
probably always should have been.
This also moves some asserts from the inline _ready_thread() wrapper
to the underlying true function for code size reasons, otherwise the
extra use of the inline added by this patch blows past code size
limits on Quark D2000.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The xtensa asm2 layer had a function to select the next switch handle
to return into following an exception. There is no arch-specific code
there, it's just scheduler logic. Move it to the scheduler where it
belongs.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In SMP, the system timer is used for timeslicing on auxiliary CPUs,
but the base system timekeeping via _nano_sys_clock_tick_announce() is
still done on CPU0 only (because the framework isn't prepared for
asynchronous notification yet). Skip processing on CPU1+.
Also, due to a hardware interaction* that is difficult to work around,
timer initialization on the auxiliary CPUs is done at the very end of
the CPU bringup, just before the swap into the scheduler. A
smp_timer_init() API has been added for this purpose.
* On ESP-32, enabling the timer seems to result in a near-synchronous
interrupt being delivered despite my best attempts to keep it
masked, then blowing things up because the CPU record isn't set up
to handle it yet.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that all the pieces are in place, enable SMP for real:
Initialize the CPU records, launch the CPUs at the end of kernel
initialization, have them wait for a flag to release them into the
scheduler, then enter into the runnable threads via _Swap().
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The scheduler needs a few tweaks to work in SMP mode:
1. The "cache" field just doesn't work. With more than one CPU,
caching the highest priority thread isn't useful as you may need N
of them at any given time before another thread is returned to the
scheduler. You could recalculate it at every change, but that
provides no performance benefit. Remove.
2. The "bitmask" designed to prevent the need to individually check
priorities is likewise dropped. This could work, but in fact on
our only current SMP system and with current K_NUM_PRIORITIES
values it provides no real benefit.
3. The individual threads now have a "current cpu" and "active" flag
so that the choice of the next thread to run can correctly skip
threads that are active on other CPUs.
The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.
Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API. This should be a very
early candidate for lock granularity attention!
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these. Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.
When not in SMP mode, the first CPU's fields are unioned with the
first _cpu_t record. This permits compatibility with legacy
assembly on other platforms. Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.
Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.
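The per-CPU record looks roughly like this simplified sketch:
struct _cpu {
        u32_t nested;              /* nested interrupt count */
        char *irq_stack;           /* interrupt stack */
        struct k_thread *current;  /* thread scheduled on this CPU */
};

typedef struct _cpu _cpu_t;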
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.
Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.
Break out the _Swap definition (only) into a separate header and use
that instead. Cleaner. Seems not to have any more hidden gotchas.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When using _arch_switch() context switching, the thread return value
is a generic hook and not provided by the architecture.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing __swap() mechanism is too high level for some
applications because of its scheduler-awareness. This introduces a
new _arch_switch() mechanism, which is a simpler primitive that looks
like:
void _arch_switch(void *handle, void **old_handle_out);
The new thread handle (typically just a stack pointer) is specified
explicitly instead of being picked up from the scheduler by
per-architecture code, and on return the "old" thread handle that got
switched out is returned through the pointer.
The new primitive (currently available only on xtensa) is selected
when CONFIG_USE_SWITCH is "y". A new C _Swap() implementation based
on this primitive is then added which operates compatibly.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
K_NUM_PRIORITIES and K_NUM_PRIO_BITMAPS were defined in
nano_internal.h, but used in only a handful of places. Move to
kernel_structs.h (somewhat higher up in the hierarchy) to help with
include file cycle-breaking. Arguably they are a better fit there
anyway.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Rename the nano_internal.h to kernel_internal.h and modify the
header file name accordingly wherever it is used.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Add architecture-specific code for the memory domain
configuration. This is needed to support the memory domain API
k_mem_domain_add_thread().
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Having two implementations of the same thing is bad,
especially when one can just call the other inline version.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Additional arch specific interfaces to handle memory domain
destroy and single partition removal.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:
* Threads can be granted permissions on kernel objects before the
thread is initialized. Previously, it was necessary to call
k_thread_create() with a K_FOREVER delay, assign permissions, then
start the thread. Permissions are still completely cleared when
a thread exits.
* No need for runtime logic to manage thread IDs
* Build error if CONFIG_MAX_THREAD_BYTES is set too low
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Additional arch specific interfaces to handle memory domain
destroy and single partition removal.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Currently this is defined as a k_thread_stack_t pointer.
However this isn't correct; stacks are defined as arrays. Extern
references to k_thread_stack_t don't work properly, as the compiler
treats the symbol as a pointer to the stack array and not the array itself.
Declaring as an unsized array of k_thread_stack_t doesn't work
well either. The least amount of confusion is to leave out the
pointer/array status completely, use pointers for function prototypes,
and define K_THREAD_STACK_EXTERN() to properly create an extern
reference.
The definitions for all functions and struct that use
k_thread_stack_t need to be updated, but code that uses them should
be unchanged.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We also need macros to assert that an object must be in an
uninitialized state. This will be used for validating thread
and stack objects to k_thread_create(), which must not be already
in use.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.
Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.
Because of these runtime updates, setting the permission
bitmap for an object to all ones for a "public" object doesn't
work properly any more; a flag is now set for this instead.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will allow these thread objects to be re-used.
_mark_thread_as_dead() removed, it was only being called in one
place.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
API to assist with re-using objects, such as terminated threads or
kernel objects returned to a pool.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Use some preprocessor trickery to automatically deduce the number of
arguments for the various _SYSCALL_HANDLERn() macros. Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.
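The underlying idiom is the usual variadic argument-counting trick,
something like this (the macro names here are illustrative):
#define _SYSCALL_NARGS(...) _SYSCALL_NARGS_GET(__VA_ARGS__, 6, 5, 4, 3, 2, 1)
#define _SYSCALL_NARGS_GET(_1, _2, _3, _4, _5, _6, N, ...) N

/* _SYSCALL_NARGS(a, b, c) expands to 3 */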
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Does the opposite of k_object_access_grant(); the provided thread will
lose access to that kernel object.
If invoked from userspace, the caller must have sufficient access
to that object and permission on the thread whose access is being revoked.
Fix documentation for k_object_access_grant() API to reflect that
permission on the thread parameter is needed as well.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
By default, threads are created only having access to their own thread
object and nothing else. This new flag to k_thread_create() gives the
thread access to all objects that the parent had at the time it was
created, with the exception of the parent thread itself.
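Usage is an extra option flag at creation time, e.g. (the thread
object, stack, entry point and priority are placeholders):
static void spawn_child(void)
{
        k_thread_create(&child_thread, child_stack,
                        K_THREAD_STACK_SIZEOF(child_stack),
                        child_entry, NULL, NULL, NULL,
                        CHILD_PRIORITY, K_INHERIT_PERMS | K_USER,
                        K_NO_WAIT);
}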
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.
- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
other than object verification
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- Dumping of error messages split from _k_object_validate(), to avoid spam
in test cases that are expected to produce failure results.
- _k_object_find() prototype moved to syscall_handler.h
- Clean up k_object_access() implementation to avoid double object
lookup and use single validation function
- Added comments, minor whitespace changes
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Computing the total size of the array needs to handle the case where
the product overflows a 32-bit unsigned integer.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Instead of boolean arguments to indicate memory read/write
permissions, or init/non-init APIs, new macros are introduced
which bake the semantics directly into the name of the macro.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Expecting stringified expressions to be completely comprehensible to end
users is wishful thinking; we really need to express what a failed
system call verification step means in human terms in most cases.
Memory buffer and kernel object checks now are implemented in terms of
_SYSCALL_VERIFY_MSG.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This API only gets used inside system call handlers and a specific test
case dedicated to it. Move definition to the private kernel header along
with the rest of the defines for system call handlers.
A non-userspace inline variant of this function is unnecessary and has
been deleted.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add the following application-facing memory domain APIs:
k_mem_domain_init() - to initialize a memory domain
k_mem_domain_destroy() - to destroy a memory domain
k_mem_domain_add_partition() - to add a partition into a domain
k_mem_domain_remove_partition() - to remove a partition from a domain
k_mem_domain_add_thread() - to add a thread into a domain
k_mem_domain_remove_thread() - to remove a thread from a domain
A memory domain would contain some number of memory partitions.
A memory partition is a memory region (might be RAM, peripheral
registers, flash...) with specific attributes (access permission,
e.g. privileged read/write, unprivileged read-only, execute never...).
Memory partitions would be defined by set of MPU regions or MMU tables
underneath.
A thread could only belong to a single memory domain at any point in time,
but a memory domain could contain multiple threads.
Threads in the same memory domain would have the same access permission
to the memory partitions belonging to the memory domain.
The memory domain APIs are used by unprivileged threads to share data
with the threads in the same memory domain and to protect sensitive data
from threads outside their domain. It is not only for improving security
but also useful for debugging (unexpected access would cause an exception).
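Typical application usage might look like the following sketch (the
partition attribute macro is architecture-dependent, and the buffer and
thread names are placeholders):
K_MEM_PARTITION_DEFINE(app_part, app_buf, sizeof(app_buf),
                       K_MEM_PARTITION_P_RW_U_RW);

struct k_mem_partition *app_parts[] = { &app_part };
struct k_mem_domain app_domain;

void app_domain_setup(k_tid_t app_tid)
{
        k_mem_domain_init(&app_domain, ARRAY_SIZE(app_parts), app_parts);
        k_mem_domain_add_thread(&app_domain, app_tid);
}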
Jira: ZEP-2281
Signed-off-by: Chunlin Han <chunlin.han@linaro.org>
Everything gets passed to handlers as u32_t; make it simpler to check
something that is known to be a pointer, like we already do with
_SYSCALL_IS_OBJ().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
To define a system call, it's now sufficient to simply tag the inline
prototype with "__syscall" or "__syscall_inline" and include a special
generated header at the end of the header file.
The system call dispatch table and enumeration of system call IDs is now
automatically generated.
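For example, in a public header (the generated include path shown is
illustrative):
__syscall int k_sem_take(struct k_sem *sem, s32_t timeout);

#include <syscalls/kernel.h>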
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- syscall.h now contains those APIs needed to support invoking calls
from user code. Some stuff moved out of main kernel.h.
- syscall_handler.h now contains directives useful for implementing
system call handler functions. This header is not pulled in by
kernel.h and is intended to be used by C files implementing kernel
system calls and driver subsystem APIs.
- syscall_list.h now contains the #defines for system call IDs. This
list is expected to grow quite large so it is put in its own header.
This is now an enumerated type instead of defines to make things
easier as we introduce system calls over the next few months. In the
fullness of time when we desire to have a fixed userspace/kernel ABI,
this can always be converted to defines.
Some new code added:
- _SYSCALL_MEMORY() macro added to check memory regions passed up from
userspace in handler functions
- _syscall_invoke{7...10}() inline functions declare for invoking system
calls with more than 6 arguments. 10 was chosen as the limit as that
corresponds to the largest arg list we currently have
which is for k_thread_create()
Other changes
- auto-generated K_SYSCALL_DECLARE* macros documented
- _k_syscall_table in userspace.c is not a placeholder. There's no
strong need to generate it and doing so would require the introduction
of a third build phase.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
* Instead of a common system call entry function, we instead create a
table mapping system call ids to handler skeleton functions which are
invoked directly by the architecture code which receives the system
call.
* system call handler prototype specified. All but the most trivial
system calls will implement one of these. They validate all the
arguments, including verifying kernel/device object pointers, ensuring
that the calling thread has appropriate access to any memory buffers
passed in, and performing other parameter checks that the base system
call implementation does not check, or only checks with __ASSERT().
It's only possible to install a system call implementation directly
inside this table if the implementation has a return value and requires
no validation of any of its arguments.
A sample handler implementation for k_mutex_unlock() might look like:
u32_t _syscall_k_mutex_unlock(u32_t mutex_arg, u32_t arg2, u32_t arg3,
                              u32_t arg4, u32_t arg5, void *ssf)
{
        struct k_mutex *mutex = (struct k_mutex *)mutex_arg;

        _SYSCALL_ARG1;

        _SYSCALL_IS_OBJ(mutex, K_OBJ_MUTEX, 0, ssf);
        _SYSCALL_VERIFY(mutex->lock_count > 0, ssf);
        _SYSCALL_VERIFY(mutex->owner == _current, ssf);

        k_mutex_unlock(mutex);

        return 0;
}
* the x86 port modified to work with the system call table instead of
calling a common handler function. An issue where registers being
changed could confuse the compiler has been fixed; all registers, even
ones used for parameters, must be preserved across the system call.
* a new arch API for producing a kernel oops when validating system call
arguments added. The debug information reported will be from the system
call site and not inside the handler function.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Based on work by Chunlin Han <chunlin.han@linaro.org>.
This defines the interfaces that architectures will need to implement in
order to support memory domains in either MMU or MPU hardware.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Now creating a thread will assign it a unique, monotonically increasing
id which is used to reference the permission bitfield in the kernel
object metadata.
Stub functions in userspace.c now implemented.
_new_thread is now wrapped in a common function with pre- and post-
architecture thread initialization tasks.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will be used by system call handlers to ensure that any memory
regions passed in from userspace are actually accessible by the calling
thread.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In various places, either a private _thread_entry_t or the full prototype
was being used. Be consistent and use the same typedef everywhere.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_timer_start(timer, duration, period) is the API used to
start a timer. Currently the duration parameter accepts
only positive numbers.
But a user may need to do some periodic activity
ASAP and start the timer with a 0 value, so this patch
allows 0 as the minimum value of duration.
With this patch, when the duration value is set to 0 the
timer expiration handler is called immediately instead of
submitting it to the timeout queue.
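For example (the timer object and period are placeholders):
void start_sampling(void)
{
        /* run the expiry handler right away, then every 100 ms */
        k_timer_start(&my_timer, 0, 100);
}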
Jira: ZEP-2497
Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
SYS_DLIST_FOR_EACH_CONTAINER is preferable over using
SYS_DLIST_FOR_EACH_NODE as that avoids casting directly, which assumes the
node field is always at the beginning.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Historically, stacks were just character buffers and could be treated
as such if the user wanted to look inside the stack data, and also
declared as an array of the desired stack size.
This is no longer the case. Certain architectures will create a memory
region much larger to account for MPU/MMU guard pages. Unfortunately,
the kernel interfaces treat both the declared stack, and the valid
stack buffer within it as the same char * data type, even though these
absolutely cannot be used interchangeably.
We introduce an opaque k_thread_stack_t which gets instantiated by
K_THREAD_STACK_DECLARE(), this is no longer treated by the compiler
as a character pointer, even though it really is.
To access the real stack buffer within, the result of
K_THREAD_STACK_BUFFER() can be used, which will return a char * type.
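In practice, using the macros named in this commit (the size is
illustrative):
K_THREAD_STACK_DECLARE(my_stack, 1024);

/* the only sanctioned way to get at the raw stack buffer */
char *buf = K_THREAD_STACK_BUFFER(my_stack);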
This should catch a bunch of programming mistakes at build time:
- Declaring a character array outside of K_THREAD_STACK_DECLARE() and
passing it to K_THREAD_CREATE
- Directly examining the stack created by K_THREAD_STACK_DECLARE()
which is not actually the memory desired and may trigger a CPU
exception
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
One of the stack sentinel policies was to check the sentinel
any time a cooperative context switch is done (i.e, _Swap is
called).
This was done by adding a hook to _check_stack_sentinel in
every arch's __swap function.
This way is cleaner as we just have the hook in one inline
function rather than implemented in several different assembly
dialects.
The check upon interrupt is now made unconditionally rather
than checking if we are calling __swap, since the check now
is only called on cooperative _Swap(). The interrupt is always
serviced first.
Issue: ZEP-2244
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The kernel tracks time slice usage with the _time_slice_elapsed global.
Every time the timer interrupt goes off and the timer driver calls
_nano_sys_clock_tick_announce() with the elapsed time, this is added to
_time_slice_elapsed. If it exceeds the total time slice, the thread is
moved to the back of the queue for that priority level and
_time_slice_elapsed is reset to zero.
In a non-tickless kernel, this is the only time _time_slice_elapsed is
reset. If a thread uses up a partial time slice, and then cooperatively
switches to another thread, the next thread will inherit the remaining
time slice, causing it not to be able to run as long as it ought to.
There does exist code to properly reset the elapsed count, but it was
only compiled in a tickless kernel. Now it is built any time
CONFIG_TIMESLICING is enabled.
Issue: ZEP-2107
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>