Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. A naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions in strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
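As a sketch of the intended usage (types and names illustrative;
only the __incoherent attribute itself is what this change adds):

  /* Shared kernel data stays in the default, coherent data
   * sections and needs no extra cache management.
   */
  static struct k_spinlock shared_lock;

  /* CPU-local scratch memory is never touched by another CPU, so
   * it can opt back into the cache with the new attribute.
   */
  static __incoherent uint8_t percpu_scratch[256];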
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Several utility macros must be given the name of a macro to apply to
every element in a sequence to transform it. When no transformation
is necessary, a macro still needs to be provided. That macro is
IDENTITY.
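A minimal sketch of the intent, assuming a FOR_EACH-style utility
macro that applies a transform to each element:

  #define DOUBLE(x) ((x) * 2)

  /* Transform each element... */
  FOR_EACH(DOUBLE, (,), 1, 2, 3)   /* -> ((1) * 2) , ((2) * 2) , ((3) * 2) */

  /* ...or pass elements through unchanged. */
  FOR_EACH(IDENTITY, (,), 1, 2, 3) /* -> 1 , 2 , 3 */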
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Remove SYS_RING_BUF_DECLARE_POW2 and SYS_RING_BUF_DECLARE_SIZE as they
have been marked deprecated for at least 2 releases.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Both operands of an operator in the arithmetic conversions
performed shall have the same essential type category.
The changes convert signed integer constants to unsigned integer
constants where required.
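A typical change of this kind looks as follows (illustrative only):

  uint32_t flags = 0U;

  /* Before: the signed constant 1 mixes essential type categories. */
  flags |= (1 << 3);

  /* After: both operands are essentially unsigned. */
  flags |= (1U << 3);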
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
This updates Kconfig options in the Doxygen documentation to use the new
@option ALIAS. There are three categories of fixes:
* Use of `:option:` inside Doxygen headers, which is not valid (this is
rST syntax!).
* Kconfig options that were just written as plain text, so no
  references were generated for them.
* Use of `@rst` blocks where the only reason for using them was to have
Kconfig options resolved.
Signed-off-by: Fabio Utzig <fabio.utzig@nordicsemi.no>
This code had one purpose only: feeding timing information into a
test. It was not used by anything else. The custom trace points were
unfortunately not accurate, and this test was delivering information
that conflicted with other tests we have, due to the placement of
such trace points in the architecture and kernel code.
For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.
Furthermore, much of the assembly code used had issues.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
For now, memory mapping will be a private kernel API and is not
intended to be application-facing.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.
We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.
CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, with a fixed size
specified by CONFIG_KERNEL_VM_SIZE.
Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
It implements the GDB remote protocol to talk with a host GDB during
the debug session. The implementation is divided into three layers:
1 - The top layer, responsible for the GDB remote protocol.
2 - An architecture-specific layer responsible for reading/writing
registers, setting breakpoints, handling exceptions, ...
3 - A transport layer used to communicate with the host.
The communication with GDB on the host is synchronous: the system
stops execution, waits for instructions, and resumes execution after
a "continue" or "step" command. The protocol has one exception: the
host can send a packet to cause an interruption, usually triggered
by Ctrl-C. This implementation ignores that packet, though.
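For reference, GDB remote protocol packets have the form
$<payload>#<checksum>, where the checksum is the modulo-256 sum of
the payload bytes. A minimal sketch of that calculation (not the
actual implementation in this patch):

  #include <stdint.h>
  #include <stddef.h>

  /* GDB RSP checksum: sum of the payload bytes, modulo 256. */
  static uint8_t gdb_checksum(const uint8_t *payload, size_t len)
  {
          uint8_t sum = 0;

          for (size_t i = 0; i < len; i++) {
                  sum += payload[i];
          }

          return sum;
  }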
This initial work supports only x86, using a UART as the backend.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
All ISRs are meant to take a const struct device pointer, but to
simplify the change, let's just make the parameter const; that
should be fine.
Fixes #27399
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
The following variable names matched tag names (Rule 5.7 violations)
and have been renamed:
In kernel.h, event_type matched a tag name in lib/os/onoff.c. Added
a _ prefix to event_type and also to the macro argument names.
In userspace.c, *dyn_obj matched the tag name dyn_obj in the file
itself. Changed it to dyn.
In device.h, device_mmio.h, init.h and init.c, changed *device to
dev, except for one case in init.h.
Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
Saves us a few bytes of program text on arches that don't need
these implemented, currently all uniprocessor MPU-based systems.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Private macros/function for converting a virtual address
to a physical address. Only works for a specific range
of virtual addresses (the permanent SRAM mappings).
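Since the permanent SRAM mappings sit at a fixed offset from their
physical addresses, the translation reduces to simple arithmetic.
A hypothetical sketch (symbol names are illustrative, not the
actual macros added here):

  /* Valid only for addresses inside the permanent SRAM mapping. */
  #define SRAM_VIRT_TO_PHYS(virt) \
          ((uintptr_t)(virt) - KERNEL_VIRT_BASE + SRAM_PHYS_BASE)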
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is an extension to the existing FOR_EACH family of macros that
has a couple of twists that make it more useful in certain contexts,
as explained in the docstring.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
Some systems may need to associate arch-specific data with
a memory domain. Add a Kconfig and `arch` field for this,
and a new arch API to initialize it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add a function pointer to the vtable and use it directly instead of
routing via an ioctl() call. This is done because we are trying to
get rid of ioctl() calls in the system.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Do not route close() calls via ioctl(), as that is error-prone
and quite pointless. Instead, create a callback for close() in
fdtable and use it directly.
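A rough sketch of the idea (illustrative shape only, not the exact
fdtable definition): the per-fd operation table gains a dedicated
close hook so close() no longer has to go through ioctl():

  struct fd_ops {
          ssize_t (*read)(void *obj, void *buf, size_t count);
          ssize_t (*write)(void *obj, const void *buf, size_t count);
          int (*close)(void *obj);   /* new direct callback */
          int (*ioctl)(void *obj, unsigned int request, ...);
  };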
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The on-off manager infrastructure is designed to support robust
asynchronous transitions between binary states where multiple
clients may initiate a transition from any context. The actual
transition is
performed using a manager that tracks the current state and pending
operations. Requests are initiated by passing a reference to an
onoff_client object that holds client state including the notification
mechanism.
This API may be used in subsystems where the transitions for a
particular driver are always synchronous and isr-ok, e.g. setting a
SoC-controlled GPIO. In this situation the full on-off manager
infrastructure is wasteful. All we need is a record of the service
state: off, active count, or error.
Add a data structure and an API that can be used to replace the onoff
manager functionality in a situation where all transitions are isr-ok
and synchronous while retaining compatible behavior from the client
perspective.
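Conceptually, such a replacement only needs to track the service
state under a lock. A rough sketch (not the actual API added here):

  /* Illustrative state record for a purely synchronous, isr-ok
   * service: a negative count records an error, zero means off,
   * and positive values are the number of active requests.
   */
  struct sync_service {
          struct k_spinlock lock;
          int32_t count;
  };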
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
There is nothing wrong with instance numbers and they are
recommended for use whenever possible, but this is an API
design problem because it's not always possible to get nodes
by instance number; in some cases, drivers need to get node
identifiers from node labels, for example.
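For example (the particular node and compatible are hypothetical),
a node identifier can come either from an instance number or from a
node label; the changed APIs take the resulting node ID either way:

  /* By instance number: only works when instances are enumerable. */
  #define MY_UART_NODE_A DT_INST(0, vnd_uart)

  /* By node label: usable even when instance numbers are not. */
  #define MY_UART_NODE_B DT_NODELABEL(uart0)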
Change these APIs (which are not yet in any Zephyr release)
to take node IDs instead of instance IDs.
Fixes: #26984
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
true and false in a C environment are macros that are defined to 1 and
0 respectively. In C++ they are keywords, and as such cause failures
when macro tests require they expand to a 1 or 0 token. Use the
actual token values rather than the macro names when defining the
macros.
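A minimal illustration of the problem (the macro name is
hypothetical):

  /* Problematic: in C++, "true" is a keyword rather than a macro
   * expanding to 1, so 1/0 token tests (e.g. IS_ENABLED-style
   * token pasting) no longer see a literal 1.
   */
  #define FEATURE_X_BAD  true

  /* Preferred: use the literal token value directly. */
  #define FEATURE_X_GOOD 1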
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Numbers in C macros are treated as signed integers. This causes
issues when using the macro KB(x). If the result is larger than
(2^31 - 1), it is treated as a negative number downstream.
For example, the RAM size of up_squared is 2GB. The result of
KB(CONFIG_SRAM_SIZE) should be 2147483648, but due to being
treated as signed, the result is instead -2147483648. Any
calculation using this would be incorrect. Fix it by
casting x in KB(x) to size_t first.
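A sketch of the described fix (the exact definition may differ
slightly):

  /* Before: ((x) << 10) is evaluated in signed int arithmetic, so
   * KB(CONFIG_SRAM_SIZE) overflows to a negative value for 2GB.
   * After: cast x to size_t first so the shift is done unsigned.
   */
  #define KB(x) (((size_t)(x)) << 10)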
Fixes #27164
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.
Two new arch defines are introduced:
- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN
New public declaration macros:
- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF
If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.
Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
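Typical usage, with the name, size and priority chosen purely for
illustration:

  /* Stack for a thread that runs only in supervisor mode. */
  K_KERNEL_STACK_DEFINE(worker_stack, 1024);

  static struct k_thread worker_thread;

  extern void worker_entry(void *p1, void *p2, void *p3);

  void start_worker(void)
  {
          k_thread_create(&worker_thread, worker_stack,
                          K_KERNEL_STACK_SIZEOF(worker_stack),
                          worker_entry, NULL, NULL, NULL,
                          K_PRIO_PREEMPT(1), 0, K_NO_WAIT);
  }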
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations;
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.
thread->stack_info is now set before arch_new_thread()
is invoked, and z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.
thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.
CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.
thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.
Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.
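Roughly, the reworked arch hook now has the following shape (the
parameter list here is approximate, not authoritative):

  /* The core kernel hands the arch layer a ready-to-use initial
   * stack pointer instead of a stack size to do its own math on.
   */
  void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
                       char *stack_ptr, k_thread_entry_t entry,
                       void *p1, void *p2, void *p3);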
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Define a macro which takes as input a requested stack buffer
size, and returns the size of the stack object needed to
contain it.
An optional architecture interface is provided, though many
arches will be fine with the default implementation.
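Conceptually, a simple default implementation is just a round-up
plus any fixed reserved space. A hypothetical sketch (the macro
name is illustrative):

  /* Round the requested buffer size up to the required alignment
   * and add the per-stack reserved data area.
   */
  #define STACK_OBJ_SIZE(req) \
          (ROUND_UP((req), ARCH_STACK_PTR_ALIGN) + K_KERNEL_STACK_RESERVED)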
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some forthcoming macros will need this. This may now
be individually defined by arch code instead of hanging
off of ARCH_THREAD_STACK_DEFINE.
Some additional details added to the documentation string.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This can be an inline function, which gains us some
type safety.
Temporarily define Z_STACK_PTR_ALIGN() in terms of it,
slated for removal once some other changes land.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
A new macro Z_THREAD_STACK_OBJ_ALIGN() defines the alignment
of the lowest memory address of a stack object.
Related is a new arch interface ARCH_THREAD_STACK_OBJ_ALIGN()
which lets arches specify this. ARCH_STACK_PTR_ALIGN or a
power-of-two ceiling is used if not defined.
The default stack macros now use this instead of hard-coding
ARCH_STACK_PTR_ALIGN.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Three variants provided:
- Drivers with one MMIO region to manage
- Drivers with multiple MMIO regions to manage
- Drivers or driver-like code which doesn't use Zephyr's device
model
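A hedged example of the single-region, device-model case (driver
structure simplified; only the DEVICE_MMIO_* usage reflects the new
API, everything else is illustrative):

  struct my_dev_config {
          DEVICE_MMIO_ROM;    /* physical MMIO region from DTS */
  };

  struct my_dev_data {
          DEVICE_MMIO_RAM;    /* mapped address, filled at runtime */
  };

  static int my_dev_init(const struct device *dev)
  {
          /* Map the region (or pass the address through on non-MMU). */
          DEVICE_MMIO_MAP(dev, K_MEM_CACHE_NONE);

          /* Register offset 0x04 is a made-up example. */
          sys_write32(0x1, DEVICE_MMIO_GET(dev) + 0x04);
          return 0;
  }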
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will be the interface for mapping memory in the kernel's
part of the address space, which is guaranteed to be persistent
regardless of what thread is scheduled.
Further code for specifically managing virtual memory will end up in
kernel/mmu.c.
Further definitions for memory management in general will end up
in sys/mem_manage.h.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The mm_reg_t type used to represent a MMIO address (e.g. in the
sys_write*() APIs) was defined as a uint32_t, which is obviously wrong
on 64 bit systems where devices can be mapped anywhere.
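The natural fix (sketched here) is to make the type pointer-width:

  /* Wide enough to hold any mapped MMIO address on both 32- and
   * 64-bit targets.
   */
  typedef uintptr_t mm_reg_t;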
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
For iterable areas defined with Z_STRUCT_SECTION_ITERABLE(),
the corresponding output section in the linker script is just
boilerplate. Add macros to make these definitions simpler.
Unfortunately, we have a fair number of iterable sections not
defined with Z_STRUCT_SECTION_ITERABLE(); this patch does not
address those.
The output sections are all named <struct name>_area; sanitylib.py
has been updated accordingly.
sys_sem with no userspace, and k_lifo/k_fifo are special cases
where different data types that are all equivalent need to be
put in the same iterable area. Add
Z_STRUCT_SECTION_ITERABLE_ALTERNATE() for this special case.
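For context, an iterable section pairs a definition with iteration
over everything the linker placed in that section. A hedged example
(the struct and its fields are illustrative):

  struct my_entry {
          const char *name;
          int value;
  };

  /* Placed in the "my_entry_area" output section. */
  Z_STRUCT_SECTION_ITERABLE(my_entry, entry_foo) = { "foo", 1 };

  void dump_entries(void)
  {
          Z_STRUCT_SECTION_FOREACH(my_entry, entry) {
                  printk("%s=%d\n", entry->name, entry->value);
          }
  }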
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add support for a C11-style aligned_alloc() in the heap
implementation. This is properly optimized, in the sense that unused
prefix/suffix data around the chosen allocation is returned to the
heap and made available for general allocation.
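Illustrative usage with the sys_heap API (sizes and alignment are
arbitrary):

  static uint8_t heap_mem[4096];
  static struct sys_heap heap;

  void example(void)
  {
          sys_heap_init(&heap, heap_mem, sizeof(heap_mem));

          /* 64-byte aligned block; unused prefix/suffix space around
           * it goes back to the heap rather than being wasted.
           */
          void *buf = sys_heap_aligned_alloc(&heap, 64, 200);

          sys_heap_free(&heap, buf);
  }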
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The GET_ARG_N macro gets the nth argument from the provided
argument list. GET_ARGS_LESS_N returns the argument list without
the first n arguments.
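For example (illustrative expansions):

  GET_ARG_N(2, a, b, c)       /* expands to: b */
  GET_ARGS_LESS_N(1, a, b, c) /* expands to: b, c */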
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Since this is an experimental API and MACRO_MAP() was deprecated in
favor of FOR_EACH() in zephyr v2.3.0, we are within our rights to just
remove it without notice now. Do so and mention it in the release
notes.
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>