Padding inserted after the (first-stage) vector table,
so that the Zephyr image does not attempt to use the
area which we reserve to detect null pointer dereferencing
(0x0 - <size>). If the end of the vector table section is
higher than the upper end of the reserved area, no padding
will be added. Note also that the padding will be added
only once, to the first stage vector table, even if the current
snippet is included multiple times (this is for a corner case,
when we want to use this feature together with SW Vector Relaying
on MCUs without VTOR but with an MPU present).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Additions to the null-pointer exception detection mechanism
for ARMv8-M Mainline MCUs.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Implement the functionality to detect null pointer dereference
exceptions via the DWT unit in the ARMv7-M Mainline MCUs.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When we enable the null pointer exception feature (using DWT)
we include debug.c in the build. debug.c contains the functions
to configure and enable null pointer detection using the Data
Watchpoint and Trace (DWT) unit.
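A minimal sketch of the kind of DWT programming this enables, using
CMSIS register names and assuming a power-of-two reserved region (the
header path and function name are illustrative, not the exact debug.c
contents):

  #include <stdint.h>
  #include <cmsis_core.h> /* assumption: CMSIS Core definitions for DWT */

  /* Watch the reserved [0x0, size) region for any data access. */
  static void null_ptr_detect_enable(uint32_t size)
  {
      /* Enable the DWT block and the debug monitor exception. */
      CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk |
                          CoreDebug_DEMCR_MON_EN_Msk;

      DWT->COMP0 = 0x0UL;               /* base of the reserved area */
      DWT->MASK0 = __builtin_ctz(size); /* ignore the low address bits */
      DWT->FUNCTION0 = 0x7UL;           /* watchpoint on read or write */
  }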
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Extend the debug monitor exception handler to
- return recoverable faults when the debug monitor
is enabled but we do not get an expected DWT event,
- call a debug monitor routine to check for null pointer
exceptions.
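A rough illustration of the check in the second item (CMSIS register
names; the real routine lives in the Cortex-M fault handling code):

  #include <stdbool.h>
  #include <cmsis_core.h> /* assumption: CMSIS Core definitions */

  /* True if the debug monitor event came from a DWT comparator match,
   * i.e. a potential null-pointer dereference caught by the watchpoint.
   */
  static bool debug_event_is_dwt_match(void)
  {
      return (SCB->DFSR & SCB_DFSR_DWTTRAP_Msk) != 0U;
  }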
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Move the DWT utility functions, present in timing.c
into an internal cortex-m header.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Introduce the required Kconfig symbol framework for the
Cortex-M-specific null pointer dereferencing detection
feature. There are two implementations (based on DWT and
MPU) so we introduce the corresponding choice symbols,
including a choice symbol to signify that the feature
is to be disabled.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The most common secure monitor firmware in the ARM world is TF-A. The
current release allows up to 8 64-bit values to be returned from an
SMC64 call from AArch64 state.
Extend the number of possible return values from 4 to 8.
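The resulting result structure presumably takes this shape (a sketch
following the SMCCC convention; field names assumed):

  /* Results of an SMC64 call: TF-A may return up to eight values
   * in x0-x7 when called from AArch64 state. */
  struct arm_smccc_res {
      unsigned long a0, a1, a2, a3; /* previously the only four */
      unsigned long a4, a5, a6, a7; /* newly captured returns */
  };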
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Instead of relying on hardcoded offsets in the assembly code, introduce
offset macros to make the code clearer.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The current code assumes that the SMC/HVC helpers can only be used
by the PSCI driver. This is wrong because a mechanism to call into the
secure monitor should be available regardless of whether PSCI is used.
For example, several SoCs rely on SMC calls to read/write e-fuses,
retrieve the chip ID, control power domains, etc.
This patch introduces a new CONFIG_HAS_ARM_SMCCC symbol to enable the
SMC/HVC helpers support and export that to drivers that require it.
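For instance, a SoC driver could use the exported helpers roughly like
this (the function ID and header path are illustrative, not a real
vendor interface):

  #include <stdint.h>
  #include <arch/arm64/arm-smccc.h> /* assumption: SMCCC helper header */

  #define SIP_SVC_GET_CHIP_ID 0xC2000001UL /* hypothetical SiP function ID */

  static uint32_t soc_read_chip_id(void)
  {
      struct arm_smccc_res res;

      /* Trap into the secure monitor; the result comes back in res.a0. */
      arm_smccc_smc(SIP_SVC_GET_CHIP_ID, 0, 0, 0, 0, 0, 0, 0, &res);
      return (uint32_t)res.a0;
  }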
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This is fundamental enough that it better be initialized ASAP.
Many other things get initialized soon afterwards assuming the MMU
is already operational.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The location of __kernel_ram_start is too far in, and the _app_smem
and .bss areas are not covered. Use _image_ram_start instead.
The location of __kernel_ram_end is also way too far. We should stop at
_image_ram_end, where the expected unmapped area starts.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
It is easier to cover multiple segments this way, especially since
not all boundary symbols from the linker script come with a size
derivative.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The MT_OVERWRITE case is much more common. Redefine that flag as
MT_NO_OVERWRITE instead for those fewer cases where it is needed.
One such case is platform provided mappings. Apply them after the
common kernel mappings and use the MT_NO_OVERWRITE on them.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
There is no real reason for keeping page tables in separate pools.
Make the pool global, which allows for more efficient memory usage and
simplifies the code.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce a remove_map() to ... remove a mapping.
Add a use count to the page table pool so pages can be dynamically
allocated, deallocated and reused.
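A minimal sketch of the use-counted pool idea (sizes and names are
illustrative):

  #include <stddef.h>
  #include <stdint.h>

  #define NUM_XLAT_TABLES 16  /* illustrative pool size */
  #define XLAT_ENTRIES    512 /* one 4KB table of 8-byte descriptors */

  static uint64_t xlat_tables[NUM_XLAT_TABLES][XLAT_ENTRIES]
          __attribute__((aligned(4096)));
  static uint16_t xlat_use_count[NUM_XLAT_TABLES];

  /* Grab a free table from the global pool, or NULL when exhausted. */
  static uint64_t *new_table(void)
  {
      for (size_t i = 0; i < NUM_XLAT_TABLES; i++) {
          if (xlat_use_count[i] == 0U) {
              xlat_use_count[i] = 1U;
              return xlat_tables[i];
          }
      }
      return NULL;
  }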
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add a newer, much smaller and simpler implementation of abort and
join. No need to involve the idle thread. No need for a special code
path for self-abort. Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation. All work in both
calls happens under a single locked path with no unexpected
synchronization points.
This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.
Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The ARC MPU version used a wrong number, 3, which could cause conflicts
in the future. This commit fixes the version number to 4.
Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
We need to form the ASSERT expression inside the MemManage
fault handler for the case where we build without USERSPACE
and MPU STACK GUARD support in the same way it is formed for
the case with USERSPACE or MPU STACK GUARD support; that
is, we only assert if we came across a stacking error.
Data access violations can still occur even without user
mode or guards, e.g. when trying to write to Read-only
memory (such as the code region).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Remove duplication in the code by moving the LOCKED() macro to the
correct kernel_internal.h header.
Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
This changes x86 to use CONFIG_SRAM_OFFSET instead of
arch-specific CONFIG_X86_KERNEL_OFFSET. This allows the common
MMU macros Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
function properly if we ever need to map the kernel into
virtual address space that does not have the same starting
physical address.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from beginning of SRAM where the kernel begins. On x86 and
PC compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.
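Conceptually, the boot-time translation then becomes (a sketch; the
real Z_BOOT_VIRT_TO_PHYS() lives in the MMU headers):

  #define BOOT_VIRT_TO_PHYS(virt)                              \
      ((uintptr_t)(virt) -                                     \
       (CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET) +     \
       (CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_OFFSET))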
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Instead of doing these in assembly, use the common z_bss_zero()
and z_data_copy() C functions. This simplifies the code
a bit and we won't miss any additions to these two functions
(if any) under x86 in the future (as x86_64 was actually not
clearing gcov bss area).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This moves calling z_loapic_enable() from crt0.S into
z_x86_prep_c(). This is done so we can move BSS clearing
and data section copying inside z_x86_prep_c() as
these are needed before calling z_loapic_enable().
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a new kconfig to enable the use of a memory map.
The map can be populated automatically if
CONFIG_MULTIBOOT_MEMMAP=y or can be manually defined
via x86_memmap[].
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is a hidden option to indicate we are building for
PC-compatible devices (which have BIOS, ACPI, etc.,
as is standard on such devices).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This reverts commit b6b6d39bb6.
With both commit 4690b8d5ec ("libc/minimal: fix malloc() allocated
memory alignment") and commit c822e0abbd ("libc/minimal: fix
realloc() allocated memory alignment") in place, there is no longer
a need for enforcing the big heap mode on every allocations.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Write through switched_from in arch_switch() as required by the
switch protocol.
Also restructure the implementation to better match the template in
kernel_arch_interface.h by removing a wrapper routine and instead
using CONTAINER_OF().
Fixes #32197
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
Add the missing pieces to enable XIP for AArch64. XIP can be
simulated in QEMU using the '-bios' parameter.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Before accessing the multiboot data passed by the bootloader,
we need to map the memory first. This adds the code to map
the memory if necessary.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We assume that all x86 CPUs have the clflush instruction,
and the cache line size is now provided through DTS.
Detecting the clflush instruction as well as the cache line size is no
longer required at runtime, and the detection code is removed.
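A flush loop can then be resolved fully at build time, along these
lines (assuming the DTS-provided line size surfaces as a fixed
constant):

  #include <stddef.h>
  #include <stdint.h>

  #define CACHE_LINE_SIZE 64 /* assumption: fixed at build time from DTS */

  static void dcache_flush_range(void *addr, size_t size)
  {
      uintptr_t start = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
      uintptr_t end = (uintptr_t)addr + size;

      for (uintptr_t p = start; p < end; p += CACHE_LINE_SIZE) {
          __asm__ volatile("clflush %0" :: "m"(*(char *)p) : "memory");
      }
      __asm__ volatile("mfence" ::: "memory"); /* order the flushes */
  }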
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
This adds the X86 keyword to the kconfigs to indicate these are
for x86. The old options are still there, marked as
deprecated.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Only the CAVS 1.5 linker script has full support for the coherence
features, so don't advertise it on the other SoCs yet.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
While fixing license headers, this script was identified as an orphan
not being used anywhere, so remove it.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The call to sys_trace_idle() is potentially clobbering x0 resulting in a
wrong value being used by the following code. Save and restore x0 before
and after the call to sys_trace_idle() to avoid any issue.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Suggested-by: James Harris <james.harris@intel.com>
With x86, there are usually memory regions that are reserved
for firmware and device MMIOs. We don't want to use these
pages for memory mapping so mark them as reserved at boot.
The weakly defined x86_memmap contains the list of memory
regions, which can be overridden by SoC or board configurations.
Also, with CONFIG_MULTIBOOT_MEMMAP=y, the memory regions
are populated from multiboot provided data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Additional stack is required for tests when building with FPU_SHARING
enabled, because the option may increase the ESF
stacking requirements for threads.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
XCC doesn't like the "rsr.<reg name>" style of assembly,
so switch to the "rsr <reg>, <special register>" form.
Also, XCC doesn't like _CONCAT() with the EPC/EPS
registers, so we need to spell all of them out.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is a hard-coded value of PS_INTLEVEL(15) used to set the PS
register. The correct way is actually to use XCHAL_EXCM_LEVEL
with PS_INTLEVEL() to set up the register. So fix it.
Fixes #31858
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The x86_64 SysV ABI requires 16 byte alignment for the stack pointer
during execution of normal code. That means that on entry to an
ABI-compatible C function (which is reached via a CALL instruction
that pushes the return address) the RSP register must be MISaligned by
exactly 8 bytes. The kernel mode thread setup got this right, but we
missed the equivalent condition in userspace entry.
The end result was a misaligned stack, which is surprisingly robust
for most uses. But recent toolchains have started doing more
elaborate vectorization, and the resulting SSE instructions started
failing in userspace on the misaligned loads.
Note that there's a comment about optimization: we're doing the stack
alignment in the "wrong place" and are needlessly wasting bytes in
some cases. We should see the raw stack boundaries where we are
setting up RSP values. Add a FIXME to this effect, but don't touch
anything as this patch is a targeted bugfix.
Also fix a somewhat embarrassing 32-bit-ism that would have truncated
the address of a userspace stack that we tried to put above 4G.
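The required entry condition can be sketched like this (illustrative,
not the exact code):

  #include <stddef.h>
  #include <stdint.h>

  /* SysV AMD64: RSP % 16 == 8 at function entry, as if reached via
   * CALL. Doing the math in uintptr_t also avoids the 4G truncation
   * mentioned above. */
  static uintptr_t userspace_initial_rsp(uintptr_t stack_base, size_t size)
  {
      uintptr_t rsp = (stack_base + size) & ~(uintptr_t)15; /* 16-aligned */

      return rsp - 8; /* misaligned by exactly 8, as the ABI expects */
  }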
Fixes #31018
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When VTOR is implemented on the Cortex-M SoC, we can
basically use any (properly aligned) address as the
vector table start. In this commit we fix the setting of
VTOR in prep_c.c for non-XIP images, so we no longer
need to always have the vector table present at the start
of RAM (CONFIG_SRAM_BASE_ADDRESS), allowing extra linker
sections to be placed before the vector table section.
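In essence the fix amounts to something like the following (CMSIS
register names; _vector_start is the usual linker-provided symbol):

  #include <cmsis_core.h> /* assumption: CMSIS Core definitions */

  extern char _vector_start[]; /* link address of the vector table */

  static void vector_table_relocate(void)
  {
      SCB->VTOR = (uint32_t)_vector_start & SCB_VTOR_TBLOFF_Msk;
      __DSB(); /* make sure the new table takes effect before continuing */
      __ISB();
  }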
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If CONFIG_EXTRA_EXCEPTION_INFO is enabled, log
the value of EXC_RETURN in the fault handler.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Under FPU sharing mode, any thread is allowed to generate
a Floating Point context (use FP registers in FP instructions),
regardless of whether threads are pre-tagged with the K_FP_REGS
option when they are created.
When building with MPU stack guard feature enabled,
a large MPU stack guard is required to catch stack
overflows, if lazy FP stacking is enabled. When lazy
FP stacking is not enabled, a default 32 byte guard is
sufficient.
If lazy stacking is enabled by default, all threads may
potentially generate FP context, so they would need to
program a large MPU guard, carved out of their reserved
stack memory.
To avoid this memory waste, we modify the behavior, and make
lazy stacking a dynamically enabled feature, implemented as
follows:
- threads that are not pre-tagged with K_FP_REGS and have
not generated an FP context use a default MPU guard and disable
lazy stacking. As long as these threads do not have an active FP
context, they won't stack FP registers anyway on ISRs and
exceptions, while they benefit from reserving a small
MPU guard size
- as soon as a thread starts using FP registers, ISRs might
temporarily experience some increased latency due to lazy
stacking being disabled. This will be the case until the next
context switch, where the threads that have an active FP context
will be tagged with K_FP_REGS, enable lazy stacking, and
program a wide MPU guard.
The implementation is a tradeoff between performance (ISR
latency) and memory consumption.
Note that when the MPU STACK GUARD feature is not enabled, lazy
FP stacking is always activated; a sketch of the toggle follows.
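A sketch of the per-thread toggle (CMSIS register names; the actual
hook sits in the context-switch path):

  #include <stdbool.h>
  #include <cmsis_core.h> /* assumption: CMSIS Core definitions */

  /* Enable lazy state preservation for FP-active threads (wide guard),
   * disable it otherwise (small guard, eager stacking in ISRs). */
  static inline void fpu_lazy_stacking_set(bool enable)
  {
      if (enable) {
          FPU->FPCCR |= FPU_FPCCR_LSPEN_Msk;
      } else {
          FPU->FPCCR &= ~FPU_FPCCR_LSPEN_Msk;
      }
  }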
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
For applications that make use of the FPU in Cortex-M,
we enforce the FPU sharing registers mode, because the
compiler, under certain optimization regimes, may use
FP instructions and create FP context in any thread,
so the unshared registers mode is not practically
supported.
In addition to that we force FPU_SHARING to depend on
MULTITHREADING, as FPU sharing mode does not make sense
outside the normal multi-threaded builds.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
For the standard multi-threading builds, we will
enforce FP context stacking only when FPU_SHARING
is set. For the single-threading use case we enable
context stacking by default.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If clearing the CONTROL register is done in reset.S, we can skip
clearing the FPCA bit when enabling the floating point
support, to save a few instructions. The CONTROL
register is cleared right after boot if the symbol
CONFIG_INIT_ARCH_HW_AT_BOOT is enabled.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This adds a new GEN_ABSOLUTE_SYM_KCONFIG() specifically for
generating absolute symbols in assembly for kconfig values.
This is needed as the existing GEN_ABSOLUTE_SYM() with
constraints in extended assembly parses the "value" as
signed 32-bit integers. An unsigned 32-bit integer with
MSB set results in a negative number in the final binary.
This also prevents using integers larger than 32 bits. So this
new macro simply puts the value inline within the assembly
instruction instead of passing it as a parameter.
Fixes #31562
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
On Intel processors, if GS is not zero and is being set to
zero, GS_BASE is also set to zero. This would interfere
with the actual use of GS_BASE for userspace. To avoid accidentally
clearing GS_BASE, simply set GS to 0 at boot, so any subsequent
clearing of GS will not clear GS_BASE.
The clearing of GS_BASE was discovered while trying to figure out
why the mem_protect test would hang within 10-20 repeated runs.
GDB revealed that both GS and GS_BASE were set to zero when the tests
hung. After setting GS to zero at boot, the mem_protect tests
ran repeatedly 5,000+ times without hanging.
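The boot-time fix boils down to loading a zero selector once, e.g.
(a sketch):

  /* Load a null selector into GS during early boot so that any later
   * write of 0 to GS is a no-op with respect to GS_BASE. */
  static inline void gs_clear_at_boot(void)
  {
      __asm__ volatile("movw %w0, %%gs" :: "r"(0) : "memory");
  }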
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Major changes:
- move related functions together
- optimize add_map() not to walk the page tables *twice* on
every loop
- properly handle leftover size when a range is already mapped
- don't overwrite existing mappings by default
- return an error when the mapping fails
and make the code clearer overall.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Both _IRQ_VECTOR_TABLE_SECTION_NAME and _SW_ISR_TABLE_SECTION_NAME
are defined with asterisk at the end in an attempt to include
all related symbols in the linker script. However, these two
macros are also being used in the source code to specify
the destination sections for variables. Asterisks in the names
result in older GCC (4.x) complaining about those asterisks.
So create new macros for use in the linker script, and keep
the names asterisk-free.
Fixes #29936
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We've already enabled full RAM mapping if ACPI is enabled; also
set a large 3GB address space size. These systems are not RAM-
constrained (they are PC platforms) and they have large MMIO
config spaces for PCIe.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In the current interrupt nesting implementation, if an ISR is
interrupted while executing inside a branch, the lr_svc register will
be corrupted, and the branch of the interrupted ISR will exit to the
return address of the final branch of the interrupting ISR, which may
or may not correspond to the intended return address.
This commit fixes the aforementioned bug by storing the lr_svc register
on the stack at ISR entry, and restoring its value before exiting
the ISR.
For more details, refer to issue #30517.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit fixes the following bugs in the AArch32 z_arm_exc_exit
routine:
1. Invalid return address when calling `z_arm_pendsv` from the
exception-specific mode
2. Caller-saved register is referenced after a call to `z_arm_pendsv`
For more details, refer to issue #31511.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit updates the stale references to the `_IntExit` function in
the in-line documentation.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
The image header is compatible with the Linux AArch64 boot protocol,
so Zephyr can be booted with U-Boot or the Xen loader.
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
In its current state, the script tries to access the vector table
list without checking first that the index is valid. This can
cause the script to crash without a descriptive message.
The index can be invalid if an IRQ number that is larger than
the maximum number allowed by the SOC is used.
This PR adds a check of that index, which exits with an error
message if the index is invalid.
Fixes #29809
Signed-off-by: Yonatan Schachter <yonatan.schachter@gmail.com>
This option allows forcing big heap mode. Useful for getting 8-byte
aligned blocks on 32-bit machines.
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.
Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.
Instantiation of new memory domains may still require allocations
unless a common page table is used.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.
We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.
The default address space size is now 8MB, but this can be
tuned by the application.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really to just prevent QEMU from crashing.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.
Fix the calculations on X86 where this is the case.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This change uses the stack frame to print a backtrace once an
exception occurs. Printing the backtrace helps to identify the cause
of the exception.
Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
When zefi.py was changed to pass the compiler and objcopy, the flag
specifying the EFI target for objcopy was dropped. This is because the
current SDK (0.12.1) doesn't support that target type for objcopy.
However, the target is necessary for the images to be created correctly
and boot.
Switch back to use the host objcopy as a stop gap fix, until the SDK
can support target for EFI.
Fixes: #31517
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This changes the timing functions to use the TSC to gather
timing information instead of the timer used for scheduling,
as the TSC provides higher resolution for timing information.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This removes the z_ prefix from those (functions, enums, etc.) that
are being used outside the coredump subsys. This aligns better
with the naming convention.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the correct compiler and linker flags to
support software floating point operations. The flags
need to be added to TOOLCHAIN_*_FLAGS for GCC to find
the correct library (when calling GCC with
--print-libgcc-file-name).
Note that software floating point needs to be turned
on for Newlib. This is due to Newlib having floating
point numbers in its various printf() functions, which
results in floating point instructions being emitted
by the toolchain. These instructions are placed very
early in the functions, which results in them being
executed even though the format string contains
no floating point conversions. Without using CONFIG_FPU
to enable hardware floating point support, any calls to
printf()-like functions will result in exceptions
complaining that the FPU is not available. Forcing
CONFIG_FPU=y with newlib is an option, but because
the OS doesn't know which threads would call these
printf() functions, Zephyr has to assume all threads
use the FPU, thus incurring a performance penalty as
every context switch now needs to save FPU registers.
A compromise here is to use soft float instead. Newlib
with soft float enabled does not have floating point
instructions, and yet can still support its printf()-like
functions.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Currently, zefi.py takes the host GCC objcopy as
the default. Fix the script to use CMAKE_C_COMPILER
and CMAKE_OBJCOPY.
Fixes: #27047
Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
The only two supported operations for data caches in the cache framework
are currently arch_dcache_flush() and arch_dcache_invd().
This is quite restrictive because for some architectures we also want to
control i-cache and in general we want a finer control over what can be
flushed, invalidated or cleaned. To address these needs this patch
expands the set of operations that can be performed on data and
instruction caches, adding hooks for the operations on the whole cache,
a specific level or a specific address range.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The new APIs are not only dealing with cache flushing. Rename the
Kconfig symbol to CACHE_MANAGEMENT to better reflect this change.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The kconfig options to configure the cache flushing framework are
currently living in the arch-specific kconfigs of ARC and X86 (32-bit)
architectures even though these are defining the same things.
Move the common symbols to one place accessible by all the architectures
and create a menu for those.
Leave the default values in the arch-specific locations.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Firmware implementing the PSCI functions described in ARM document
number ARM DEN 0022A ("Power State Coordination Interface System
Software on ARM processors") can be used by Zephyr to initiate various
CPU-centric power operations.
It is needed for virtualization, it is used to coordinate OSes and
hypervisors and it provides the functions used for SMP bring-up such as
CPU_ON and CPU_OFF.
A new PSCI driver is introduced to setup a proper subsystem used to
communicate with the PSCI firmware, implementing the basic operations:
get_version, cpu_on, cpu_off and affinity_info.
The current implementation only supports PSCI 0.2 and PSCI 1.0.
The PSCI conduit (SMC or HVC) is set up by reading the corresponding
property in the DTS node.
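For example, CPU_ON through the SMC conduit looks roughly like this
(function ID per the PSCI 0.2 spec; helper names assumed):

  #include <stdint.h>
  #include <arch/arm64/arm-smccc.h> /* assumption: SMCCC helper header */

  #define PSCI_0_2_FN64_CPU_ON 0xC4000003UL /* per ARM DEN 0022 */

  static int psci_cpu_on(unsigned long cpuid, uintptr_t entry_point)
  {
      struct arm_smccc_res res;

      arm_smccc_smc(PSCI_0_2_FN64_CPU_ON, cpuid, entry_point, 0,
                    0, 0, 0, 0, &res);
      return (int)res.a0; /* PSCI return code; 0 is success */
  }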
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Increased stacks are required for the RISC-V 64-bit CI to pass. Most of
these were caught by the kernel stack sentinel.
The CMSIS stacks are for programs in samples/portability.
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
The CONFIG_FLOAT_HARD config previously enabled the C (compressed)
ISA extensions (CONFIG_COMPRESSED_ISA). This commit removes that
dependency.
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
Until now, any attempt to call printk prior to early serial init has
caused page faults due to the device not being mapped yet. Add a static
variable to track the pre-init status, and instead of page faulting
just suppress the characters and log a warning right after init to
give an indication that output characters have been lost.
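Conceptually (variable and function names are illustrative):

  #include <stdbool.h>

  extern void uart_out(char c); /* hypothetical MMIO/port output routine */

  static bool early_serial_ready;
  static unsigned int lost_chars;

  static void early_serial_putc(char c)
  {
      if (!early_serial_ready) {
          lost_chars++; /* device not mapped yet: drop instead of faulting */
          return;
      }
      uart_out(c);
  }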
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Using newlibc with AArch64 is causing an alignment fault in
z_bss_zero() when the code is run on real hardware (on QEMU the problem
is not reproducible).
The main cause is that the memset() function exported by newlibc is
using 'DC ZVA' to zero out memory.
While this is often a nice optimization, this is causing the issue on
AArch64 because memset() is being used before the MMU is enabled, and
when the MMU is disabled all data accesses will be treated as
Device_nGnRnE.
This is a problem because quoting from the ARM reference manual: "If the
memory region being zeroed is any type of Device memory, then DC ZVA
generates an Alignment fault which is prioritized in the same way as
other alignment faults that are determined by the memory type".
newlibc tries to be a bit smart about this, reading the DCZID_EL0
register before deciding whether to use 'DC ZVA' or not. While this is a
good idea for code running in EL0, currently the Zephyr kernel is
running in EL1. This means that the value of the DCZID_EL0 register is
actually retrieved from the HCR_EL2.TDZ bit, that is always 0 because
EL2 is not currently supported / enabled. So the 'DC ZVA' instruction is
unconditionally used in the newlibc memset() implementation.
The "standard" solution for this case is usually to use a different
memset routine to be specifically used for two cases: (1) against IO
memory or (2) against normal memory but with MMU disabled (which means
all memory is considered device memory for data accesses).
To fix this issue in Zephyr we avoid calling memset() when clearing the
bss, and instead we use a simple loop to zero the memory region.
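The replacement is essentially a plain store loop, which is safe with
the MMU off because it never emits 'DC ZVA' (a sketch; boundary symbol
names per the Zephyr linker scripts):

  #include <stdint.h>

  extern char __bss_start[], __bss_end[]; /* linker-provided boundaries */

  void z_bss_zero(void)
  {
      for (uint64_t *p = (uint64_t *)__bss_start;
           p < (uint64_t *)__bss_end; p++) {
          *p = 0ULL; /* plain stores: no alignment fault on device memory */
      }
  }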
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
In rare cases when a thread may overflow its stack, the
core will not report a Stacking Error. This is the case
when a large stack array is created, making the PSP cross
beyond the stack guard; in this case a MemManage fault
won't cause a stacking error (but only a Data Access
Violation error). We fix the fault handling logic so
such errors are reported as stack overflows and not as
generic CPU exceptions.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When the MMARVALID bit is not set, do not read the MMFAR
register to get the fault address in a MemManage fault.
This change prevents the fault handler from erroneously
assuming that MMFAR contains a valid address.
Currently Zephyr links reset-vector.S twice in xtensa builds:
into the bootloader and the main image. It is run at the end
of the boot loader execution and immediately after that again
in the beginning of the main code. This patch adds a
configuration option to select whether to link the file into the
bootloader or into the application. The default is the application,
as needed e.g. for QEMU; SOF links it into the bootloader as in
native builds.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Before hooking up the MMU driver code to the Zephyr MMU core code it's
better to match the expected variable types of the two parts.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The MMU code is currently assuming that Zephyr only uses one single set
of page tables shared by kernel and user threads. This could possibly
no longer be true in the future, when multiple sets of page tables can
be present and swapped at run-time.
With this patch a new arm_mmu_ptables struct is introduced that is used
to host a buffer pointing to the memory region containing the page
tables and the helper variables used to manage the page tables. This new
struct is then used by the ARM64 MMU code instead of assuming that the
kernel page tables are the only ones present.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The ARM64 MMU code used to create the page tables is strictly tied to
the custom arm_mmu_region struct. To be able to hook up this code to the
Zephyr MMU APIs we need to make it more generic.
This patch makes the mapping function more generic and creates a new
helper function add_arm_mmu_region() to map the regions defined by the
old arm_mmu_region structs using this new generic function.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
In the current code the base xlat table is a standalone array. This is
done because we know at compile time the size of this table so we can
allocate the correct size and save a bit of memory. All the other xlat
tables are statically allocated in a different array with full size.
With this patch we move all the page tables in one single array,
including the base table. This is probably going to waste a bit of
space, but it makes it easier to:
- have all the page tables mapped in one single contiguous memory region
instead of having to take care of two different arrays in two
different locations
- duplicate the page tables more quickly if we need to
- use a pre-allocated space to host the page tables
- use a pre-computed set of page tables saved in a contiguous memory
region
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
There may be Xtensa SoCs which don't have high enough interrupt
levels for EPC6/EPS6 to exist in _restore_context. So change
these to registers which should be available according to the ISA
config file.
Fixes #30126
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In the current MMU code a new table is created when mapping a memory
region that is overlapping with a block already mapped. The problem is
that the new table is created also when the new and old mappings have
the same attributes.
To avoid using a new table when not needed the attributes of the two
mappings are compared before creating the new table.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The original idea of using a custom switch-to-main-thread
function was to make sure the buffer for saving floating point
registers is aligned correctly, or else an exception would be
raised when saving/restoring those registers. Since
the struct of the buffer is defined with an alignment hint
for the toolchain, the alignment will be enforced by the toolchain
as long as the k_thread struct variable is a dedicated,
declared variable. So there is no need for the custom
switch-to-main-thread function anymore.
This also allows the stack usage calculation of
the interrupt stack to function properly as the end of
the interrupt stack is not being used for the dummy
thread anymore.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
As of today, the generic _irq_vector_table is used only on 32-bit
architectures; 64-bit architectures have their own implementation.
Make the vector size adjustable by using uintptr_t instead of uint32_t
for vectors.
The ARCv3 64-bit HS6x processors are going to be the first users of
this.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
ATOMIC_OPERATIONS_BUILTIN still has some problems in the mwdt toolchain,
so choose ATOMIC_OPERATIONS_C instead.
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
This commit adds the possibility to disable ECC in the Tightly Coupled
Memory in Cortex-R.
The linker script places stacks in this memory and marks it as
a .noinit section. With ECC enabled, stack read accesses without
a previous write result in a Data Abort exception.
Signed-off-by: Wojciech Sipak <wsipak@antmicro.com>
Accessing the stack below guard_end is always a bug. Some
instructions (like enter_s {r13-r26, fp, blink}) push a collection
of registers onto the stack. In this situation, fault_addr will be
less than guard_end, but sp will be greater than guard_end.
|------stack base-------| <--- high address
| |
| | <--- sp
|------stack top--------|
|------guard_end--------|
| | <--- fault_addr
| |
|------guard_start------| <--- low address
So we need to remove the SP check. The trade-off here is whether we
prefer 'false' classification of MPU stack guard area accesses as stack
errors or as general MPU errors. The faults get caught anyway; this is
just about classification: we don't see a strong need for the extra
check to only report stack pointer accesses to the guard area as stack
errors, instead of all accesses.
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
Convert the device to DEVICE_DEFINE instead of DEVICE_AND_API_INIT
so we can deprecate DEVICE_AND_API_INIT in the future.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Fix the following compilation error that happens when specifying a
fixed MMIO address for the UART through X86_SOC_EARLY_SERIAL_MMIO8_ADDR:
arch/x86/core/early_serial.c:30:26: error: #if with no expression
30 | #if DEVICE_MMIO_IS_IN_RAM
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
MT_SECURE or MT_NS is selected according to CONFIG_ARMV8_A_NS; to
simplify the code change, use MT_DEFAULT_SECURE_STATE instead.
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
posix mmap(), which might confuse people.
The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.
Parameter names to z_phys_map adjusted slightly to be more
consistent with names used in other memory mapping functions.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In _isr_wrapper, the interrupt ID read from the GIC is blindly used to
index into _sw_isr_table, which is only sized based on CONFIG_NUM_IRQ.
It is possible for both GICv2 and GICv3 to return 1023 for a handful
of scenarios, the simplest of which is a level sensitive interrupt
which has subsequently become de-asserted. Borrowing from the Linux
GIC implementation, a read that returns an interrupt ID of 1023 is
simply ignored.
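In C terms, the guard looks like this (a sketch of what the assembly
does; helper names as in the Zephyr GIC driver):

  #include <stdint.h>
  #include <sw_isr_table.h> /* assumption: _sw_isr_table declaration */

  #define GIC_SPURIOUS_INT 1023U

  void gic_isr_dispatch(void)
  {
      uint32_t irq = arm_gic_get_active(); /* read the GIC IAR */

      if (irq == GIC_SPURIOUS_INT) {
          return; /* stale or de-asserted interrupt: nothing to do */
      }
      _sw_isr_table[irq].isr(_sw_isr_table[irq].arg);
      arm_gic_eoi(irq); /* signal end of interrupt */
  }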
Minor collateral changes to gic.h to group !_ASMLANGUAGE content
together to allow this header to be used in assembler files.
Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
The page table implementation requires conversion between virtual
and physical addresses when creating and walking page tables. Add
phys_addr() and virt_addr() functions instead of hard-casting
these values, plus a macro for doing the same in ASM code.
Currently, all pages are identity mapped so VIRT_OFFSET = 0, but
this will now still work if they are not the same.
ASM language was also updated for 32-bit. Comments were left in
64-bit, as long mode semantics don't allow use of Z_X86_PHYS_ADDR
macro; this can be revisited later.
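A hedged sketch of the C side (identity mapping today means the offset
is zero, but the math stays valid if that changes; the offset macro
name is assumed):

  #include <stdint.h>

  #define VIRT_OFFSET 0UL /* assumption: virt-to-phys delta, 0 for now */

  static inline uintptr_t phys_addr(void *virt)
  {
      return (uintptr_t)virt - VIRT_OFFSET;
  }

  static inline void *virt_addr(uintptr_t phys)
  {
      return (void *)(phys + VIRT_OFFSET);
  }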
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In native_posix and nrf52_bsim add the cpu_hold() function,
which can be used to emulate the time it takes for code
to execute.
It is very similar to arch_busy_wait(), but while
arch_busy_wait() returns when the requested time has passed,
cpu_hold() ensures that the time passes in the caller's
context independently of how much time may pass in some
other context.
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Fix compiler warnings about the 'level' and 'entry' variables
('may be used uninitialized in this function').
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
The GIC interrupt controller driver is using a custom init function
called directly from the prep_c function. For consistency, move it
to use SYS_INIT.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This was reporting the wrong page tables for supervisor
threads with KPTI enabled.
Analysis of existing use of this API revealed no problems
caused by this issue, but someone may trip over it eventually.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>