Instead of having some special stack frame when first scheduling a new
thread and a new thread entry wrapper to pull out the needed data, we
can reuse the context restore code by adapting the initial stack frame.
This reduces the lines of code and simplifies the logic at the expense
of a slightly bigger initial stack frame.
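A minimal sketch of the idea (the struct layout, field names, and helper
name are illustrative assumptions, not the actual frame used):

  #include <stdint.h>

  /* The frame is laid out exactly as the context-restore code expects,
   * so scheduling a brand-new thread goes through the same path as
   * resuming a preempted one. */
  struct init_frame {
      uintptr_t pc;       /* restored as the "return" address: thread entry */
      uintptr_t args[3];  /* restored into the argument registers */
  };

  static void place_initial_frame(char *stack_ptr, uintptr_t entry,
                                  uintptr_t p1, uintptr_t p2, uintptr_t p3)
  {
      struct init_frame *frame =
          (struct init_frame *)(stack_ptr - sizeof(*frame));

      frame->pc = entry;
      frame->args[0] = p1;
      frame->args[1] = p2;
      frame->args[2] = p3;
      /* The normal restore code pops this frame and "returns" into entry. */
  }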
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The MWDT toolchain has the Stackcheck_alloca option enabled by default,
so it adds stack checking on top of Zephyr's own stack checking.
As this is completely redundant, let's drop it.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Previously, MPU register macros were only defined within their own
header files and could not be used by other parts of the program. This
commit unifies them.
Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
Both operands of an operator in which the usual arithmetic conversions
are performed shall have the same essential type category.
The changes convert integer constants to unsigned integer constants
where needed.
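For instance, a typical change looks like this (illustrative example,
not a specific line from the patch):

  #include <stdint.h>

  uint32_t set_flag(uint32_t reg, unsigned int bit)
  {
      /* Before: reg | (1 << bit) -- the constant 1 is essentially
       * signed, while reg is essentially unsigned.
       * After: use an unsigned constant so both operands of '|' (and
       * of '<<') have the same essential type category. */
      return reg | (1U << bit);
  }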
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
No need to mix super-short names with other structures that use full
names. Let's follow a more consistent naming scheme where each and
every attribute name is self-documenting (such as s/id/apic_id
etc...).
Also make CONFIG_ACPI usable through IS_ENABLED by enclosing the
exposed functions in ifdef CONFIG_ACPI.
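As a rough illustration of the IS_ENABLED pattern this enables (the
acpi_probe() function below is hypothetical):

  #include <sys/util.h>  /* IS_ENABLED() */

  void acpi_probe(void);  /* hypothetical ACPI function, for illustration */

  void configure_platform(void)
  {
      /* IS_ENABLED(CONFIG_ACPI) expands to 1 or 0 at compile time, so
       * the call below is compiled out when ACPI support is disabled,
       * without an #ifdef at the call site (a declaration or stub must
       * stay visible either way). */
      if (IS_ENABLED(CONFIG_ACPI)) {
          acpi_probe();
      }
  }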
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
- x0/x1 register printing is reversed
- The error stack frame struct (z_arch_esf_t) had the SPSR and ELR in
the wrong position, inconsistent with the order these regs are pushed
to the stack in z_arm64_svc. This caused all register printing to be
skewed by two.
- Verified by writing known values (abcd0000 -> abcd000f) to x0 - x15
and then forcing a data abort.
Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
We need the same logic for each SoC; instead of copy-pasting
things, just put this in a common file. This approach still
leaves the door open for custom memory layouts if desired.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
As discussed in #22668, there is additional risk associated with
splitting linker files, as one may update one script and not be
aware of the other. In particular, updating the GNU ld script but
not the MWDT one could break MWDT builds unnoticed, as MWDT is not
part of CI.
Let's create a single entry point for the linker template.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
The non-secure callable functions' section is only applicable
to Cortex-M with TrustZone-M extension. Remove it from AARCH64
linker script. (CONFIG_ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCTIONS
is only enabled for Cortex-M so this is a no-op, but still, it
is a useful cleanup.)
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This code had one purpose only: feeding timing information into a test.
It was not used by anything else. The custom trace points unfortunately
were not accurate, and this test was delivering information that
conflicted with other tests we have, due to the placement of such trace
points in the architecture and kernel code.
For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.
Furthermore, much of the assembly code used had issues.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
According to Intel 64 and IA-32 Architectures Software
Developer’s Manual, volume 3, chapter 8.2.5, LFENCE provides
a more efficient method of controlling memory ordering than
the CPUID instruction. So use LFENCE here, as all 64-bit
CPUs have LFENCE.
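Roughly, the fence can be emitted with inline assembly like this
(illustrative sketch, not necessarily the exact code used):

  static inline void memory_order_fence(void)
  {
      /* LFENCE waits for prior loads to complete before any later
       * instruction begins execution; on 64-bit CPUs it is cheaper
       * than the fully serializing CPUID instruction. */
      __asm__ volatile ("lfence" ::: "memory");
  }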
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Do not force the linker to place text sections one after another,
giving it more freedom to optimize.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
* Add a linker script template for the MWDT toolchain (linker-mwdt.ld).
* Move linker.ld to linker-gnu.ld (without changes).
* "linker.ld" is now a wrapper.
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Make the assembly code compatible with both the GNU
and MetaWare toolchains.
* Replace ".balign" with ".align".
The ".align" assembler directive is supported by all
ARC toolchains and is implemented the same way
across them.
* Replace "mov_s __certain_reg" with "mov __certain_reg".
Even though GCC encodes those mnemonics and even real
HW executes them, according to the PRM these registers
are restricted for mov_s, and CCAC rightfully refuses
to accept such mnemonics. So for compatibility and
clarity's sake we switch to the 32-bit mov instruction,
which allows use of all those registers.
* Add a "%%" prefix when accessing registers from inline
ASM, as required by MWDT (see the sketch after this list).
* Drop the "@" prefix when accessing symbols (defined in C
code) from ASM code, as required by MWDT.
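Roughly, the inline-assembly register change looks like this
(illustrative sketch only, not a line taken from the actual patch):

  static inline unsigned int read_stack_pointer(void)
  {
      unsigned int sp_val;

      /* Before: "mov %0, sp"
       * After:  "mov %0, %%sp" -- the "%%" register prefix is what
       * MWDT expects inside inline assembly. */
      __asm__ volatile ("mov %0, %%sp" : "=r"(sp_val));

      return sp_val;
  }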
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Replace the ASM sys_io implementation with identical C code for ARC.
This significantly improves portability, i.e. the compiler decides
which instructions to use for a particular CPU and / or
configuration.
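The C replacements boil down to volatile pointer accesses, roughly
(a sketch of the generic pattern rather than the exact file contents):

  #include <stdint.h>

  typedef uintptr_t mem_addr_t;

  static inline uint32_t sys_read32(mem_addr_t addr)
  {
      return *(volatile uint32_t *)addr;
  }

  static inline void sys_write32(uint32_t data, mem_addr_t addr)
  {
      *(volatile uint32_t *)addr = data;
  }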
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
The GNU toolchain and the MWDT (MetaWare) toolchain have different
styles for accessing arguments in assembly macros. Implement a
preprocessor macro to handle the difference.
Make all ASM macros in swap_macros.h compatible with both ARC
toolchains.
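A purely hypothetical sketch of the approach (the macro name, the
guard, and the exact per-toolchain argument syntax are assumptions for
illustration, not the actual change):

  /* Hypothetical wrapper hiding how each assembler spells a macro
   * argument reference: plain name for MWDT, backslash-prefixed for
   * GNU as (assumed here for illustration). */
  #if defined(__CCAC__)
  #define MACRO_ARG(x) x
  #else
  #define MACRO_ARG(x) \x
  #endif

  /* Usage inside an assembly macro shared by both toolchains:
   *   .macro set_r0 val
   *       mov r0, MACRO_ARG(val)
   *   .endm
   */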
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Switch to the _arch_switch() API that is required for an SMP-aware
scheduler instead of using the old arch_swap mechanism.
SMP is not supported yet but this is a necessary step in that direction.
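The switch-based interface has roughly this shape (a sketch of the
generic arch interface; the exact prototype may differ):

  /* switch_to is the incoming thread's switch handle; the outgoing
   * thread's handle is written through switched_from so the scheduler
   * can resume it later. */
  void arch_switch(void *switch_to, void **switched_from);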
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This implements the GDB remote protocol to talk with a host GDB during
a debug session. The implementation is divided into three layers:
1 - The top layer, responsible for the GDB remote protocol.
2 - An architecture-specific layer responsible for reading/writing
registers, setting breakpoints, handling exceptions, ...
3 - A transport layer used to communicate with the host.
Communication with GDB on the host is synchronous: the system stops
execution, waits for instructions, and resumes execution after a
"continue" or "step" command. The one exception in the protocol is
when the host sends a packet to cause an interruption, usually
triggered by a Ctrl-C; this implementation ignores that packet though.
This initial work supports only x86, using a UART as the backend.
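The architecture layer boils down to a handful of hooks the protocol
layer calls into; a rough sketch (hook names follow Zephyr's arch
interface but should be treated as illustrative):

  /* Layer 2: architecture-specific hooks. */
  void arch_gdb_init(void);      /* set up debug exception handling */
  void arch_gdb_continue(void);  /* resume execution ("continue") */
  void arch_gdb_step(void);      /* execute one instruction ("step") */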
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This reverts commit b51eeb03f4.
The linker script is now putting read-only material in writable
segments, which causes glibc with -D_FORTIFY_SOURCE=2 to abort.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
We modify the ARM Cortex-M-only API for managing the
security target state of the NVIC IRQs. We remove the
internal ASSERT checking, allowing the API to be called
for non-implemented NVIC IRQ lines. However, we still
give the user the option to check the success of the IRQ
target state setting operation by having the API function
return the resulting target state.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
All ISRs are meant to take a const struct device pointer, but to
simplify the change let's just make the parameter const; that
should be fine.
Fixes #27399
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
All ISRs are meant to take a const struct device pointer, but to
simplify the change let's just make the parameter const; that
should be fine.
Fixes #27399
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
There is no need for a custom, partially-ASM bitops implementation
for ARC; we can use the generic one.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
aarch64 has a bitops implementation fully identical to the generic
one, so drop the redundant code.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
aarch32/cortex_a_r has a bitops implementation fully
identical to the generic one, so drop the redundant code.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
As of today, 'include/arch/common/sys_io.h' has a generic
implementation of the MMIO accessors and memory bit manipulation
functions. That leads to several architectures like ARC, ARM/aarch64,
and ARM/aarch32/cortex_a_r redefining the entire 'common/sys_io.h'
even if they only have a different MMIO accessors implementation.
So split 'include/arch/common/sys_io.h' into
* sys_io.h - generic MMIO accessors
* sys_bitops.h - generic memory bit manipulation functions
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
To debug hard-to-reproduce faults/panics, it's helpful to get the full
register state at the time a fault occurred. This enables recovering
full backtraces and the state of local variables at the time of a
crash.
This PR introduces a new Kconfig option, CONFIG_EXTRA_EXCEPTION_INFO,
to facilitate this use case. The option enables the capturing of the
callee-saved register state (r4-r11 & exc_return) during a fault. The
info is forwarded to `k_sys_fatal_error_handler` in the z_arch_esf_t
parameter. From there, the data can be saved for post-mortem analysis.
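A sketch of how a fatal error handler might consume this (the storage
helper below is hypothetical; only k_sys_fatal_error_handler and the
esf parameter come from the description above):

  #include <kernel.h>
  #include <fatal.h>

  /* Hypothetical storage helper for post-mortem analysis. */
  void save_for_postmortem(const void *data, size_t len);

  void k_sys_fatal_error_handler(unsigned int reason,
                                 const z_arch_esf_t *esf)
  {
      /* With CONFIG_EXTRA_EXCEPTION_INFO, esf additionally carries the
       * callee-saved registers (r4-r11) and exc_return captured at
       * fault time; stash the whole frame before halting. */
      if (esf != NULL) {
          save_for_postmortem(esf, sizeof(*esf));
      }

      k_fatal_halt(reason);
  }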
To test the functionality, a new unit test was added to
tests/arch/arm_interrupt which verifies that the register contents
passed in the argument match the state leading up to the crash.
Signed-off-by: Chris Coleman <chris@memfault.com>
The x86 paging code has been rewritten to support another paging mode
and non-identity virtual mappings.
- Paging code now uses an array of paging level characteristics and
walks tables using for loops, as opposed to having different
functions for every paging level and lots of #ifdefs. The code is
now more concise and adding new paging modes should be trivial.
- We now support 32-bit, PAE, and IA-32e page tables.
- The page tables created by gen_mmu.py are now installed at early
boot. There are no longer separate "flat" page tables. These tables
are mutable at any time.
- The x86_mmu code now has a private header. Many definitions that did
not need to be in public scope have been moved out of mmustructs.h
and either placed in the C file or in the private header.
- Improvements to dumping page table information, with the physical
mapping and flags all shown
- arch_mem_map() implemented
- x86 userspace/memory domain code ported to use the new
infrastructure.
- add logic for physical -> virtual instruction pointer transition,
including cleaning up identity mappings after this takes place.
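A simplified sketch of the loop-based table walk (the level data and
helper names are illustrative, not the actual x86_mmu internals):

  #include <stdint.h>
  #include <stddef.h>

  #define NUM_LEVELS 4  /* IA-32e: PML4, PDPT, PD, PT */

  /* Per-level characteristics, top level first. */
  struct paging_level {
      unsigned int entries;     /* entries per table at this level */
      unsigned int addr_shift;  /* vaddr bits selecting the entry index */
  };

  static const struct paging_level levels[NUM_LEVELS] = {
      { 512, 39 }, { 512, 30 }, { 512, 21 }, { 512, 12 },
  };

  /* One loop walks every level instead of one function per level. */
  static uint64_t *table_walk(uint64_t *top_table, uintptr_t vaddr)
  {
      uint64_t *table = top_table;

      for (unsigned int lvl = 0; lvl < NUM_LEVELS; lvl++) {
          unsigned int idx = (vaddr >> levels[lvl].addr_shift) &
                             (levels[lvl].entries - 1U);
          uint64_t entry = table[idx];

          if (lvl == NUM_LEVELS - 1) {
              return &table[idx];   /* leaf (page) entry */
          }
          if ((entry & 1U) == 0U) {
              return NULL;          /* not present */
          }
          /* Descend: the entry holds the next table's physical address
           * (assumed identity-mapped for this sketch). */
          table = (uint64_t *)(uintptr_t)(entry & ~0xFFFULL);
      }

      return NULL;
  }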
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This adds the necessary bits in arch code, and Python scripts
to enable coredump support for ARM Cortex-M.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Use CONFIG_TRACING_ISR to exclude tracing of ISRs, just like other
architectures do.
Also, z_sys_trace_isr_exit was not defined (it was renamed some time
ago and this was forgotten...).
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
These macros are used to validate that no regions are programmed
that allow both writes and execution.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
* zephyr_linker_sources states that included RODATA and RWDATA
will be wrapped in an outer section, so add these sections for
the posix arch too.
Signed-off-by: Pete Johanson <peter@peterjohanson.com>
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.
Two new arch defines are introduced:
- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN
New public declaration macros:
- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF
If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.
Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
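Typical usage mirrors the existing thread-stack macros; a brief
sketch:

  #include <kernel.h>

  #define WORKER_STACK_SIZE 1024

  /* Stack for a thread that never enters user mode: no privilege
   * elevation stack and no user-mode-related overhead. */
  K_KERNEL_STACK_DEFINE(worker_stack, WORKER_STACK_SIZE);

  static struct k_thread worker_thread;

  static void worker(void *p1, void *p2, void *p3)
  {
      ARG_UNUSED(p1);
      ARG_UNUSED(p2);
      ARG_UNUSED(p3);

      /* supervisor-only work */
  }

  void start_worker(void)
  {
      k_thread_create(&worker_thread, worker_stack,
                      K_KERNEL_STACK_SIZEOF(worker_stack),
                      worker, NULL, NULL, NULL,
                      K_PRIO_PREEMPT(1), 0, K_NO_WAIT);
  }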
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This had been copy-pasted between linker scripts; create
a central header for it.
The linker scripts for xtensa and posix have very different
structure and have been left alone.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations;
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.
thread->stack_info is now set before arch_new_thread()
is invoked, and z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.
thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.
CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.
thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.
Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.
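With this, the arch hook just lays out its initial frame at the given
pointer; its shape is roughly (a sketch, the exact prototype may
differ slightly):

  /* stack_ptr already points at the properly aligned initial stack
   * pointer, with TLS and any random offset subtracted by the core
   * kernel; the arch layer only places its initial frame there. */
  void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
                       char *stack_ptr, k_thread_entry_t entry,
                       void *p1, void *p2, void *p3);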
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The NOP instruction is available via a builtin for ARC, so get rid of
all ASM inlines with NOP/NOP_S instructions.
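A rough sketch of the resulting helper, assuming the GCC-documented
__builtin_arc_nop() is usable with both toolchains:

  static inline void arch_nop(void)
  {
      /* No inline assembly needed; the builtin emits a nop. */
      __builtin_arc_nop();
  }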
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Zephyr has generic find_msb_set and find_lsb_set implementations.
They are based on builtins, so they are optimized enough.
Drop the custom ASM implementation.
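The generic versions are thin wrappers around compiler builtins,
roughly:

  #include <stdint.h>

  static inline unsigned int find_lsb_set(uint32_t op)
  {
      /* __builtin_ffs() returns the 1-based index of the least
       * significant set bit, or 0 if op is zero. */
      return __builtin_ffs(op);
  }

  static inline unsigned int find_msb_set(uint32_t op)
  {
      /* Most significant set bit, 1-based; 0 for an all-zero input. */
      return (op == 0U) ? 0U : (32U - __builtin_clz(op));
  }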
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Switch nSIM from the custom ARC UART to the ns16550 model. That will
allow us to use Zephyr images built for nSIM on other platforms
like HAPS, QEMU, etc...
This patch does the following:
* switch the nSIM board to ns16550 UART usage
* change the nSIM simulator configuration to use the ns16550 UART model
* drop checks for CONFIG_UART_NSIM in ARC code
* update the nSIM documentation
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>