This makes it possible to use the arm_arch_timer on
platforms such as Cortex-A9 or Cortex-R7 which have support for
the ARM Global Timer.
The global timer is a 64-bit incrementing counter, memory-mapped
in the private memory region.
Signed-off-by: Julien Massot <julien.massot@iot.bzh>
This adds the X86 keyword to the kconfigs to indicate these are
for x86. The old options are still there, marked as
deprecated.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add support for backup SRAM initialization found in multiple STM32
microcontrollers. Linker script facilities are also provided to make it
easy to define variables in the backup SRAM.
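For illustration, a minimal sketch of how a variable might be placed in
backup SRAM through such a linker section (the section name here is an
assumption, not the driver's actual API):
```
#include <stdint.h>

/* Hedged sketch: ".stm32_backup_sram" is an illustrative section
 * name; the linker script facilities place it in backup SRAM. */
__attribute__((section(".stm32_backup_sram")))
static uint32_t backup_boot_counter;
```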
Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
The linker script defines `z_mapped_size` as follows:
```
z_mapped_size = z_mapped_end - z_mapped_start;
```
This is done with the belief that precomputed values at link time will
make the code smaller and faster.
On Aarch64, symbol values are relocated and loaded relative to the PC
as those are normally meant to be memory addresses.
Now if you have e.g. `CONFIG_SRAM_BASE_ADDRESS=0x2000000000` then
`z_mapped_size` might still have a reasonable value, say 0x59334.
But, when interpreted as an address, that's very very far from the PC
whose value is in the neighborhood of 0x2000000000. That overflows the
4GB relocation range:
```
kernel/libkernel.a(mmu.c.obj): in function `z_mem_manage_init':
kernel/mmu.c:527:(.text.z_mem_manage_init+0x1c):
relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21
```
The solution is to define `Z_KERNEL_VIRT_SIZE` in terms of
`z_mapped_end - z_mapped_start` at the source code level. Given this
is used within loops that already start with `z_mapped_start` anyway,
the compiler is smart enough to combine the two occurrences and
dispense with a size counter, making the code effectively
slightly better for all while avoiding the Aarch64 relocation
overflow:
```
text data bss dec hex filename
1216 8 294936 296160 484e0 mmu.c.obj.arm64.before
1212 8 294936 296156 484dc mmu.c.obj.arm64.after
1110 8 9244 10362 287a mmu.c.obj.x86-64.before
1106 8 9244 10358 2876 mmu.c.obj.x86-64.after
```
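For illustration, a minimal sketch of the source-level definition (the
extern declarations are assumed here; the symbols are the linker ones
named above):
```
/* Sketch: compute the size from the linker symbols at the source
 * level, so no relocated "size" symbol must fit a PC-relative range. */
extern char z_mapped_start[];
extern char z_mapped_end[];

#define Z_KERNEL_VIRT_SIZE ((size_t)(z_mapped_end - z_mapped_start))
```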
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce an additional single-bit flag to hold
the current length of the MPU Stack Guard size
(long or default) for the case when building
with MPU_STACK_GUARD and FPU_SHARING, in Cortex-M
platforms.
In addition, the commit adds some documentation
about the existing status flags in the Arm
thread's arch.mode variable.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Major changes:
- move related functions together
- optimize add_map() not to walk the page tables *twice* on
every loop
- properly handle leftover size when a range is already mapped
- don't overwrite existing mappings by default
- return an error when the mapping fails
and make the code clearer overall.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Both _IRQ_VECTOR_TABLE_SECTION_NAME and _SW_ISR_TABLE_SECTION_NAME
are defined with an asterisk at the end in an attempt to include
all related symbols in the linker script. However, these two
macros are also being used in the source code to specify
the destination sections for variables. Asterisks in the name
result in older GCC (4.x) complaining about them.
So create new macros for use in the linker script, and keep
the existing names asterisk-free.
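A hedged sketch of the split (the _SYMS suffix is an assumption for
illustration):
```
/* C code uses the plain name as a section attribute... */
#define _IRQ_VECTOR_TABLE_SECTION_NAME .gnu.linkonce.irq_vector_table
/* ...while the linker script uses a wildcard variant to pull in
 * all related symbols. */
#define _IRQ_VECTOR_TABLE_SECTION_SYMS .gnu.linkonce.irq_vector_table*
```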
Fixes #29936
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Correct functioning of spinlocks requires that they be memory barriers.
Most architectures achieve this by using the GCC extended asm syntax to
force a compiler soft barrier at the point the interrupt status is
changing. This clobber was missing from the SPARC definition, so add
it.
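For illustration, a generic sketch of the idea (not the exact SPARC
sequence, and new_psr is an assumed variable): a "memory" clobber turns
the asm statement into a compiler barrier:
```
/* Without the "memory" clobber, GCC may reorder or cache memory
 * accesses across the interrupt-state change; with it, the asm
 * acts as a compiler soft barrier. The three nops cover the SPARC
 * v8 delay slots after a PSR write. */
__asm__ volatile ("wr %0, %%psr\n\tnop\n\tnop\n\tnop"
                  :
                  : "r" (new_psr)
                  : "memory");
```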
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
With classic volatile pointer access, gcc sometimes generates
access instructions with an immediate offset value, like
str w4, [x1], #4
Such instructions produce an invalid syndrome in the HSR register when
they are trapped by a hypervisor. This makes it impossible to emulate
device access in the hypervisor.
So we need to make sure that any access to device memory is done
with plain str/ldr instructions without offset.
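A hedged sketch of the approach (the signature is simplified for
illustration):
```
#include <stdint.h>

/* Force a plain "str" with no offset or writeback, so a trapped
 * access yields a syndrome the hypervisor can decode. */
static inline void sys_write32(uint32_t data, volatile void *addr)
{
	__asm__ volatile ("str %w0, [%1]"
			  :
			  : "r" (data), "r" (addr)
			  : "memory");
}
```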
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
The image header is compatible with the Linux aarch64 boot protocol, so
Zephyr can be booted with U-Boot or the Xen loader.
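For reference, a sketch of the header layout defined by the Linux arm64
boot protocol (Documentation/arm64/booting.rst); this mirrors the
documented fields, not Zephyr's exact source:
```
#include <stdint.h>

/* Linux arm64 "Image" header layout (64 bytes). */
struct arm64_image_header {
	uint32_t code0;       /* executable code (branch to entry)  */
	uint32_t code1;       /* executable code                    */
	uint64_t text_offset; /* image load offset from RAM base    */
	uint64_t image_size;  /* effective image size               */
	uint64_t flags;       /* endianness, page size, placement   */
	uint64_t res2;        /* reserved                           */
	uint64_t res3;        /* reserved                           */
	uint64_t res4;        /* reserved                           */
	uint32_t magic;       /* 0x644d5241, i.e. "ARM\x64"         */
	uint32_t res5;        /* reserved (PE/COFF header offset)   */
};
```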
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add linker symbols corresponding to the start and end of the
mapped Zephyr image. This is not used by the ARM arch yet, but
is required to compile the core kernel MMU code.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We will use this to map the kernel instead of all RAM.
The end of the kernel is always page-aligned, regardless
of CONFIG_SRAM_REGION_PERMISSIONS, as it must be mapped.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fix the sys_read32 return value from uint16_t to uint32_t.
The current implementation causes read/modify/write of 32-bit
registers to fail on the high bits.
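A minimal sketch of the bug (signatures simplified for illustration;
the real helper takes a memory address type):
```
#include <stdint.h>

/* Buggy: the uint16_t return type truncates bits 16..31, so a
 * read/modify/write of a 32-bit register loses the high bits. */
static inline uint16_t sys_read32_buggy(volatile uint32_t *addr)
{
	return *addr;
}

/* Fixed: return the full 32-bit value. */
static inline uint32_t sys_read32_fixed(volatile uint32_t *addr)
{
	return *addr;
}
```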
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
The inline asm code was not conveying in all cases that registers r1-r3
would get clobbered by the SVC handler code. In the cases where we can't
list r1-r3 in the clobber list, the registers need to show up as outputs
so the compiler knows that their values are not preserved by the callee.
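A hedged sketch of the pattern (the register variables, argument names
and SVC number are illustrative):
```
register uint32_t r0 __asm__("r0") = arg1;
register uint32_t r1 __asm__("r1") = arg2;
register uint32_t r2 __asm__("r2") = arg3;
register uint32_t r3 __asm__("r3") = call_id;

/* "+r" marks r0-r3 as read-write outputs, telling the compiler the
 * SVC handler may change them even when they can't be clobbers. */
__asm__ volatile ("svc #3"
		  : "+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3)
		  :
		  : "memory");
```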
Fixes #30393
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Firmware implementing the PSCI functions described in ARM document
number ARM DEN 0022A ("Power State Coordination Interface System
Software on ARM processors") can be used by Zephyr to initiate various
CPU-centric power operations.
It is needed for virtualization: it is used to coordinate OSes and
hypervisors, and it provides the functions used for SMP bring-up such as
CPU_ON and CPU_OFF.
A new PSCI driver is introduced to setup a proper subsystem used to
communicate with the PSCI firmware, implementing the basic operations:
get_version, cpu_on, cpu_off and affinity_info.
The current implementation only supports PSCI 0.2 and PSCI 1.0.
The PSCI conduit (SMC or HVC) is set up by reading the corresponding
property in the DTS node.
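A hedged sketch of using the driver for SMP bring-up (the Zephyr API
name is an assumption here; the function ID comes from ARM DEN 0022A):
```
#include <stdint.h>

/* PSCI 0.2 SMC64 function ID for CPU_ON, per ARM DEN 0022A. */
#define PSCI_0_2_FN64_CPU_ON 0xc4000003UL

/* Power on a secondary core; the driver issues the call through the
 * SMC or HVC conduit read from the DTS. psci_cpu_on() is an assumed
 * API name for illustration. */
int start_secondary(unsigned long target_mpidr, uintptr_t entry)
{
	return psci_cpu_on(target_mpidr, entry);
}
```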
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
1. Exclude the CODE UNREACHABLE line while generating the coverage
report.
2. Exclude the deprecated memory domain API when calculating code
coverage.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Adds a linker section for Cortex-M instruction tightly coupled memory
(ITCM), similar to the existing section for DTCM. A new executable MPU
region is not added as there isn't currently a need to make this section
accessible to user mode. This section can be enabled by setting a device
tree chosen node zephyr,itcm.
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
Adds missing DTCM_ADDR and CCM_ADDR macros needed by the linker script
generated by scripts/gen_relocate_app.py.
Moves associated bss_end, noinit_end, and data_end linker symbols to
account for section relocation. Without this change, the section sizes
calculated in z_bss_zero() and z_data_copy() are incorrect.
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
core-macros.h includes other files that are not part of the Xtensa HAL,
so make this esp32-specific.
Fixes #31301
Signed-off-by: Sylvio Alves <sylvio.alves@espressif.com>
x86 and x86_64 require certain alignment in the k_thread struct
since the buffer to save/restore FPU/SSE registers requires
strict alignment.
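For illustration, a sketch of the constraint (the FXSAVE area defined
by the x86 ISA is 512 bytes and must be 16-byte aligned; the struct
name here is illustrative):
```
#include <stdint.h>

/* The FXSAVE/FXRSTOR save area must be 16-byte aligned, so the
 * containing k_thread arch data needs matching alignment. */
struct fpu_save_area {
	uint8_t fxsave[512] __attribute__((aligned(16)));
};
```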
Fixes #29589
Fixes #29629
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Before hooking up the MMU driver code to the Zephyr MMU core code it's
better to match the expected variable types of the two parts.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The MMU code is currently assuming that Zephyr only uses one single set
of page tables shared by kernel and user threads. This may no longer be
true in the future, when multiple sets of page tables can be present and
swapped at run-time.
With this patch a new arm_mmu_ptables struct is introduced that is used
to host a buffer pointing to the memory region containing the page
tables and the helper variables used to manage the page tables. This new
struct is then used by the ARM64 MMU code instead of assuming that the
kernel page tables are the only ones present.
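A hedged sketch of the new struct (the field names are assumptions
based on the description above):
```
#include <stdint.h>

/* Per-set page table bookkeeping: a pointer to the buffer holding
 * the translation tables plus helper state used to manage them. */
struct arm_mmu_ptables {
	uint64_t *base_xlat_table;  /* memory region with the tables */
	unsigned int next_table;    /* next free table in the buffer */
};
```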
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Fix some issues in linker.ld when enabling the coverage report feature
for the qemu_x86_64 platform:
1. Move the gcov record data out of the BSS section, where it does not
belong.
2. Fix some test case crashes caused by the lack of memory page
alignment before the gcov rodata.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Depending on CONFIG_ARMV8_A_NS, either MT_SECURE or MT_NS is used. To
simplify the code, use MT_DEFAULT_SECURE_STATE instead.
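A minimal sketch of the idea (assuming the alias resolves at build
time from the Kconfig):
```
/* Resolve the default security state once, based on the Kconfig. */
#if defined(CONFIG_ARMV8_A_NS)
#define MT_DEFAULT_SECURE_STATE MT_NS
#else
#define MT_DEFAULT_SECURE_STATE MT_SECURE
#endif
```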
Signed-off-by: Peng Fan <peng.fan@nxp.com>
As the timer interrupt is level-triggered, we need to mask it before
leaving the ISR or it will be delivered again.
Also, Xen automatically masks the timer interrupt when it injects the
IRQ to a guest, so we need to unmask it again when setting a new
timeout.
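A hedged sketch of the resulting flow (the helper names are
illustrative, not the driver's exact API):
```
#include <stdint.h>

/* On expiry: level-triggered, so mask before leaving the ISR,
 * otherwise the IRQ is delivered again immediately. */
static void timer_isr(const void *arg)
{
	/* ... announce elapsed ticks to the kernel ... */
	arm_arch_timer_mask_irq(1);
}

/* When arming a new timeout: Xen masked the IRQ on injection,
 * so unmask it again here. */
static void timer_set_timeout(uint64_t compare)
{
	arm_arch_timer_set_compare(compare);
	arm_arch_timer_mask_irq(0);
}
```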
Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Signed-off-by: Peng Fan <peng.fan@nxp.com>
- Remove the SYS_ prefix
- Shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE
and use PM_ as the prefix for all PM-related Kconfigs
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Such an interrupt remapping controller may be found along with Intel
VT-D hardware. Its base address is obtained via ACPI, and it enables up
to 64K interrupt indexes.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
And implement the support for intel64, which is, for now, the only
architecture that requires it.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
This is part of Intel VT-D: it describes how to discover capabilities,
base addresses and so on in order to start taking advantage of it.
There is a lot to get from there, but currently we are only interested
in getting the remapping hardware base address, and more specifically
for interrupt remapping usage.
There might be more than one such hardware unit, so the exposed function
is made to retrieve all of them.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
ARM64 is currently using SP_ELx as stack pointer for kernel and threads
because everything is running in EL1. If support for EL0 is required, it
is necessary to switch to use SP_EL0 instead, that is the only stack
pointer that can be accessed at all exception levels by threads.
While it is not required to keep using SP_EL0 during exceptions as
well, the current code implementation makes it easier to use the same
stack pointer as the one used by threads also during exceptions.
This patch moves the code from using SP_ELx to SP_EL0 and fills in the
missing entries in the vector table.
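For illustration, selecting SP_EL0 as the active stack pointer is a
one-instruction operation at EL1 (a sketch, not the full vector-table
change):
```
/* SPSel.SP = 0: subsequent SP accesses at EL1 use SP_EL0. */
__asm__ volatile ("msr SPSel, #0" ::: "memory");
```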
Signed-off-by: Carlo Caione <ccaione@baylibre.com>