Commit Graph

1021 Commits

Author SHA1 Message Date
Anas Nashif
6e27478c3d benchmarking: remove execution benchmarking code
This code had one purpose only: feeding timing information into a test.
It was not used by anything else. The custom trace points were
unfortunately not accurate, and this test was delivering information
that conflicted with other tests we have, due to the placement of such
trace points in the architecture and kernel code.

For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.

Furthermore, much of the assembly code used had issues.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-05 13:28:38 -05:00
Daniel Leung
80fb6538b3 x86: use =A as output for RDTSC on x86-32
The timing_info benchmark on qemu_x86 shows this is a bit faster.

Before:
  START - Time Measurement
  Timing results: Clock frequency: 1000 MHz
  Context switch                               : 896 cycles ,   895 ns
  Interrupt latency                            : 768 cycles ,   767 ns
  Tick overhead                                :14912 cycles , 14911 ns
  Thread creation                              :18688 cycles , 18687 ns
  Thread abort (non-running)                   :49216 cycles , 49215 ns
  Thread abort (_current)                      :55616 cycles , 55615 ns
  Thread suspend                               :11072 cycles , 11071 ns
  Thread resume                                :10272 cycles , 10271 ns
  Thread yield                                 :12213 cycles , 12212 ns
  Thread sleep                                 :17984 cycles , 17983 ns
  Heap malloc                                  :21702 cycles , 21701 ns
  Heap free                                    :15176 cycles , 15175 ns
  Semaphore take with context switch           :19168 cycles , 19167 ns
  Semaphore give with context switch           :18400 cycles , 18399 ns
  Semaphore take without context switch        :2208 cycles ,  2207 ns
  Semaphore give without context switch        :4704 cycles ,  4703 ns
  Mutex lock                                   :1952 cycles ,  1951 ns
  Mutex unlock                                 :7936 cycles ,  7935 ns
  Message queue put with context switch        :20320 cycles , 20319 ns
  Message queue put without context switch     :5792 cycles ,  5791 ns
  Message queue get with context switch        :22112 cycles , 22111 ns
  Message queue get without context switch     :5312 cycles ,  5311 ns
  Mailbox synchronous put                      :27936 cycles , 27935 ns
  Mailbox synchronous get                      :23392 cycles , 23391 ns
  Mailbox asynchronous put                     :11808 cycles , 11807 ns
  Mailbox get without context switch           :20416 cycles , 20415 ns
  Drop to user mode                            :643712 cycles , 643711 ns
  User thread creation                         :652096 cycles , 652095 ns
  Syscall overhead                             :2720 cycles ,  2719 ns
  Validation overhead k_object init            :4256 cycles ,  4255 ns
  Validation overhead k_object permission      :4224 cycles ,  4223 ns
  Time Measurement finished

After:
  START - Time Measurement
  Timing results: Clock frequency: 1000 MHz
  Context switch                               : 896 cycles ,   895 ns
  Interrupt latency                            : 768 cycles ,   767 ns
  Tick overhead                                :14752 cycles , 14751 ns
  Thread creation                              :18464 cycles , 18463 ns
  Thread abort (non-running)                   :48992 cycles , 48991 ns
  Thread abort (_current)                      :55552 cycles , 55551 ns
  Thread suspend                               :10848 cycles , 10847 ns
  Thread resume                                :10048 cycles , 10047 ns
  Thread yield                                 :12213 cycles , 12212 ns
  Thread sleep                                 :17984 cycles , 17983 ns
  Heap malloc                                  :21702 cycles , 21701 ns
  Heap free                                    :15176 cycles , 15175 ns
  Semaphore take with context switch           :19104 cycles , 19103 ns
  Semaphore give with context switch           :18368 cycles , 18367 ns
  Semaphore take without context switch        :1984 cycles ,  1983 ns
  Semaphore give without context switch        :4480 cycles ,  4479 ns
  Mutex lock                                   :1728 cycles ,  1727 ns
  Mutex unlock                                 :7712 cycles ,  7711 ns
  Message queue put with context switch        :20224 cycles , 20223 ns
  Message queue put without context switch     :5568 cycles ,  5567 ns
  Message queue get with context switch        :22016 cycles , 22015 ns
  Message queue get without context switch     :5088 cycles ,  5087 ns
  Mailbox synchronous put                      :27840 cycles , 27839 ns
  Mailbox synchronous get                      :23296 cycles , 23295 ns
  Mailbox asynchronous put                     :11584 cycles , 11583 ns
  Mailbox get without context switch           :20192 cycles , 20191 ns
  Drop to user mode                            :643616 cycles , 643615 ns
  User thread creation                         :651872 cycles , 651871 ns
  Syscall overhead                             :2464 cycles ,  2463 ns
  Validation overhead k_object init            :4032 cycles ,  4031 ns
  Validation overhead k_object permission      :4000 cycles ,  3999 ns
  Time Measurement finished
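
A minimal sketch of the "=A" form, assuming a 32-bit (i386) build;
the function name is illustrative. The "=A" constraint tells GCC the
64-bit result lives in the EDX:EAX register pair, which is exactly
where RDTSC writes it, so no separate 32-bit outputs or manual
shift/or are needed:

  #include <stdint.h>

  static inline uint64_t read_tsc(void)
  {
          uint64_t tsc;

          /* on x86-32, "=A" binds the output to EDX:EAX */
          __asm__ volatile ("rdtsc" : "=A" (tsc));
          return tsc;
  }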

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-05 13:28:38 -05:00
Daniel Leung
c234821861 x86: use LFENCE instead of CPUID before reading TSC for x86_64
According to Intel 64 and IA-32 Architectures Software
Developer’s Manual, volume 3, chapter 8.2.5, LFENCE provides
a more efficient method of controlling memory ordering than
the CPUID instruction. So use LFENCE here, as all 64-bit
CPUs have LFENCE.
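
A hedged sketch of the resulting pattern on x86_64 (the function name
is illustrative): LFENCE does not execute until all prior instructions
have completed locally, which gives RDTSC the ordering guarantee
previously obtained from CPUID, at much lower cost:

  #include <stdint.h>

  static inline uint64_t read_tsc_serialized(void)
  {
          uint32_t hi, lo;

          __asm__ volatile ("lfence; rdtsc" : "=d" (hi), "=a" (lo));
          return ((uint64_t)hi << 32) | lo;
  }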

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-05 13:28:38 -05:00
Eugeniy Paltsev
05b6468a73 ARC: linker: add more room for optimization
Do not force the linker to place text sections one after another,
so it has more freedom to optimize.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-05 10:22:56 -05:00
Wayne Ren
cc897a5198 ARC: add linker script template for MetaWare toolchain
Add linker script template for MWDT toolchain (linker-mwdt.ld)
Move linker.ld to linker-gnu.ld (without changes)

The "linker.ld" is wraper now.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-09-05 10:22:56 -05:00
Eugeniy Paltsev
0227056066 ARC: change direct IRQ declaration for metaware toolchain
* change direct IRQ declaration for metaware toolchain
* drop unused irq-related definitions

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-05 10:22:56 -05:00
Eugeniy Paltsev
932e178007 ARC: use MWDT intrinsics to access aux regs in case of MWDT toolchain
MetaWare doesn't support GCC's builtins, so use the corresponding
intrinsics instead.
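
A sketch of the resulting split, assuming the standard toolchain
names: __CCAC__ is MetaWare's predefined macro, _lr()/_sr() are the
MWDT intrinsics, and __builtin_arc_lr()/__builtin_arc_sr() are the
GCC builtins for the LR/SR aux-register instructions:

  static inline uint32_t arc_aux_read(uint32_t aux)
  {
  #ifdef __CCAC__
          return _lr(aux);                /* MWDT intrinsic */
  #else
          return __builtin_arc_lr(aux);   /* GCC builtin */
  #endif
  }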

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-05 10:22:56 -05:00
Wayne Ren
ef224ce1cd ARC: make the assembly code compatible
Make the assembly code compatible with both the GNU
and MetaWare toolchains.

* replace ".balign" with ".align"
  ".align" assembler directive is supposed by all
  ARC toolchains and it is implemented in a same
  way across ARC toolchains.
* replace "mov_s __certain_reg" with "mov __certain_reg"
  Even though GCC encodes those mnemonics and even real
  HW executes them according to PRM these are restricted
  ones for mov_s and CCAC rightfully refuses to accept
  such mnemonics. So for compatibility and clarity sake
  we switch to 32-bit mov instruction which allows use
  of all those instructions.
* Add "%%" prefix while accessing registers from inline
  ASM as it is required by MWDT.
* Drop "@" prefix while accessing symbols (defined in C
  code) from ASM code as it is required by MWDT.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-09-05 10:22:56 -05:00
Eugeniy Paltsev
596cd869c3 ARC: sys_io: rewrite to C code
Replace the ASM sys_io implementation with identical C code for ARC.
This significantly improves portability: the compiler decides which
instructions to use for a particular CPU and/or configuration.
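
The C replacements follow the usual volatile-access pattern; a
minimal sketch matching the documented sys_read32()/sys_write32()
semantics (mem_addr_t is Zephyr's address typedef):

  static inline uint32_t sys_read32(mem_addr_t addr)
  {
          return *(volatile uint32_t *)addr;
  }

  static inline void sys_write32(uint32_t data, mem_addr_t addr)
  {
          *(volatile uint32_t *)addr = data;
  }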

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-05 10:22:56 -05:00
Wayne Ren
d67475ab6e ARC: handle the difference of assembly macro definition
The GNU and MWDT (MetaWare) toolchains have different styles for
accessing arguments in assembly macros. Implement a preprocessor
macro to handle the difference.

Make all ASM macros in swap_macros.h compatible with both ARC
toolchains.
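
A hypothetical sketch of such a shim (the real macro name in the tree
may differ): GNU as refers to assembly macro arguments as \name,
while the MWDT assembler uses the bare name, so a preprocessor
wrapper can hide the difference in files that pass through the C
preprocessor:

  #ifdef __CCAC__                 /* MWDT (MetaWare) toolchain */
  #define MACRO_ARG(arg) arg
  #else                           /* GNU toolchain */
  #define MACRO_ARG(arg) \arg
  #endif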

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-05 10:22:56 -05:00
Carlo Caione
df4aa230c8 arch: arm64: Use _arch_switch() API
Switch to the _arch_switch() API that is required for an SMP-aware
scheduler instead of using the old arch_swap mechanism.

SMP is not supported yet but this is a necessary step in that direction.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-09-05 12:06:38 +02:00
Flavio Ceolin
5408f3102d debug: x86: Add gdbstub for X86
It implements the GDB remote protocol to talk with a host GDB during
the debug session. The implementation is divided into three layers:

1 - The top layer that is responsible for the gdb remote protocol.
2 - An architecture-specific layer responsible for reading/writing
    registers, setting breakpoints, and handling exceptions, ...
3 - A transport layer to be used to communicate with the host

Communication with GDB on the host is synchronous: the system stops
execution, waits for instructions, and resumes execution after a
"continue" or "step" command. The one exception in the protocol is a
packet the host can send to cause an interruption, usually triggered
by Ctrl-C. This implementation ignores that packet, though.

This initial work supports only X86, using UART as the backend.
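
For reference, layer 1 frames every GDB Remote Serial Protocol packet
as "$<payload>#<checksum>", where the checksum is the modulo-256 sum
of the payload bytes, printed as two hex digits. A toy illustration
(not the actual stub code):

  #include <stdio.h>

  static void gdb_send_packet(const char *payload)
  {
          unsigned int sum = 0;

          for (const char *p = payload; *p != '\0'; p++) {
                  sum = (sum + (unsigned char)*p) & 0xff;
          }
          printf("$%s#%02x", payload, sum);   /* e.g. "$OK#9a" */
  }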

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-09-02 20:54:57 -04:00
Peter Bigot
039e3edcda Revert "posix: linker: Wrap rodata and rwdata in sections."
This reverts commit b51eeb03f4.

The linker script is now putting read-only material in writable
segments, which causes glibc with -D_FORTIFY_SOURCE=2 to abort.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-09-02 14:46:01 -04:00
Ioannis Glaropoulos
4ec7725110 arch: arm: cortex-m: Modify ARM-only API for IRQ target state mgmt
We modify the ARM Cortex-M-only API for managing the
security target state of the NVIC IRQs. We remove the
internal ASSERT checking, allowing the API to be called
for non-implemented NVIC IRQ lines. However, we still give
the user the option to check the success of the IRQ target
state setting operation by having the API function return
the resulting target state.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-02 15:01:30 +02:00
Tomasz Bursztyka
93cd336204 arch: Apply dynamic IRQ API change
Switch to a constant parameter.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka
6df8b3995e irq: Change dynamic API to take a constant parameter
All ISRs are meant to take a const struct device pointer, but to
simplify the change let's just make the parameter constant, and that
should be fine.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka
84942e4fbc irq: Change offload API to take a constant parameter
All ISRs are meant to take a const struct device pointer, but to
simplify the change let's just make the parameter constant, and that
should be fine.

Fixes #27399
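
A sketch of the resulting offload signatures after the const change
(paraphrased from the irq_offload API; shown only to illustrate the
constification):

  typedef void (*irq_offload_routine_t)(const void *parameter);

  void irq_offload(irq_offload_routine_t routine, const void *parameter);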

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Eugeniy Paltsev
7547b44441 ARC: use generic bitops implementation
There is no need for a custom, partially-ASM bitops implementation
for ARC; we can use the generic one.
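
The generic implementation is a plain C read-modify-write through a
volatile pointer; a sketch in the shape of the common sys_set_bit()
(mem_addr_t is Zephyr's address typedef):

  static inline void sys_set_bit(mem_addr_t addr, unsigned int bit)
  {
          uint32_t temp = *(volatile uint32_t *)addr;

          *(volatile uint32_t *)addr = temp | (1U << bit);
  }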

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-01 13:36:48 +02:00
Eugeniy Paltsev
874d9426b3 ARM: aarch64: use generic bitops implementation
aarch64 has a bitops implementation fully identical to the generic
one, so drop the redundant code.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-01 13:36:48 +02:00
Eugeniy Paltsev
c45fd74ab4 ARM: aarch32: cortex_a_r: use generic bitops implementation
aarch32/cortex_a_r has a bitops implementation fully
identical to the generic one, so drop the redundant code.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-01 13:36:48 +02:00
Eugeniy Paltsev
ebe51e26cb ARCH: COMMON: split sys_io.h for MMIO & memory bits functions
As of today, 'include/arch/common/sys_io.h' has a generic
implementation of the MMIO accessors and memory bits manipulation
functions. That leads several architectures, like ARC, ARM/aarch64,
and ARM/aarch32/cortex_a_r, to redefine the entire 'common/sys_io.h'
even if they only differ in the MMIO accessor implementation.

So split 'include/arch/common/sys_io.h' into
 * sys_io.h - generic MMIO accessors
 * sys_bitops.h - generic memory bits manipulation functions

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-01 13:36:48 +02:00
Chris Coleman
99a268fa16 arch: arm: Collect full register state in Cortex-M Exception Stack Frame
To debug hard-to-reproduce faults/panics, it's helpful to get the full
register state at the time a fault occurred. This enables recovering
full backtraces and the state of local variables at the time of a
crash.

This PR introduces a new Kconfig option, CONFIG_EXTRA_EXCEPTION_INFO,
to facilitate this use case. The option enables the capturing of the
callee-saved register state (r4-r11 & exc_return) during a fault. The
info is forwarded to `k_sys_fatal_error_handler` in the z_arch_esf_t
parameter. From there, the data can be saved for post-mortem analysis.

To test the functionality, a new unit test was added to
tests/arch/arm_interrupt which verifies that the register contents
passed in the argument match the state leading up to the crash.
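
A hedged sketch of consuming the new data from an application hook;
the esf field layout is defined by the arch headers, and
save_crash_state() is a hypothetical helper:

  void k_sys_fatal_error_handler(unsigned int reason,
                                 const z_arch_esf_t *esf)
  {
          if (IS_ENABLED(CONFIG_EXTRA_EXCEPTION_INFO) && esf != NULL) {
                  /* esf now carries callee-saved r4-r11 and
                   * exc_return; stash them for post-mortem analysis
                   */
                  save_crash_state(esf);
          }
  }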

Signed-off-by: Chris Coleman <chris@memfault.com>
2020-08-31 10:13:27 +02:00
Andrew Boie
38e17b68e3 x86: paging code rewrite
The x86 paging code has been rewritten to support another paging mode
and non-identity virtual mappings.

 - Paging code now uses an array of paging level characteristics and
   walks tables using for loops. This is opposed to having different
   functions for every paging level and lots of #ifdefs. The code is
   now more concise and adding new paging modes should be trivial.

 - We now support 32-bit, PAE, and IA-32e page tables.

 - The page tables created by gen_mmu.py are now installed at early
   boot. There are no longer separate "flat" page tables. These tables
   are mutable at any time.

 - The x86_mmu code now has a private header. Many definitions that did
   not need to be in public scope have been moved out of mmustructs.h
   and either placed in the C file or in the private header.

 - Improvements to dumping page table information, with the physical
   mapping and flags all shown

 - arch_mem_map() implemented

 - x86 userspace/memory domain code ported to use the new
   infrastructure.

 - add logic for physical -> virtual instruction pointer transition,
   including cleaning up identity mappings after this takes place.
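
A schematic of the table-driven walk (not the actual Zephyr code):
each paging level is described by data, and one loop replaces the
per-level functions:

  struct paging_level {
          unsigned int shift;    /* VA bits consumed below this level */
          unsigned int entries;  /* entries per table at this level */
  };

  /* IA-32e example: PML4, PDPT, PD, PT */
  static const struct paging_level levels[] = {
          { 39, 512 }, { 30, 512 }, { 21, 512 }, { 12, 512 },
  };

  static uint64_t *walk(uint64_t *table, uintptr_t virt)
  {
          for (int i = 0; i < 4; i++) {
                  int idx = (virt >> levels[i].shift) &
                            (levels[i].entries - 1);

                  if (i == 3) {
                          return &table[idx];     /* leaf PTE */
                  }
                  /* flag bits masked off for simplicity */
                  table = (uint64_t *)(table[idx] & ~0xfffULL);
          }
          return NULL;
  }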

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Daniel Leung
181d07321f coredump: add support for ARM Cortex-M
This adds the necessary bits in arch code, and Python scripts
to enable coredump support for ARM Cortex-M.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Daniel Leung
8fbb14ef50 coredump: add support for x86 and x86_64
This adds the necessary bits to enable coredump for x86
and x86_64.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Anas Nashif
a45c403c56 tracing: arc: depend on CONFIG_TRACING_ISR for ISRs
Use CONFIG_TRACING_ISR to exclude tracing ISRs just like other
architectures.

Also, z_sys_trace_isr_exit was not defined (it was renamed some time
ago and this was forgotten).

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Andrew Boie
6f32dd7110 x86: add missing memory domain checks
These macros are used to validate that no regions are
programmed to allow both write and execute access.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-20 13:58:54 -04:00
Henrik Brix Andersen
e7f51fa918 arch: arm: aarch32: add support for Cortex-M1
Add support for the ARM Cortex-M1 CPU.

Signed-off-by: Henrik Brix Andersen <henrik@brixandersen.dk>
2020-08-14 13:35:39 -05:00
Pete Johanson
b51eeb03f4 posix: linker: Wrap rodata and rwdata in sections.
* zephyr_linker_sources states that included RODATA and RWDATA
  will be wrapped in an outer section, so add these sections
  for the posix arch too.

Signed-off-by: Pete Johanson <peter@peterjohanson.com>
2020-08-13 11:41:39 +02:00
Andrew Boie
8ce260d8df kernel: introduce supervisor-only stacks
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.

Two new arch defines are introduced:

- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN

New public declaration macros:

- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF

If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.

Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
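
Typical usage of the new declarations, with an illustrative name and
size:

  /* stack usable only by supervisor-mode and interrupt contexts */
  K_KERNEL_STACK_DEFINE(my_irq_stack, 2048);

  size_t usable = K_KERNEL_STACK_SIZEOF(my_irq_stack);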

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
59a75dd3a8 arches: centralize noinit linker definitions
This had been copy-pasted between linker scripts; create
a central header for it.

The linker scripts for xtensa and posix have very different
structure and have been left alone.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
b0c155f3ca kernel: overhaul stack specification
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations;
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.
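
The reworked hook then looks roughly like this (paraphrased; the
exact parameter list in the tree may differ in detail):

  void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
                       char *stack_ptr, k_thread_entry_t entry,
                       void *p1, void *p2, void *p3);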

thread->stack_info is now set before arch_new_thread()
is invoked; z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.

thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.

CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.

thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.

Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Eugeniy Paltsev
3f544ca5b2 ARC: replace NOP ASM inlines with builtin
The NOP instruction is available via a builtin for ARC, so get rid
of all ASM inlines with NOP/NOP_S instructions.
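
The replacement pattern, assuming GCC's ARC builtin set:

  static inline void arch_nop(void)
  {
          __builtin_arc_nop();    /* was: __asm__ volatile ("nop_s") */
  }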

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-07-24 12:05:12 +02:00
Eugeniy Paltsev
79a55ae7a4 ARC: use generic lsb/msb implementation instead of ASM code
Zephyr has generic find_msb_set and find_lsb_set implementations.
They are based on builtins, so they are sufficiently optimized.
Drop the custom ASM implementation.
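
The generic versions reduce to single builtins; a sketch matching the
documented semantics (1-based bit position, 0 when no bits are set):

  static inline unsigned int find_lsb_set(uint32_t op)
  {
          return __builtin_ffs(op);
  }

  static inline unsigned int find_msb_set(uint32_t op)
  {
          return (op == 0) ? 0 : (32 - __builtin_clz(op));
  }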

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-07-24 12:05:12 +02:00
Eugeniy Paltsev
3eee762e08 ARC: NSIM: switch to ns16550 UART model
Switch nSIM from the custom ARC UART to the ns16550 model. This
will allow us to use Zephyr images built for nSIM on other platforms
like HAPS, QEMU, etc.

This patch does the following:
 * switch the nSIM board to ns16550 UART usage
 * change the nSIM simulator configuration to use the ns16550 UART model
 * drop checks for CONFIG_UART_NSIM in ARC code
 * update the nSIM documentation

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-07-20 13:34:34 -04:00
Aastha Grover
97ecad69f0 include: Implement APIs for cache flush and cache invalidate
arch: arc: core: Add cache implementation function & prototype for ARC

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-07-15 15:53:26 -07:00
Sandeep Tripathy
8aa0248acd drivers: interrupt_controller: Add gicv3 SGI api
Add an API to raise an SGI to target cores at the affinity level
identified by MPIDR. Currently only EL1S is supported.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-07-01 08:02:57 -04:00
Sandeep Tripathy
500451ad67 arch: arm64: Add MPIDR related defines
Add macros to access MPIDR and extract affinity levels.
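
A sketch of such defines (names illustrative; the field offsets
follow the ARMv8 MPIDR_EL1 layout, with Aff0-Aff2 in bits 0-23 and
Aff3 in bits 32-39):

  #define MPIDR_AFF_MASK          0xffULL
  #define MPIDR_AFF_SHIFT(lvl)    ((lvl) == 3 ? 32U : (lvl) * 8U)
  #define MPIDR_AFFLVL(mpidr, lvl) \
          (((mpidr) >> MPIDR_AFF_SHIFT(lvl)) & MPIDR_AFF_MASK)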

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-07-01 08:02:57 -04:00
Andy Ross
7fe8caebc0 arch/x86: Add z_acpi_find_table(), MCFG support
The existing minimal ACPI implementation was enough to find the MADT
table for dumping CPU info.  Enhance it with a slightly less minimal
implementation that can fetch any table, supports the ACPI 2.0 XSDT
directory (technically required on 64 bit systems so tables can live
>4G) and provides definitions for the MCFG table with the PCI
configuration pointers.

Note that there is no use case right now for high performance table
searching, so the "init" step has been removed and tables are probed
independently from scratch for each one requested (there are only
two).

Note also that the memory to which these tables point is not
understood by the Zephyr MMU configuration, so in long mode all ACPI
calls have to be done very early, before z_x86_paging_init() (or on a
build with the MMU initialization disabled).
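
One detail any such table finder relies on: an ACPI table is valid
only if all of its bytes, including the checksum field, sum to zero
modulo 256. A small sketch:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  static bool acpi_checksum_ok(const void *table, size_t len)
  {
          const uint8_t *p = table;
          uint8_t sum = 0;

          while (len--) {
                  sum += *p++;
          }
          return sum == 0;
  }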

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-06-23 13:07:39 +02:00
Andrew Boie
87dd0492db x86: add CONFIG_X86_KERNEL_OFFSET
Previously, DTS specification of physical RAM bounds did not
correspond to the actual bounds of system RAM as the first
megabyte was being skipped.

There were reasons for this - the first 1MB on PC-like systems
is a no-man's-land of reserved memory regions, but we need DTS
to accurately capture physical memory bounds.

Instead, we introduce a config option which can apply an offset
to the beginning of physical memory, and apply this to the "RAM"
region defined in the linker scripts.

This also fixes a problem where an extra megabyte was being
added to the size of system RAM.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-18 19:35:52 +02:00
Kumar Gala
9ca541a504 linker: Remove deprecated Kconfig options related to linker scripts
Remove Kconfig, linker script, and related bits associated with
CUSTOM_RODATA_LD, CUSTOM_RWDATA_LD, CUSTOM_SECTIONS_LD,
SOC_NOINIT_LD, SOC_RODATA_LD, and SOC_RWDATA_LD options that have been
deprecated since Zephyr 2.2.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-12 11:14:28 +02:00
Peter Bigot
4bc1ca9e7e arch: x86: add extern "C" to arch.h
Without this, builds for qemu_x86 can't invoke k_cycle_get_32(),
because z_timer_cycle_get_32() is installed with a mangled name.
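
The fix is the standard guard pattern:

  #ifdef __cplusplus
  extern "C" {
  #endif

  /* ... declarations such as z_timer_cycle_get_32() ... */

  #ifdef __cplusplus
  }
  #endif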

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-06-09 17:45:09 -04:00
Stephanos Ioannidis
6098f099ca arch: arm: Remove header shims
This commit removes the header shims introduced after the AArch32/64
re-organisation in commit d048faacf2.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-06-09 10:38:36 +02:00
Kumar Gala
a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
	xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
git grep -l 's\(8\|16\|32\|64\)_t' | \
	xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Andrew Boie
32ebe3cd24 x86: 32-bit: allow direct ISRs only with !KPTI
This doesn't work as expected with kernel page table isolation
turned on, and fixing it would likely lose any latency benefits
that direct ISRs are supposed to provide.

For now, just prevent these macros from being defined if KPTI
is turned on, like other arches that do not implement this
feature.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-29 10:46:00 +02:00
Daniel Leung
37516a7818 x86: add ability for SoC to add MMU regions
SoCs usually have devices that are accessed through MMIO.
This requires the corresponding regions to be marked readable
and writable in the MMU, or else accesses will result in page
faults.

This adds a function which can be implemented in SoC code to
specify those pages to be added to the MMU.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-05-19 19:19:51 +02:00
Daniel Leung
06e86b68e7 x86: intel64: add missing _locore_end to linker script
_locore_end is missing from the linker script, so add it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-05-19 19:19:51 +02:00
Sandeep Tripathy
17fcaa3fc4 drivers: timer: get ppi configuration from dt
Use device-tree-provided configurations for the ARM architecture
timer PPIs.

This fixes the issue of the timer PPI not working on most hardware,
where edge-triggered PPIs are not supported.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-05-18 11:07:57 +02:00
Andrew Boie
d149909b03 x86: properly align initial dummy thread
x86-32 thread objects require special alignment since they
contain a buffer that is passed to fxsave/fxrstor instructions.
This fell over if the dummy thread is created in a stack frame.

Implement a custom swap to main for x86 which still uses a
dummy thread, but in an unused part of the interrupt stack
with proper alignment.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-13 21:23:52 +02:00
Martí Bolívar
6e8775ff84 devicetree: remove DT_HAS_NODE_STATUS_OKAY
Several reviewers agreed that DT_HAS_NODE_STATUS_OKAY(...) was an
undesirable API for the following reasons:

- it's inconsistent with the rest of the DT_NODE_HAS_FOO names
- DT_NODE_HAS_FOO_BAR_BAZ(node) was agreed upon as a shorthand
  for macros which are equivalent to
  DT_NODE_HAS_FOO(node) && DT_NODE_HAS_BAR(node) &&
  DT_NODE_HAS_BAZ(node), and DT_HAS_NODE_STATUS_OKAY is an odd duck
- DT_NODE_HAS_STATUS(..., okay) was viewed as more readable anyway
- it is seen as a somewhat aesthetically challenged name

Replace all users with DT_NODE_HAS_STATUS(..., okay), which is
semantically equivalent.
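
The replacement reads, for example (node label illustrative):

  #if DT_NODE_HAS_STATUS(DT_NODELABEL(uart0), okay)
  /* ... code enabled only when uart0's status is "okay" ... */
  #endif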

This is mostly done with sed, but a few remaining cases were done by
hand, along with whitespace, docs, and comment changes. These special
cases include the Nordic SOC static assert files.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2020-05-13 18:24:42 +02:00