Commit Graph

258 Commits

Author SHA1 Message Date
Ioannis Glaropoulos
a6cb8b06db kernel: introduce k_float_disable system call
We introduce the k_float_disable() system call, to allow threads to
disable floating point context preservation. The system call is
to be used in FP Sharing Registers mode (CONFIG_FP_SHARING=y).
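
A minimal usage sketch (the thread entry function and error handling
are illustrative, not part of this change; assumes <kernel.h>):

    void worker(void *p1, void *p2, void *p3)
    {
            ARG_UNUSED(p1);
            ARG_UNUSED(p2);
            ARG_UNUSED(p3);

            /* This thread does no FP math, so stop preserving its
             * floating point context across context switches.
             */
            if (k_float_disable(k_current_get()) != 0) {
                    /* e.g. not supported on this arch/configuration */
            }
    }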

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-12 09:17:45 -07:00
Andy Ross
a12f2d6666 kernel/smp: Rename smp_init()
This name collides with one in the bt subsystem, and wasn't named in
proper zephyrese anyway.

Fixes #16604

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-06-05 17:15:55 -04:00
Nicolas Pitre
58d839bc3c misc: memory address type conversions
The uintptr_t type is more appropriate to represent memory addresses
than u32_t.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-03 21:14:57 -04:00
Jakob Olesen
c8708d9bf3 misc: Replace uses of __builtin_*_overflow() with <misc/math_extras.h>.
Use the new math_extras functions instead of calling builtins directly.

Change a few local variables to size_t after checking that all uses of
the variable actually expects a size_t.

Signed-off-by: Jakob Olesen <jolesen@fb.com>
2019-05-14 19:53:30 -05:00
Benoit Leforestier
9915b4ec4e C++: Fix compilation error "invalid conversion"
When some headers are included in a C++ source file, this kind of
compilation error is generated:
error: invalid conversion from 'void*'
	to 'u32_t*' {aka 'unsigned int*'} [-fpermissive]

Signed-off-by: Benoit Leforestier <benoit.leforestier@gmail.com>
2019-05-03 14:27:07 -04:00
Ioannis Glaropoulos
873dd10ea4 kernel: mem_domain: update name/doc of API function for partition add
Update the name of the mem-domain API function for adding a
partition so that it complies with the 'z_' prefix convention.
Correct the function documentation.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-05-02 11:37:38 -04:00
Flavio Ceolin
4f99a38b06 arch: all: Remove unused struct _caller_saved
The struct _caller_saved is not used. Most architectures automatically
put the registers onto the stack; in the other architectures the
exception code does it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Flavio Ceolin
d61c679d43 arch: all: Remove legacy code
The struct _kernel_ach exists only because ARC's port needed it; in
all other ports it was defined as an empty struct. It turns out that
this struct is not required even for ARC anymore; it is legacy
code from nanokernel times.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Patrik Flykt
7c0a245d32 arch: Rename reserved function names
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-04-03 17:31:00 -04:00
Andrew Boie
526807c33b userspace: add const qualifiers to user copy fns
The source data is never modified.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-29 22:21:16 -04:00
Patrik Flykt
21358baa72 all: Update unsigned 'U' suffix due to multiplication
As the multiplication rule is updated, new unsigned suffixes
are added in the code.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Patrik Flykt
24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.
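
A representative (hypothetical) example of the pattern:

    #define RX_BUF_COUNT 4U

    u32_t total = 8U * RX_BUF_COUNT;

    if (idx < total - 1U) {
            /* every operand is unsigned; no implicit signed math */
    }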

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Flavio Ceolin
2df02cc8db kernel: Make if/iteration evaluate boolean operands
The controlling expression of if and iteration statements must have a
boolean type.

MISRA-C rule 14.4
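
A representative (hypothetical) before/after:

    /* Before: integer used directly as the controlling expression */
    while (count) {
            count--;
    }

    /* After: explicitly boolean controlling expression */
    while (count > 0) {
            count--;
    }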

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
Flavio Ceolin
2ecc7cfa55 kernel: Make _is_thread_prevented_from_running return a bool
This function was returning an essentially boolean value. Just changing
the signature to return a bool.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
Andy Ross
e59d19628d kernel/sched: Rework prio validity assertion
This is throwing errors in static analysis, complaining that a
priority cannot compare as both higher and lower.  That is wrong to my
eyes (I swear I think it might be cueing off the names of the
functions, which invert "higher" and "lower" to match our reversed
priority numbers).

But frankly this was never a very readable macro to begin with.
Refactor to put the bounds into the term, so the static analyzer can
prove it locally, and add a build assertion to catch any errors (there
are none currently) where the low<->high priority range is invalid.

Long term, we should probably remove this macro, it doesn't provide
much value.  But removing it in response to a static analysis failure
is... not very responsible as a development practice.

Fixes #14816
Fixes #14820

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-23 09:53:55 -05:00
Andrew Boie
7ea211256e userspace: properly namespace handler functions
Now prefixed with z_hdlr_ instead of just hdlr_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:23:11 -07:00
Andrew Boie
50be938be5 userspace: renamespace some internal macros
These private macros are now all prefixed with Z_.

Fixes: #14447

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:23:11 -07:00
Ioannis Glaropoulos
cac20e91d8 kernel: userspace: correct documentation for Z_SYSCALL_MEMORY_ macros
Corrections in the documentation of arguments in
Z_SYSCALL_MEMORY, Z_SYSCALL_MEMORY_READ, and
Z_SYSCALL_MEMORY_WRITE macros.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-13 15:36:15 -07:00
Ioannis Glaropoulos
c686dd5064 kernel: enhance documentation of z_arch_buffer_validate
This commit enhances the documentation of z_arch_buffer_validate
describing the cases where the validation is performed
successfully, as well as the cases where the result is
undefined.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-13 15:36:15 -07:00
Andy Ross
42ed12a387 kernel/sched: arch/x86_64: Support synchronous k_thread_abort() in SMP
Currently thread abort doesn't work if a thread is currently scheduled
on a different CPU, because we have no way of delivering an interrupt
to the other CPU to force the issue.  This patch adds a simple
framework for an architecture to provide such an IPI, implements it
for x86_64, and uses it to implement a spin loop in abort for the case
where a thread is currently scheduled elsewhere.

On SMP architectures (xtensa) where no such IPI is implemented, we
fall back to waiting on an arbitrary interrupt to occur.  This "works"
for typical code (and all current tests), but of course it cannot be
guaranteed on such an architecture that k_thread_abort() will return
in finite time (e.g. the other thread on the other CPU might have
taken a spinlock and entered an infinite loop, so it will never
receive an interrupt to terminate itself)!

On non-SMP architectures this patch changes no code paths at all.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Patrik Flykt
4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names would collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Flavio Ceolin
d9876be30c kernel: Make statements evaluate boolean expressions
MISRA-C requires that the controlling expression of an if statement
have an essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-05 14:58:58 -08:00
Andrew Boie
62fad96802 userspace: zero app memory bss earlier
Some init tasks may use some bss app memory areas and
expect them to be zeroed out. Do this much earlier
in the boot process, before any of the init tasks
run.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-05 08:27:20 -05:00
Andrew Boie
6dc3fd8e50 userspace: fix x86 issue with adding partitions
On x86, if a supervisor thread belonging to a memory domain
adds a new partition to that domain, subsequent context switches
to another thread in the same domain, or dropping itself to user
mode, do not set up the page tables correctly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-03 23:44:13 -05:00
Andrew Boie
f5951cd88f kernel: syscall_handler: get rid of stdarg
We can just implement this as a macro and not needlessly
run afoul of MISRA-C rule 17.1.
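
A hedged sketch of the pattern (macro name and message format are
hypothetical, not the actual Zephyr macro): a variadic macro forwards
its arguments straight to printk(), so no va_list is involved:

    #define VERIFY_MSG(expr, fmt, ...) \
            ({ \
                    bool ok = (expr); \
                    if (!ok) { \
                            printk("FAIL: " fmt "\n", ##__VA_ARGS__); \
                    } \
                    !ok; /* nonzero means the check failed */ \
            })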

Fixes: #10012

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-25 13:42:03 -08:00
Andrew Boie
2cfeba8507 x86: implement interrupt stack trampoline
Upon hard/soft irq or exception entry/exit, handle transitions
off or onto the trampoline stack, which is the only stack that
can be used on the kernel side when the shadow page table
is active. We swap page tables when on this stack.

Adjustments to page tables are now as follows:

- Any adjustments for stack memory access now are always done
  to the user page tables

- Any adjustments for memory domains are now always done to
  the user page tables

- With KPTI, resetting a page now clears the present bit

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Andy Ross
1bf9bd04b1 kernel: Add _unlocked() variant to context switch primitives
These functions, for good design reason, take a locking key to
atomically release along with the context switch.  But there's still a
common pattern in code to do a switch unconditionally by passing
irq_lock() directly.  On SMP that's a little hurtful as it spams the
global lock.  Provide an _unlocked() variant for
_Swap/_reschedule/_pend_curr for simplicity and efficiency.
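
A hedged call-site sketch (exact function spellings assumed from the
names above):

    /* Before: takes the global lock just to hand it back */
    _pend_curr_irqlock(irq_lock(), &wait_q, timeout);

    /* After: the _unlocked() variant handles the locking itself */
    _pend_curr_unlocked(&wait_q, timeout);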

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.
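
A hedged sketch of the two call forms (argument types assumed):

    static struct k_spinlock my_lock;

    /* Legacy: atomically release an irqlock key and switch */
    unsigned int key = irq_lock();
    _Swap_irqlock(key);

    /* New: atomically release a spinlock (lock plus key) and switch */
    k_spinlock_key_t sk = k_spin_lock(&my_lock);
    _Swap(&my_lock, sk);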

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andrew Boie
4b4f773484 libc: set up memory partitions
* Newlib now defines a special z_newlib_partition containing
  all globals relevant to newlib. Most of these are in libc.a
  with a heap tracking variable in newlib's hooks.

* Both C libraries now expose a k_mem_partition containing the
  bounds of the malloc heap arena. Threads that want to use
  libc malloc() will need to add this to their memory domain.

* z_newlib_get_heap_bounds has been removed, in favor of the
  memory partition for the heap arena

* ztest now includes the C library partitions in its memory
  domain.

* The mem_alloc test now runs in user mode to prove that this
  all works for both C libraries.
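
A hedged usage sketch (the minimal-libc partition symbol is an
assumption; z_newlib_partition is named above):

    extern struct k_mem_partition z_malloc_partition;

    struct k_mem_domain app_domain;
    struct k_mem_partition *parts[] = { &z_malloc_partition };

    /* Grant this thread access to the malloc heap arena */
    k_mem_domain_init(&app_domain, ARRAY_SIZE(parts), parts);
    k_mem_domain_add_thread(&app_domain, k_current_get());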

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andy Ross
b2791b0ac8 kernel/sched: Force inlining of some routines within the scheduler guts
GCC 6.2.0 is making frustratingly poor inlining decisions with some of
these routines, resulting in an awful lot of runtime calls for code
that is only ever expanded once or twice within the file.

Treat with targeted ALWAYS_INLINEs to force the issue.  The
scheduler code is a hot path.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
Peter A. Bigot
b4ece0ad44 kernel: timeout: detect inactive timeouts using dnode linked state
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state.  This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.

Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.

Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.
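
A hedged sketch of the resulting test (helper and field names
assumed; sys_dnode_is_linked() is the generic capability referenced
in the companion commit):

    static inline bool timeout_is_inactive(struct _timeout *t)
    {
            /* linked into the timeout queue <=> still active */
            return !sys_dnode_is_linked(&t->node);
    }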

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Peter A. Bigot
25fbe7b60d kernel: timeout: remove local fix for double-remove
Use the new generic capability to detect unlinked sys_dnode_t instances.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Andy Ross
762ff2f428 kernel/swap: Simplify/robustify return value handling
The call to _arch_switch is a giant screaming sign inviting optimizer
bugs.  The code that appears before is what happened long ago when we
were switched out, but the version that EXECUTED just now is actually
in a different thread.  So the assignment to _current before the
switch actually assigned OUR thread (the "new_thread" of the old
context!) to _current.

But obviously the optimizer looks at that code and assumes that the
_current which got assigned to the thread we were switching to long
ago is still correct, and used it when retrieving the swap return
value.

Obviously the real bug here is that the _arch_switch() in question
lacked a memory clobber (and it's getting one).

But we can remove two lines, remove code from inside the interrupt
lock and make the implementation more robust by moving the read to
after the irq_unlock() (which generally also has a memory clobber).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Flavio Ceolin
76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the controlling expression of an if statement
have an essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Andrew Boie
74f114caef userspace: easy checking for specific driver
In general driver system calls are implemented at a subsystem
layer. However, some drivers may have capabilities specific to
the hardware not covered by the subsystem API. Such drivers may
want to define their own system calls.

This macro makes it simple to validate in the driver-specific
system call handlers that not only does the untrusted device
pointer correspond to the expected subsystem, initialization
state, and caller permissions, but also that the device object
is an instance of a specific driver (and not just any driver in
that subsystem).
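
A hedged handler sketch (all identifiers here are assumed for
illustration, following the description above):

    Z_SYSCALL_HANDLER(my_uart_special_op, dev)
    {
            /* Validates subsystem, init state and caller permission,
             * and additionally that dev is bound to this driver.
             */
            Z_OOPS(Z_SYSCALL_SPECIFIC_DRIVER(dev, K_OBJ_DRIVER_UART,
                                             &my_uart_driver_api));
            return _impl_my_uart_special_op((struct device *)dev);
    }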

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-12-27 20:31:58 -05:00
Flavio Ceolin
b7287ceb4e kernel: syscall: Object validation checks boolean statement
The function that checks if an object is valid is essentially a boolean
function. Just changing its return type to reflect it.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-30 08:05:11 -08:00
Pawel Dunaj
baea22407d kernel: Always set clock expiry in sync with timeout module
The system must not set the clock expiry via a backdoor, as it may
result in unbounded time drift of all scheduled timeouts.

Fixes: #11502

Signed-off-by: Pawel Dunaj <pawel.dunaj@nordicsemi.no>
2018-11-26 12:24:59 +01:00
Andrew Boie
42cfd4ff26 kernel: expose k_busy_wait() to user mode
If we just had the kernel's implementation, we could
move this to lib/, but possible arch-specific
implementations dictate that we make this a
syscall.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-15 16:20:36 -05:00
Flavio Ceolin
a406b88fca kernel: Remove duplicated identifier
There were a struct and a variable called _kernel. This is error prone
and a MISRA-C violation. Change the struct to have a unique
identifier.

MISRA-C rule 5.8
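
Schematically (the new tag name is an assumption):

    /* Before: the type tag and the object share one identifier */
    struct _kernel { /* ... */ };
    struct _kernel _kernel;

    /* After: the type gets a distinct tag */
    struct z_kernel { /* ... */ };
    struct z_kernel _kernel;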

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin
a3dddedab6 kernel: Use distinct macro names
There are both a struct and a macro called _ready_q, which is error
prone. Just remove it.

MISRA-C rule 5.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-31 19:43:47 -04:00
Marek Pieta
e87193896a subsys: debug: tracing: Fix thread tracing
This change fixes an issue with thread execution tracing.

Signed-off-by: Marek Pieta <Marek.Pieta@nordicsemi.no>
2018-10-29 22:09:12 -04:00
Adithya Baglody
6176692f4b kernel: ksched.h: Incorrect argument type in _pend_current_thread
In _pend_current_thread the argument key is always an unsigned
integer type, yet this function forces it to become a signed
integer. This is dangerous behavior and can't be trusted to
work as expected.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 12:17:58 -04:00
Paul Sokolovsky
b779ea2d19 kernel: syscall_handler.h: Typo fix in docstring
Should be "fails" instead of "files".

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-10-17 10:32:10 -04:00
Adithya Baglody
1424561252 kernel: sched: Fixed incorrect argument type of _reschedule()
This API shouldn't take an int type; instead it should take
u32_t. The argument has to be similar to irq_lock() and
irq_unlock().

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 07:59:51 -04:00
Andy Ross
cfe62038d2 kernel: Checkpatch fixups
I was pretty careful, but these snuck in.  Most of them are due to
overbroad string replacements in comments.  The pull request is very
large, and I'm too lazy to find exactly where to back-merge all of
these.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
987c0e5fc1 kernel: New timeout implementation
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version.  The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed.  Advantages:

* Properly spinlocked and SMP-aware.  The earlier timer implementation
  relied on only CPU 0 doing timeout work, and on an irq_lock() being
  taken before entry (something that was violated in a few spots).
  Now any CPU can wake up for an event (or all of them) and everything
  works correctly.

* The *_thread_timeout() API is now expressible as a clean wrapping
(just one-liners) around the lower-level interface based on function
  pointer callbacks.  As a result the timeout objects no longer need
  to store backpointers to the thread and wait_q and have shrunk by
  33%.

* MUCH smaller, to the tune of hundreds of lines of code removed.

* Future proof, in that all operations on the queue are now fronted by
  just two entry points (_add_timeout() and z_clock_announce()) which
  can easily be augmented with fancier data structures.
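
The two fronting entry points, roughly as declared in this era
(signatures assumed from the description above):

    void _add_timeout(struct _timeout *to, _timeout_func_t fn,
                      s32_t ticks);
    void z_clock_announce(s32_t ticks);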

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
52e444bc05 kernel: Move timeout_remaining API
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).

Move it to where it belongs.  Also have it return ticks instead of ms
to conform to scheme in the rest of the timeout API.  And rename it to
a more standard zephyr name.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
fe82f1c2af kernel/timeout: Refactor API
Add the callback parameter to add_timeout(), and remove the thread
argument.  Now the "low level" timeout API can be expressed without
reference to threads.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
5d203523b6 kernel/timeout: Eliminate wait_q parameters from API
Now that this is known to be an unused value, remove it from the API.
Note that this caught a few spots where we were passing values (a
non-NULL wait_q with a NULL thread handle) that were always being
ignored before.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
2ae8f50936 kernel/include: Move stubs for timeout functions to their declarations
The timeout_q.h scheme, in which the real functions were declared
there but the stubs for when there was no clock lived in wait_q.h, was
senselessly weird.  Put them in the same file.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
9098a45c84 kernel: New timeslicing implementation
Instead of checking every time we hit the low-level context switch
path to see if the new thread has a "partner" with which it needs to
share time, just run the slice timer always and reset it from the
scheduler at the points where it has already decided a switch needs to
happen.  In TICKLESS_KERNEL situations, we pay the cost of extra timer
interrupts at ~10Hz or whatever, which is low (note also that this
kind of regular wakeup architecture is required on SMP anyway so the
scheduler can "notice" threads scheduled by other CPUs).  Advantages:

1. Much simpler logic.  Significantly smaller code.  No variance or
   dependence on tickless modes or timer driver (beyond setting a
   simple timeout).

2. No arch-specific assembly integration with _Swap() needed

3. Better performance on many workloads, as the accounting now happens
   at most once per timer interrupt (~5 Hz) and at true rescheduling
   points, not on every unrelated context switch and interrupt return.

4. It's SMP-safe.  The previous scheme kept the slice ticks as a
   global variable, which was an unnoticed bug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
8b54953e4b kernel/sys_clock: Fix build when !SYS_CLOCK_EXISTS
This got broken.  Add some #ifery to handle the case.  Not clean, will
clean up in a future pass once the API is final.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
1c08aefe56 kernel/timeoutq: Uninline the timeout methods
There was no good reason to have these rather large functions in a
header.  Put them into sys_clock.c for now, pending rework to the
system.

Now the API is clearly visible in a small header.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
722a888ef7 timer: Clean up hairy tickless APIs
The tickless driver API had a bunch of "hairy" calls which forced the
timer drivers to do needless low-level accounting for the benefit of
the kernel, and which the drivers all proceeded to implement via cut
and paste.  Specifically the "program_time" calls forced the driver to
expose to the kernel exactly when the next interrupt was due and how
much time had elapsed, in a parallel API to the existing "what time is
it" and "announce a tick" interfaces that carry the same information.

Remove these from the kernel, replacing them with synthesized logic
written in terms of the simpler APIs.

In some cases there will be a performance impact due to the use of the
64 bit uptime call, but that will go away soon.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross
ab488277bc drivers/timer: Unify timeout setting APIs
The existing API had two almost identical functions: _set_time() and
_timer_idle_enter().  Both simply instruct the timer driver to set the
next timer interrupt expiration appropriately so that the call to
z_clock_announce() will be made at the requested number of ticks.  On
most/all hardware, these should be implementable identically.

Unfortunately because they are specified differently, existing drivers
have implemented them in parallel.

Specify a new, unified, z_clock_set_timeout().  Document it clearly
for implementors.  And provide a shim layer for legacy drivers that
will continue to use the old functions.

Note that this patch fixes an existing bug found by inspection: the
old call to _set_time() out of z_clock_announce() failed to test for
the "wait forever" case in the situation where clock_always_on is
true, meaning that a system that reached this point and then never set
another timeout would freeze its uptime clock incorrectly.
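
The unified hook, roughly (signature assumed; the flag marks idle
entry):

    /* Arrange for z_clock_announce() to be called in 'ticks' ticks;
     * 'idle' hints that the kernel is entering its idle state.
     */
    void z_clock_set_timeout(s32_t ticks, bool idle);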

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Flavio Ceolin
18af4c6299 kernel: Fix overflow test problem introduced in 92ea2f9
The builtin function __builtin_umul_overflow returns a boolean and
should not be checked as an integer.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-04 05:20:29 -07:00
Flavio Ceolin
6fc84feaf2 kernel: syscalls: Change handlers namespace
According to C99, the first 31 characters of an external identifier
must be unique. Shorten the namespace of the generated objects to
achieve this.

C99 - 5.2.4.1
MISRA-C rule 5.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 07:58:19 +05:30
Flavio Ceolin
ea716bf023 kernel: Explicitly comparing pointer with NULL
MISRA-C rule: 14.4
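
A representative (hypothetical) before/after:

    /* Before: pointer used directly as a condition */
    if (ptr) {
            do_work(ptr);
    }

    /* After: explicit NULL comparison */
    if (ptr != NULL) {
            do_work(ptr);
    }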

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin
92ea2f9189 kernel: Calling Z_SYSCALL_VERIFY_MSG with boolean expressions
Explicitly making a boolean expression when calling
Z_SYSCALL_VERIFY_MSG macro.

MISRA-C rule: 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin
02ed85bd82 kernel: sched: Change boolean APIs to return bool
Change APIs that essentially return a boolean expression (0 for
false, 1 for true) to return a bool.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin
6fdc56d286 kernel: Using boolean types for boolean constants
Make boolean expressions use boolean types.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Anas Nashif
57554055d2 kernel: add a new API for setting thread names
Add k_thread_name_set() and enable thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.
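
A minimal usage sketch (the thread name is illustrative; effective
only with CONFIG_THREAD_MONITOR, per the above):

    k_thread_name_set(k_current_get(), "sensor_poll");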

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
Daniel Leung
7228a60173 kernel: Fix compilation errors when CONFIG_TIMESLICING=n
Add ifdef guard to the z_reset_timeslice() to fix compilation
errors when CONFIG_TIMESLICING is disabled.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2018-09-25 12:54:58 +05:30
Findlay Feng
3c834bdf27 kernel: Fix list-node add again corruption case in timeout handling
A node on the temporary timeout list cannot continue to be used to
index the next node after it has been added back to the queue.

Signed-off-by: Findlay Feng <i@fengch.me>
2018-09-21 13:29:09 -04:00
Flavio Ceolin
b3d9202704 kernel: Using boolean constants instead of 0 or 1
MISRA-C requires that every controlling expression of an if or while
statement have a boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Sebastian Bøe
878a0f050e ld: Put 'sizeof(struct device)' in the generated offsets header
Rename _DEVICE_STRUCT_SIZE to _DEVICE_STRUCT_SIZEOF. This causes it to
be picked up by the script 'gen_offset_header.py' and inserted into the
header file 'include/generated/offsets.h'.

Renaming from x_SIZE to x_SIZEOF will align its name with the other
symbols that denote a struct's size, like K_THREAD_SIZEOF.

Furthermore, it will allow the symbol to be accessed through a header
file define, instead of only as an extern symbol. This is more
flexible, and more aligned with the other symbols in offsets.

Finally, if we are able to move all of offsets.c's symbols into the
offsets.h header file, we will be able to remove offsets.o from the
link and thereby simplify the linking process.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-09-18 16:23:40 +02:00
Flavio Ceolin
a7fffa9e00 headers: Fix headers guards
Any identifier starting with an underscore followed by an uppercase
letter, or with a second underscore, is reserved according to C99.

We have *many* violations in Zephyr's code; this commit tackles
only the violations caused by header guards. It also takes the
opportunity to normalize them, using the filename in uppercase and
replacing the dot with an underscore, e.g. file.h -> FILE_H.
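
Following the file.h -> FILE_H example above:

    /* Before: reserved identifier (leading underscore + uppercase) */
    #ifndef _FILE_H__
    #define _FILE_H__
    /* declarations */
    #endif

    /* After */
    #ifndef FILE_H
    #define FILE_H
    /* declarations */
    #endif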

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-17 15:49:26 -04:00
Flavio Ceolin
98c64b6d92 kernel: Change _reschedule signature
_reschedule's return value is not used anywhere, except erroneously by
pthread_barrier_wait.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Flavio Ceolin
8a9ba10c2c kernel: swap: Fix __swap signature
The __swap function was returning -EAGAIN in some cases, though its
return value was declared as unsigned int.

This commit changes the function to return int, since it can return a
negative value and its return value was already being propagated as
int.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Andy Ross
9ecc4ead68 sched: Properly account for timeslicing in tickless mode
When adding a new runnable thread in tickless mode, we need to detect
whether it will timeslice with the running thread and reset the timer,
otherwise it won't get any CPU time until the next interrupt fires at
some indeterminate time in the future.

This fixes the specific bug discussed in #7193, but the broader
problem of tickless and timeslicing interacting badly remains.  The
code as it exists needs some rework to avoid all the #ifdef mess.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-29 10:01:41 -04:00
Anas Nashif
0e07f8e97a Revert "sched: Properly account for timeslicing in tickless mode"
This reverts commit bc6fb65c81.

Causes MPU faults on multiple platforms.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-27 18:39:51 -04:00
Andy Ross
bc6fb65c81 sched: Properly account for timeslicing in tickless mode
When adding a new runnable thread in tickless mode, we need to detect
whether it will timeslice with the running thread and reset the
timer, otherwise it won't get any CPU time until the next interrupt
fires at some indeterminate time in the future.

This fixes the specific bug discussed in #7193, but the broader
problem of tickless and timeslicing interacting badly remains.  The
code as it exists needs some rework to avoid all the #ifdef mess.

Note that the patch also moves _ready_thread() from a ksched.h inline
to sched.c.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-27 13:19:29 -04:00
Andy Ross
d8d5ec3f91 kernel: Fix double-list-removal corruption case in timeout handling
This fixes #8669, and is distressingly subtle for a one-line patch:

The list iteration code in _handle_expired_timeouts() would remove the
timeout from our (temporary -- the dlist header is on the stack of our
calling function) list of expired timeouts before invoking the
handler.  But sys_dlist_remove() only fixes up the containing list
pointers, leaving garbage in the node.  If the action of that handler
is to re-add the timeout (which is very common!) then that will then
try to remove it AGAIN from the same list.

Even then, the common case is that the expired list contains only one
item, so the result is a perfectly valid empty list that affects
nothing.  But if you have more than one, you get a corrupt cycle in
the iteration list and things get weird.

As it happens, there's no value in trying to remove this timeout from
the temporary list at all.  Just iterate over it naturally.

Really, this design is fragile: we shouldn't be reusing the list nodes
in struct _timeout for this purpose and should figure out some other
mechanism.  But this fix should be good for now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-26 19:39:52 -07:00
Anas Nashif
483910ab4b systemview: add support natively using tracing hooks
Add needed hooks as a subsystem that can be enabled in any application.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif
a2248782a2 kernel: event_logger: remove kernel_event_logger
Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif
b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Adithya Baglody
bb918d85f8 tests: benchmarks: timing_info: Enable benchmarks for xtensa.
This patch provides support needed to get timing related
information from xtensa based SOC.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-08-20 06:51:25 -07:00
Flavio Ceolin
8aec087268 kernel: Fix bitwise operators with unsigned operators
Bitwise operators should be used only with unsigned integer operands
because the results of bitwise operations on signed integers are
implementation-defined.
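
A representative (hypothetical) example:

    int s = -1;
    int bad = s >> 2;              /* implementation-defined result */
    u32_t good = 0xFFFFFFFFU >> 2; /* well-defined: 0x3FFFFFFF */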

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Flavio Ceolin
ec462f872c kernel: Remove unused definition
The _thread definition is not used; just remove it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Sebastian Bøe
1186f5bb29 cmake: Deprecate the 2 symbols _SYSCALL_{LIMIT,BAD}
There exist two symbols that became equivalent when PR #9383 was
merged: _SYSCALL_LIMIT and K_SYSCALL_LIMIT. This patch deprecates the
redundant _SYSCALL_LIMIT symbol.

_SYSCALL_LIMIT was initially introduced because, before PR #9383 was
merged, K_SYSCALL_LIMIT was an enum, which couldn't be included into
assembly files. PR #9383 converted it into a define, which can be
included into assembly files, making _SYSCALL_LIMIT redundant.

Likewise for _SYSCALL_BAD.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-08-15 11:46:51 -07:00
Andrew Boie
83fda7c68f userspace: add _k_object_recycle()
This is used to reset the permissions on an object while
also initializing it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-08-13 07:19:39 -07:00
Andrew Boie
c8188f6722 userspace: add functions for copying to/from user
We now have functions for handling all the details of copying
data to/from user mode, including C strings and copying data
into resource pool allocations.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-31 07:47:15 -07:00
Andrew Boie
1f2eedff18 kernel: add z_arch_user_string_nlen prototype
This is used to measure the length of potentially unsafe
strings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-31 07:47:15 -07:00
Andy Ross
9f06a35450 kernel: Add the old "multi queue" scheduler algorithm as an option
Zephyr 1.12 removed the old scheduler and replaced it with the choice
of a "dumb" list or a balanced tree.  But the old multi-queue
algorithm is still useful in the space between these two (applications
with large-ish numbers of runnable threads, but that don't need fancy
features like EDF or SMP affinity).  So add it as a
CONFIG_SCHED_MULTIQ option.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-07-03 17:09:15 -04:00
Andy Ross
225c74bbdf kernel/Kconfig: Reorganize wait_q and sched algorithm choices
Make these "choice" items instead of a single boolean that implies the
element unset.

Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (it's constant factor overhead is
bigger than a list's!)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-07-03 17:09:15 -04:00
Andy Ross
55a7e46b66 kernel/poll: Remove POLLING thread state bit
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state".  It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.

Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll.  The disadvantage is the 4
bytes of thread space needed.  Advantages:

+ Cleaner API, it's now internal to poll instead of being globally
  visible.

+ The thread_state bit space is just one byte, and was almost full
  already.

+ Smaller code to write/test a full word and not a bitfield

+ Words are atomic, so there's no need for one of the irq lock/unlock
  pairs.
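
A hedged sketch of the resulting stack-resident struct (field names
assumed):

    struct _poller {
            struct k_thread *thread;
            volatile int is_polling;  /* a full word, not a state bit */
    };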

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:25:38 -04:00
Andrew Boie
2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Andy Ross
eace1df539 kernel/sched: Fix SMP scheduling
Recent changes post-scheduler-rewrite broke scheduling on SMP:

The "preempt_ok" feature added to isolate preemption points wasn't
honored in SMP mode.  Fix this by adding a "swap_ok" field to the CPU
record (not the thread) which is set at the same time out of
update_cache().

The "queued" flag wasn't being maintained correctly when swapping away
from _current (it was added back to the queue, but the flag wasn't
set).

Abstract out a "should_preempt()" predicate so SMP and uniprocessor
paths share the same logic, which is distressingly subtle.

There were two places where _Swap() was predicated on
_get_next_ready_thread() != _current.  That's no longer a benign
optimization in SMP, where the former function REMOVES the next thread
from the queue.  Just call _Swap() directly in SMP, which has a
unified C implementation that does this test already.  Don't change
other architectures in case it exposes bugs with _Swap() switching
back to the same thread (it should work, I just don't want to break
anything).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-31 14:02:03 -04:00
Andrew Boie
538754cb28 kernel: handle early entropy issues
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 19:38:06 -07:00
Kumar Gala
177bbbd35f kernel: Fix trivial typo in CONFIG_WAIT_Q_FAST
The Kconfig option is CONFIG_WAITQ_FAST not CONFIG_WAIT_Q_FAST.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-05-23 17:57:06 -04:00
Andy Ross
4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
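
A hedged usage sketch (deadline units assumed to be hardware cycles):

    /* Among threads of equal priority, schedule this one as if its
     * deadline were 10000 cycles from now.
     */
    k_thread_deadline_set(k_current_get(), 10000);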

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andy Ross
1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross
c0ba11b281 kernel: Don't _arch_switch() to yourself
The SMP testing missed the case where _Swap() decides to return back
into the _current.  Obviously there is no valid switch handle for the
running thread into which we can restore, and everything blows up.
(What happened is that the new scheduler code opened up a spot where
k_thread_priority_set() does a _reschedule() unconditionally and
doesn't check to see whether or not it's needed like the old code).

But that isn't incorrect!  It's entirely possible that _Swap() may
find that no thread is runnable except _current (due, for example, to
another CPU racing the other thread you expected off to sleep or
something).  Don't blow up, check and return a noop.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross
ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andy Ross
4ca0e07088 kernel: Add _unpend_all convenience wrapper to scheduler API
Refactoring.  Mempool wants to unpend all threads at once.  It's
cleaner to do this in the scheduler instead of the IPC code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie
8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers all
implicitly would generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value, if nonzero is returned
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie
92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
42a2c96422 newlib: fix heap user mode access for MPU devices
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.

MPU devices that don't have these restrictions allocate
the heap as normal.

In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.

Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.

On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.

Fixes: #6814

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-10 15:09:02 -07:00
Andy Ross
15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00