Commit Graph

238 Commits

Lauren Murphy
4c85b4606b kernel: k_sleep: fix return value for absolute timeout
Fixes calculation of remaining ticks returned from z_tick_sleep
so that it takes absolute timeouts into account.

Fixes #32506

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-05-26 18:11:52 -05:00
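A minimal sketch of the idea behind the fix above, assuming the internal Z_TICK_ABS() timeout encoding and sys_clock_tick_get_32(): the remaining time reported to the caller is measured against the absolute wakeup tick rather than against a relative delta.

    k_ticks_t expected_wakeup_ticks;

    if (Z_TICK_ABS(ticks) <= 0) {
        /* relative timeout: wake up "ticks" from now */
        expected_wakeup_ticks = ticks + sys_clock_tick_get_32();
    } else {
        /* absolute timeout: the encoded tick is the wakeup time */
        expected_wakeup_ticks = Z_TICK_ABS(ticks);
    }

    /* ... sleep; may be woken early, e.g. by k_wakeup() ... */

    /* remaining ticks returned to the caller */
    return expected_wakeup_ticks - sys_clock_tick_get_32();
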
Maksim Masalski
970820e92d sched: create unique function name
In include/kernel/thread.h, "struct _thread_base" has a member
called "_wait_q_t *pended_on".
At the same time, kernel/sched.c has a static function called
"static _wait_q_t *pended_on()".

The code scanning tool reports a violation (MISRA R5.9: static
object reused) because thread.h is included in sched.c.

Rename the function to avoid this clash and any misreading in the future.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-25 19:06:21 -04:00
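The clash described above, illustrated (the new function name below, pended_on_thread(), is shown only as an example of a unique rename):

    /* include/kernel/thread.h */
    struct _thread_base {
        /* ... */
        _wait_q_t *pended_on;    /* member named "pended_on" */
    };

    /* kernel/sched.c -- the same identifier was reused for a static
     * function, which MISRA R5.9 flags once thread.h is pulled in here.
     * Renaming the function resolves it:
     */
    static _wait_q_t *pended_on_thread(struct k_thread *thread)
    {
        __ASSERT_NO_MSG(thread->base.pended_on != NULL);

        return thread->base.pended_on;
    }
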
Andy Ross
851d14afc8 kernel/sched: Remove "cooperative scheduling only" special cases
The scheduler has historically had an API where an application can
inform the kernel that it will never create a thread that can be
preempted, and the kernel and architecture layer would use that as an
optimization hint to eliminate some code paths.

Those optimizations have dwindled to almost nothing at this point, and
they're now objectively a smaller impact than the special casing that
was required to handle the idle thread (which, obviously, must always
be preemptible).

Fix this by eliminating the idea of "cooperative only" and ensuring
that there will always be at least one preemptible priority with value
>=0.  CONFIG_NUM_PREEMPT_PRIORITIES now specifies the number of
user-accessible priorities other than the idle thread.

The only remaining workaround is that some older architectures (and
also SPARC) use the CONFIG_PREEMPT_ENABLED=n state as a hint to skip
thread switching on interrupt exit.  So detect exactly those platforms
and implement a minimal workaround in the idle loop (basically "just
call swap()") instead, with a big explanation.

Note that this also fixes a bug in one of the philosophers samples,
where it would ask for 6 cooperative priorities but then use values -7
through -2.  It was assuming the kernel would magically create a
cooperative priority for its idle thread, which wasn't correct even
before.

Fixes #34584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-24 23:38:16 -04:00
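Illustrative priority layout under this scheme (Kconfig values are assumed for the example):

    /*
     * CONFIG_NUM_COOP_PRIORITIES=6     -> cooperative:  -6 .. -1
     * CONFIG_NUM_PREEMPT_PRIORITIES=15 -> preemptible:   0 .. 14
     *
     * The idle thread runs below all of these; it no longer consumes a
     * user-visible (cooperative) priority of its own.
     */
    #define WORKER_PRIO K_PRIO_PREEMPT(5)   /* hypothetical application priority */
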
Torbjörn Leksell
f17144349b Tracing: Thread tracing
Add thread tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Anas Nashif
6df4405cca doc: fix typos
Fix various typos in the docs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-30 16:03:08 -04:00
Krzysztof Chruscinski
7dcff6ecfe kernel: Move _kernel from sched to init
The _kernel struct can be used even when multithreading is disabled;
in that case sched.c may not be compiled.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Anas Nashif
3f4f3f6c43 kernel: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
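The pattern applied, as a hedged before/after example (the names are hypothetical):

    uint32_t pending = get_pending_count();   /* hypothetical value */

    if (pending) {           /* before: implicit Boolean test */
        process();
    }

    if (pending != 0U) {     /* after: explicit comparison against zero */
        process();
    }
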
Anas Nashif
25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif
bbbc38ba8f kernel: Make both operands of operators of same essential type category
Add a 'U' suffix to values when computing and comparing against
unsigned variables, along with other related fixes for the same MISRA
rule (10.4).

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
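A small before/after illustration of the essential-type fix:

    uint32_t elapsed_cycles = k_cycle_get_32();

    if (elapsed_cycles > 100) {    /* before: signed literal vs. unsigned value */
        /* ... */
    }

    if (elapsed_cycles > 100U) {   /* after: both operands essentially unsigned */
        /* ... */
    }
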
Anas Nashif
5c90ceb105 clock: rename z_tick_get_32 -> sys_clock_tick_get_32
Do not use z_ for internal APIs; z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
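The rename in before/after form:

    /* before */ uint32_t now = z_tick_get_32();
    /* after  */ uint32_t now = sys_clock_tick_get_32();
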
Anas Nashif
9c1efe6b4b clock: remove z_ from semi-public APIs
The clock/timer APIs are not application facing APIs, however, similar
to arch_ and a few other APIs they are available to implement drivers
and add support for new hardware and are documented and available to be
used outside of the clock/kernel subsystems.

Remove the leading z_ and provide them as clock_* APIs for someone
writing a new timer driver to use.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Lauren Murphy
d88ce65463 kernel/sched: only send IPI to abort thread if hardware supports it
Wrap the arch_sched_ipi() call in z_thread_abort() with an ifdef
checking for hardware support of IPI.

Fixes #32723

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-03-10 14:27:33 -05:00
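A sketch of the guard (CONFIG_SCHED_IPI_SUPPORTED is the Kconfig symbol for IPI-capable hardware; the surrounding context is simplified):

    #if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
        /* nudge the other CPUs so they notice the thread being aborted */
        arch_sched_ipi();
    #endif
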
Spoorthy Priya Yerabolu
4118ed1d4d kernel: sched: removing dead code
Due to recent changes to the scheduler, z_find_first_thread_to_unpend
and z_remove_thread_from_ready_q are no longer used, so remove the
dead code.

fixes: #32691

Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
2021-03-05 11:05:25 +03:00
Peter Bigot
0259c864df kernel: add private scheduler APIs
These functions are a subset of proposed public APIs to clean up
several issues related to safely handling waking of threads.  They
have been made private as their interface may change, but their use
will simplify the reimplementation of the k_work functionality.

See: https://github.com/zephyrproject-rtos/zephyr/pull/29668

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-03 20:06:00 -05:00
James Harris
6543e06914 kernel: sched: avoid unnecessary lock in z_impl_k_yield
`z_impl_k_yield` unlocked sched_spinlock, only to lock it again
immediately, do a little bit more work, then unlock it again.
This causes performance issues on SMP, where `sched_spinlock`
is often fairly highly contended and cores often end up spinning
for quite a while waiting to retake the lock in `z_swap_unlocked`.

Instead directly pass the spinlock key to `z_swap` and avoid the
extra lock+unlock.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-02 14:35:21 -05:00
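A simplified sketch of the approach, assuming the internal z_swap(lock, key) helper: keep the key from the initial k_spin_lock() and hand it straight to z_swap() instead of unlocking and then calling z_swap_unlocked().

    static void z_impl_k_yield(void)
    {
        k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

        /* ... requeue _current behind peers of equal priority ... */

        /* z_swap() releases sched_spinlock as part of the context switch */
        (void)z_swap(&sched_spinlock, key);
    }
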
James Harris
2cd0f66515 kernel: sched: change to 3-way thread priority comparison
`z_is_t1_higher_prio_than_t2` was being called twice in both the
context-switch fastpath and in `z_priq_rb_lessthan`, just to
deal with priority ties. In addition, the API was error-prone
(and too much in the fastpath to be able to assert its invariants)
- see also #32710 for a previous example of this API breaking
and returning a>b but also b>a.

Replacing this with a direct 3-way comparison `z_cmp_t1_prio_with_t2`
sidesteps most of these issues. There is still a concern that
`sgn(z_cmp_t1_prio_with_t2(a,b)) != -sgn(z_cmp_t1_prio_with_t2(b,a))`
but I don't see any way to alleviate this aside from adding an
assert to the fastpath.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-02 14:27:14 -05:00
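Illustrative shape of the 3-way comparator (simplified; the real predicate also accounts for deadlines). Lower numeric priority values are higher priority in Zephyr, so the subtraction is inverted: a positive result means t1 wins, a negative result means t2 wins, and zero is a tie.

    int32_t z_cmp_t1_prio_with_t2(struct k_thread *t1, struct k_thread *t2)
    {
        return (int32_t)t2->base.prio - (int32_t)t1->base.prio;
    }
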
James Harris
3330ab12d8 kernel: fix yielding between tasks with same deadline
Previously two tasks with the same deadline and priority would
always have `z_is_t1_higher_prio_than_t2` `true` in both directions.

This is logically inconsistent, and results in `k_yield` not actually
yielding between identical threads.

Signed-off-by: James Harris <james.harris@intel.com>
2021-02-27 10:25:47 +01:00
Andy Ross
6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
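The public API is unchanged; a minimal usage example of the abort/join pair (worker_thread is a placeholder thread object):

    /* from some other thread */
    k_thread_abort(&worker_thread);

    /* block until the aborted thread has fully terminated */
    (void)k_thread_join(&worker_thread, K_FOREVER);
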
Andy Ross
c0c8cb0e97 kernel: Remove abort and join implementation (UNBISECTABLE)
THIS COMMIT DELIBERATELY BREAKS BISECTABILITY FOR EASE OF REVIEW.
SKIP IF YOU LAND HERE.

Remove the existing implementation of k_thread_abort(),
k_thread_join(), and the attendant facilities in the thread subsystem
and idle thread that support them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross
419f37043b kernel/sched: Clamp minimum timeslice when TICKLESS
When the kernel is TICKLESS, timeouts are set as needed, and drivers
all have some minimum amount of time before which they can reliably
schedule an interrupt.  When this happens, drivers will kick the
requested interrupt out by one tick.  This means that it's not
reliably possible to get a timeout set for "one tick in the
future"[1].

And attempting to do that is dangerous anyway.  If the driver will
delay a one-tick interrupt, then code that repeatedly tries to
schedule an imminent interrupt may end up in a state where it is
constantly pushing the interrupt out into the future, and timer
interrupts stop arriving!  The timeout layer actually has protection
against this case.

Finally getting to the point: in recent changes, the timeslice layer
lost its integration with the "imminent" test in the timeout code, so
it's now able to run into this situation: very rapidly context
switching code (or rapidly arriving interrupts) will have the effect
of infinitely[2] delaying timeouts and stalling the whole timeout
subsystem.

Don't try to be fancy.  Just clamp timeslice duration such that a
slice is 2 ticks at minimum and we'll never hit the problem.  Adjust
the two tests that were explicitly requesting very short slice rates.

[1] Of course, the tradeoff is that the tick rate can be 100x higher
or more, so on balance tickless is a huge win.

[2] Actually it only lasts until a 31 bit signed rollover in the HPET
cycle count in practice.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
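The clamp itself is tiny; a hedged sketch of the idea (variable and helper names assumed):

    /* Never arm a slice shorter than 2 ticks, so repeated slicing cannot
     * keep pushing an "imminent" timeout out indefinitely.
     */
    slice_ticks = MAX(k_ms_to_ticks_ceil32(slice_ms), 2);
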
Andy Ross
a202670c18 kernel/sched: Remove now-spurious SWAP_NONATOMIC workaround
Recent work to normalize use of the thread QUEUED state bit means that
we never attempt to remove unqueued threads from the low-level run
queue.  So the old workaround for SWAP_NONATOMIC that was trying to
detect this condition isn't necessary anymore.

Which is serendipitous, because it was written to encode some very
specific logic about the circumstances where _current could be
dequeued that I'd like to be able to break.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross
05c468f594 kernel/sched: Make z_ready_thread() safe vs. already-running threads
This is part of the scheduler API, and was always just a synchronized
wrapper around the internal ready_thread() function.  But where the
internal users seem to be careful not to call it on threads that are
not known to be already queued or running, the general users in the
IPC code seem to be less strict.

Add a simple test to detect the case where a thread is already
running.  Right now this just loops over the array of CPUs, so is O(N)
in the CPU count even though N is never more than four for us
currently.  But this is possible without modifying data structures.  A
more scalable way to do this if we ever need to run on very parallel
systems would be to use another state bit for RUNNING, or to keep a
backpointer in the thread struct to the CPU it's running on, etc...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
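A sketch of the O(N)-in-CPUs check described above (the helper name is assumed for illustration):

    static inline bool thread_running_elsewhere(struct k_thread *thread)
    {
        for (int i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
            if (_kernel.cpus[i].current == thread) {
                return true;    /* already running; don't re-queue it */
            }
        }

        return false;
    }
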
Andy Ross
6b84ab3830 kernel/sched: Adjust locking in z_swap()
Swap was originally written to use the scheduler lock just to select a
new thread, but it would be nice to be able to rely on scheduler
atomicity later in the process (in particular it would be nice if the
assignment to cpu.current could be seen atomically).  Rework the code
a bit so that swap takes the lock itself and holds it until just
before the call to arch_switch().

Note that the local interrupt mask has always been required to be held
across the swap, so extending the lock here has no effect on latency
at all on uniprocessor setups, and even on SMP only affects average
latency and not worst case.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross
37866336f9 kernel/sched: Fix race between thread wakeup timeout and abort
Aborted threads will cancel their timeouts, but the timeout subsystem
isn't protected under the same lock so it's possible for a timeout to
fire just as a thread is being aborted and wake it up unexpectedly.
Check the state before blowing anything up.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andrei Emeltchenko
377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving macro LOCKED() to the correct
kernel_internal.h header.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
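For reference, the general shape of the LOCKED() helper being moved: a one-iteration for-loop that holds the given spinlock for the duration of its body.

    #define LOCKED(lck) \
        for (k_spinlock_key_t __i = {}, __key = k_spin_lock(lck); \
             !__i.key; \
             k_spin_unlock(lck, __key), __i.key = 1)
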
Andy Ross
4ff457113e kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.

Example:
   * CPU0 is idle, CPU1 is running thread A
   * CPU1 makes high priority thread B runnable
   * CPU1 reaches a schedule point (or returns from an interrupt) and
     decides to run thread B instead
   * CPU0 simultaneously takes its IPI and returns, selecting thread A

Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable.  So we have a deadlock, each CPU is spinning waiting for the
other!

Actually, in practice this seems not to happen on existing hardware
platforms, it's only exercisable in emulation.  The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears.  I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly.  In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.

The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch(), when we are guaranteed to reach the end.

Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross
91946ef21c kernel/sched: Refactor, unify management of QUEUED state
The QUEUED state flag was managed separately from the run queue
insertion/deletion, and the logic (while AFAICT perfectly correct) was
tangled in a few places trying to keep them in sync.  Put the
management of both behind a queue_thread()/dequeue_thread() API for
clarity.  The ALWAYS_INLINE usage seems to be working to get the
compiler to condense the resulting multiple assignments.  No behavior
change.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross
dd43221540 kernel/sched: Fix race with switch handle
The "null out the switch handle and put it back" code in the swap
implementation is a holdover from some defensive coding (not wanting
to break the case where we picked our current thread), but it hides a
subtle SMP race: when that field goes NULL, another CPU that may have
selected that thread (which is to say, our current thread) as its next
to run will be spinning on that to detect when the field goes
non-NULL.  So it will get the signal to move on when we revert the
value, when clearly we are still running on the stack!

In practice this was found on x86 which poisons the switch context
such that it crashes instantly.

Instead, be firm about state and always set the switch handle of a
currently running thread to NULL immediately before it starts running:
right before entering arch_switch() and symmetrically on the interrupt
exit path.

Fixes #28105

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross
1ba7414029 kernel/sched: Correct coherence assert
Some legacy spots in our IPC layer (legally) pass a NULL wait queue to
pend().  Allow this in the coherence assertion.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross
604f0f44b6 kernel/sched: Add missing lock around waitq unpend calls
The two calls to unpend a thread from a wait queue were inexplicably*
unsynchronized, as James Harris discovered.  Rework them to call the
lowest-level primitives so we can wrap the process inside the scheduler
lock.

Fixes #32136

* I took a brief look.  What seems to have happened here is that these
  were originally synchronized via an implicit lock taken by an outer caller
  (remember the original Uniprocessor irq_lock() API is a recursive
  lock), and they were mostly implemented in terms of middle-level
  calls that were themselves locked.  So those got ported over to the
  newer spinlock but the outer wrapper layer got forgotten.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-10 07:43:18 -05:00
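A hedged sketch of the reworked, locked unpend path (internal helper names as they appear in kernel/sched.c around this time; treat the details as illustrative):

    struct k_thread *z_unpend_first_thread(_wait_q_t *wait_q)
    {
        struct k_thread *thread = NULL;

        LOCKED(&sched_spinlock) {
            thread = _priq_wait_best(&wait_q->waitq);
            if (thread != NULL) {
                unpend_thread_no_timeout(thread);
                (void)z_abort_thread_timeout(thread);
            }
        }

        return thread;
    }
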
Anas Nashif
39f632e7f0 kernel: fix usage of KERNEL_COHERENCE macro
Add missing CONFIG_ to KERNEL_COHERENCE usage in code.

Fixes #30380

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-03 10:42:04 -05:00
Enjia Mai
53ca709828 tests: coverage: exclude the CODE UNREACHABLE of code coverage
1. Exclude the CODE UNREACHABLE line while generating the coverage
report.
2. Exclude the deprecated memory domain API when calculating code
coverage.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2021-01-15 12:42:00 -05:00
Andy Ross
ef626571b2 kernel/sched: Optimize deadline comparison
Needing to check the current cycle time (which involves a spinlock and
register read on most architectures) is wasteful in the scheduler
priority predicate, which is a hot path.  If we "burn" one bit of
precision (and document the rule), we can do the comparison without
knowing the current time.

2^31 cycles is still far longer than a live deadline thread in any
legitimate realtime app should ever live before being scheduled.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-01-15 11:35:50 -05:00
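The trick, as a sketch: once one bit of precision is "burned", two deadlines can be ordered by signed 32-bit subtraction alone, with no read of the current cycle counter, as long as they lie less than 2^31 cycles apart (d1 and d2 are assumed inputs):

    uint32_t d1 = thread1_deadline_cyc;
    uint32_t d2 = thread2_deadline_cyc;

    /* negative => d1 expires first; valid while |d1 - d2| < 2^31 cycles */
    int32_t delta = (int32_t)(d1 - d2);
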
Andy Ross
e956639dd6 kernel: Remove CONFIG_LEGACY_TIMEOUT_API
This was a fallback for an API change several versions ago.  It's time
for it to go.

Fixes: #30893

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-01-14 21:33:16 -05:00
Marcin Niestroj
11cb1cf336 kernel: sched: fix legacy timeout calculation in z_tick_sleep
Ticks should be assigned directly to the timeout value in the case of
CONFIG_LEGACY_TIMEOUT_API=y, just as they were before the referenced patch.

Fixes: 7a815d5d99 ("kernel: sched: Use k_ticks_t in z_tick_sleep")
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
2020-12-18 14:03:25 -05:00
Anas Nashif
87ddddae52 Revert "kernel: fix usage of KERNEL_COHERENCE macro"
This reverts commit 67c5e6b0c0.

This is causing build issues on some platforms. Revert for now.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-08 14:27:27 -05:00
Anas Nashif
67c5e6b0c0 kernel: fix usage of KERNEL_COHERENCE macro
Add missing CONFIG_ to KERNEL_COHERENCE usage in code.

Fixes #30380

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-08 09:30:02 -05:00
Flavio Ceolin
9a16097fd8 kernel: sched: Change variable name in z_tick_sleep
Change a variable name to avoid confusion between time and ticks.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-12-04 23:05:57 -05:00
Flavio Ceolin
7a815d5d99 kernel: sched: Use k_ticks_t in z_tick_sleep
z_tick_sleep was using int32_t, which could cause an overflow
when converting from k_ticks_t.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-12-04 23:05:57 -05:00
Krzysztof Chruscinski
3ed8083dc1 kernel: Cleanup logger setup in kernel files
Most kernel files were declaring the os log module without providing
a log level. Because of that, the default log level was used instead of
CONFIG_KERNEL_LOG_LEVEL.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2020-11-27 09:56:34 -05:00
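The fix is to pass an explicit level when declaring the module; an illustrative line as it would appear in a kernel source file (LOG_MODULE_REGISTER is used instead in the one file that owns the module):

    #include <logging/log.h>
    LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
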
Daniel Leung
11e6b43090 tracing: roll thread switch in/out into thread stats functions
Since tracing of threads being switched in/out has the same
instrumentation points, we can roll the tracing function calls into
the thread stats gathering functions. This avoids duplicating code
just to call another function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Andrew Boie
933b420235 kernel: add context pointer to thread->fn_abort
For compatibility layers like CMSIS where thread objects
are drawn from a pool, provide a context pointer to the
exited thread object so it may be freed.

This is somewhat obscure and has no supporting APIs or
overview documentation and should be considered a private
kernel feature. Applications should really be using
k_thread_join() instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-22 23:32:37 -04:00
Anas Nashif
d2c71796af kernel: document k_sleep with K_FOREVER
When calling k_sleep with K_FOREVER as the timeout value, we consider
this a suspend call.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-22 07:00:15 -04:00
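In effect, the two calls below now have comparable documented behavior (the thread stays parked until something explicitly brings it back):

    k_sleep(K_FOREVER);                   /* documented as a suspend */
    k_thread_suspend(k_current_get());    /* comparable effect */
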
Anas Nashif
081605ee23 kernel: do not queue a thread that is already queued
Do not add a thread to the run queue if it was already added.

Fixes #29244

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-22 07:00:15 -04:00
Anas Nashif
bf69afcdae kernel: only resume suspended threads
Do not try to resume a thread that was not suspended.

Fixes #28694

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-22 07:00:15 -04:00
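A hedged sketch of the guard this describes (the exact placement in k_thread_resume() is illustrative):

    if ((thread->base.thread_state & _THREAD_SUSPENDED) == 0U) {
        return;    /* not suspended: nothing to resume */
    }
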
Andy Ross
f6d32ab0a4 kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches.  Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.

Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory.  Where this is
available, place all writable data sections by default into the
coherent region.  An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.

Stack memory will be incoherent by default, as by definition it is
local to its current thread.  This requires special cache management
on context switch, so an arch API has been added for that.

Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent.  We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks.  In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-21 06:38:53 -04:00
Andrew Boie
b5a71f74a8 userspace: remove threads from domain on abort
When threads exited we were leaving dangling references to
them in the domain's mem_domain_q.

z_thread_single_abort() now calls into the memory domain
code via z_mem_domain_exit_thread() to take it off.

The thread setup code now invokes z_mem_domain_init_thread(),
avoiding extra checks in k_mem_domain_add_thread(), since we know
the object isn't currently a member of a domain.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
Andrew Boie
e0ca403f4c kernel: add assert for mis-used k_thread_create()
k_thread_create() works as expected on either uninitialized memory
or threads that have completely exited.

However, horrible and difficult to comprehend things can happen if a
thread object is already being used by the kernel and
k_thread_create() is called on it.

Historically this has been a problem with test cases trying to be
parsimonious with thread objects and not properly cleaning up
after themselves. Add an assertion for this, which should catch
both the illegal creation of a thread that is already active and
threads racing to create the same thread object.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-30 14:11:59 -04:00
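Illustrative shape of such an assertion (not the exact condition added); the point is to reject a thread object that the kernel still considers live:

    __ASSERT((new_thread->base.thread_state &
              (_THREAD_QUEUED | _THREAD_PENDING)) == 0U,
             "k_thread_create() called on a thread object already in use");
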
Andrew Boie
f5a7e1a108 kernel: handle thread self-aborts on idle thread
Fixes races where threads on another CPU are joining the
exiting thread, since it could still be running when
the joiners wake up on a different CPU.

Fixes problems where the thread object is still being
used by the kernel when the fn_abort() function is called,
preventing the thread object from being recycled or
freed back to a slab pool.

Fixes a race where a thread is aborted from one CPU while
it self-aborts on another CPU, that was currently worked
around with a busy-wait.

Precedent for doing this comes from FreeRTOS, which also
performs final thread cleanup in the idle thread.

Some logic in z_thread_single_abort() has been rearranged such that
once we release sched_spinlock, the thread object pointer is never
dereferenced by the kernel again; join waiters or fn_abort() logic
may free it immediately.

An assertion has been added to z_thread_single_abort() to ensure it
never gets called with thread == _current outside of an ISR.

Some logic has been added to ensure z_thread_single_abort()'s tasks
don't run more than once.

Fixes: #26486
Related to: #23063 #23062

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-30 14:11:59 -04:00
Watson Zeng
37f75d2d1f kernel: sched: bug fix for trace and monitor
sys_trace_thread_abort and z_thread_monitor_exit in
z_thread_single_abort also need to be protected by sched_spinlock;
otherwise, if an interrupt is pending when the spinlock is released,
it will trigger a reschedule on interrupt exit and those trace and
monitor functions will never be reached.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-09-17 09:30:22 +02:00