Commit Graph

282 Commits

Nicolas Pitre
524ac8a29a sched: don't call k_sched_time_slice_set() during early init
All we really want here is to set default parameters. However,
k_sched_time_slice_set() also calls z_reset_time_slice(_current)
which expects `_current` to be fully initialized.

Simply initialize `slice_ticks` and `slice_max_prio` with default values
directly. Unfortunately the compiler isn't smart enough to expand
k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE) to a constant expression
at build time, so we must do the conversion by hand (and it shouldn't
overflow due to the nature of the value).

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-04-03 19:16:48 -04:00
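
A minimal sketch of the resulting default initialization, assuming
Zephyr's DIV_ROUND_UP() and MSEC_PER_SEC helpers (the exact expression
in the tree may differ):

    #ifdef CONFIG_TIMESLICING
    /* Hand-rolled ms->ticks ceiling conversion; a constant expression,
     * so no scheduler call and no dependency on `_current` at early init.
     */
    static int slice_ticks = DIV_ROUND_UP(CONFIG_TIMESLICE_SIZE *
                                          CONFIG_SYS_CLOCK_TICKS_PER_SEC,
                                          MSEC_PER_SEC);
    static int slice_max_prio = CONFIG_TIMESLICE_PRIORITY;
    #endif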
Nicolas Pitre
405611dc9e sched: remove restriction on single-tick time slices
Slice expirations are now based on the same timeout mechanism as
regular timers which have been recently fixed and proven to work with
single-tick periods.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-04-03 19:16:48 -04:00
Nicolas Pitre
907eea07f2 z_sched_init: don't use arch_num_cpus()
The reason for arch_num_cpus() is to be able to dynamically adapt to
the actual number of available CPUs at run time.

In the z_sched_init() case, it is not the number of active CPUs that
we need but rather the total number of potential CPUs, and that is
represented by CONFIG_MP_MAX_NUM_CPUS, not arch_num_cpus().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-04-03 12:36:30 -04:00
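
A sketch of what this means in z_sched_init(), assuming the per-CPU
ready queues used with CONFIG_SCHED_CPU_MASK_PIN_ONLY:

    void z_sched_init(void)
    {
    #ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
            /* Initialize state for every potential CPU, not just the
             * CPUs that happen to be online right now.
             */
            for (int i = 0; i < CONFIG_MP_MAX_NUM_CPUS; i++) {
                    init_ready_q(&_kernel.cpus[i].ready_q);
            }
    #else
            init_ready_q(&_kernel.ready_q);
    #endif
    }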
Nicolas Pitre
5879d2d6c1 sched: minor time slicing cleanup
Make sliceable() the actual condition for a sliceable thread. Avoid
creating a slice timeout for non-sliceable threads. Always reset
slice_expired even if the next thread is not sliceable. Fold
slice_expired_locked() into z_time_slice() to avoid the hidden
unlock/lock. Rename `curr` to `thread`, as it is not necessarily
the current thread (yet) at that point. Make variables static.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-16 09:16:59 +01:00
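
An approximation of the resulting predicate (illustrative only; the
precise set of conditions lives in sched.c):

    static bool sliceable(struct k_thread *thread)
    {
            return is_preempt(thread)           /* preemptible...      */
                   && slice_time(thread) != 0   /* ...with a slice set */
                   && !z_is_thread_prevented_from_running(thread)
                   && !z_is_idle_thread_object(thread);
    }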
Aastha Grover
877fc3d508 kernel: events: fix waitq timeout race condition
Updates events to prevent a timeout from corrupting the list of
threads that need to be woken up.

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2023-03-09 09:22:21 +01:00
Aastha Grover
5537776898 kernel: Add z_sched_wake_thread API
This API wakes up a given thread and is also called from
z_thread_timeout().

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2023-03-09 09:22:21 +01:00
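
Roughly, z_thread_timeout() then becomes a thin wrapper around the new
call (sketch, assuming the second argument flags a timeout-driven wake):

    void z_thread_timeout(struct _timeout *timeout)
    {
            struct k_thread *thread = CONTAINER_OF(timeout, struct k_thread,
                                                   base.timeout);

            z_sched_wake_thread(thread, true);  /* woken by timeout expiry */
    }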
Andy Ross
c5c3ad95de kernel/sched: Close hole with cross-core timeslice expirations
Moving timeslice events to timeouts isn't quite enough on SMP, as it's
still possible for systems that don't broadcast their timer interrupts
to end up handling an expiration for a foreign CPU.  There, we need an
IPI, and a symmetric call to z_time_slice() (which is idempotent and
fast) in the IPI ISR.

Signed-off-by: Andy Ross <andyross@google.com>
2023-03-09 09:21:12 +01:00
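
A sketch of the symmetric call, assuming the slice check is folded into
the existing z_sched_ipi() handler:

    void z_sched_ipi(void)
    {
            /* ... existing IPI bookkeeping ... */
    #ifdef CONFIG_TIMESLICING
            /* Idempotent and fast, so safe to run on every IPI, even
             * if this CPU's slice did not actually expire.
             */
            if (sliceable(_current)) {
                    z_time_slice();
            }
    #endif
    }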
Andy Ross
f3afd5a4c9 kernel/sched: Use kernel timeouts for timeslice expirations
Rework the fragile and ad-hoc computation of timeslice expirations
into per-CPU struct _timeout objects with regular callbacks.  The
expiration callbacks themselves simply set a per-cpu flag (they might
run on any CPU), which gets checked at the end of the timer ISR on
every CPU.

This simplifies logic and removes a bunch of code.  It also fixes at
least three bugs:

1. As @npitre discovered: On SMP, the number of ticks announced on any
given CPU is going to be a subset of all expired ticks.  This broke
the accounting of timeslice ticks, and effectively meant that
timeslicing only worked on SMP on systems where one CPU could hog all
the announcements, and only on that CPU.

2. The bootstrap path to arm the timer driver after setting the first
timeout in an empty list couldn't take into account
sys_clock_elapsed() ticks, as it didn't know whether it was being
called underneath an existing announce loop.  Now this code is no
longer responsible for knowing anything about time slicing at all.

3. Also on SMP, there was a case where two CPUs timeslicing
simultaneously could stomp on each other's timeouts in
z_set_timeout_expiry(), as neither had a way of knowing what the
other's state was.  CPUs could miss their own expiration and have to
wait for the slice expiration on the other CPU.  Now, timeouts are
global objects with simple expiration times, and there's no need for
that function at all.

Signed-off-by: Andy Ross <andyross@google.com>
2023-03-09 09:21:12 +01:00
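
A sketch of the scheme (names illustrative): one struct _timeout and one
flag per CPU, with the flag consumed at the end of each CPU's timer ISR:

    static struct _timeout slice_timeouts[CONFIG_MP_MAX_NUM_CPUS];
    static bool slice_expired[CONFIG_MP_MAX_NUM_CPUS];

    static void slice_timeout(struct _timeout *timeout)
    {
            int cpu = timeout - slice_timeouts;

            /* May fire on any CPU: just flag the owning CPU, which
             * notices the flag at the end of its own timer ISR.
             */
            slice_expired[cpu] = true;
    }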
Peter Mitsis
31dfd84fd5 kernel: pipes: Change method of unpending waiters
By the time the working list of readers/writers is processed, it is
possible that the waiting reader/writer being processed has timed out
and is no longer on the wait queue. As such, we cannot blindly wake
the next thread, as that next thread might not be the thread we had
just been processing.

To address this, the calls to z_sched_wake() have been replaced
with z_unpend_thread() and z_ready_thread() so that a specific
thread can be safely targeted for waking.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-11 06:45:13 +09:00
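
The targeted wake-up amounts to (sketch):

    /* `thread` is the exact waiter just processed; it may have already
     * timed out, so don't wake whoever happens to head the wait queue.
     */
    z_unpend_thread(thread);  /* remove from wait queue, abort timeout */
    z_ready_thread(thread);   /* then make that specific thread runnable */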
Peter Mitsis
ca58339e16 kernel: Add routine to walk a wait queue
Adds a routine to safely walk a specified wait queue and invoke a
custom callback function on each waiting thread.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-11 06:45:13 +09:00
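
A usage sketch, assuming a callback signature along the lines of
int (*func)(struct k_thread *thread, void *data) where a non-zero
return stops the walk:

    static int count_waiter(struct k_thread *thread, void *data)
    {
            int *count = data;

            (*count)++;
            return 0;   /* non-zero would stop the walk early */
    }

    /* at a call site: */
    int waiting = 0;

    z_sched_waitq_walk(&pipe->wait_q.readers, count_waiter, &waiting);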
Flavio Ceolin
2757e711e1 kernel: sched: Remove possible dead code
Put z_priq_dumb_add inside an #ifdef guard to avoid dead code.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-01-09 12:07:28 -05:00
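
That is, something like the following (guard condition illustrative):

    /* z_priq_dumb_add() is only referenced by the "dumb" scheduler and
     * wait queue backends, so only compile it when one is selected.
     */
    #if defined(CONFIG_SCHED_DUMB) || defined(CONFIG_WAITQ_DUMB)
    static void z_priq_dumb_add(sys_dlist_t *pq, struct k_thread *thread)
    {
            /* ... priority-ordered insert into the dlist ... */
    }
    #endif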
Gerard Marull-Paretas
737d799660 kernel: sched: fix ticks logging
- Logging supports printing 64-bit values now. Cast to unsigned long and
  use %lu in all cases.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-29 09:52:04 +01:00
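
For example (hypothetical log statement):

    /* k_ticks_t may be 64 bits wide; cast so %lu is always correct. */
    LOG_INF("slice set to %lu ticks", (unsigned long)slice_ticks);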
Kumar Gala
4f458ba8de kernel: Convert away from CONFIG_MP_NUM_CPUS
Move runtime code to use arch_num_cpus() instead of CONFIG_MP_NUM_CPUS
and use CONFIG_MP_MAX_NUM_CPUS for ifdef and BUILD_ASSERT macros.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-31 17:09:14 +01:00
Kumar Gala
a1195ae39b smp: Move for loops to use arch_num_cpus instead of CONFIG_MP_NUM_CPUS
Change for loops of the form:

for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
   ...

to

unsigned int num_cpus = arch_num_cpus();
for (i = 0; i < num_cpus; i++)
   ...

We do the call outside of the for loop so that it only happens once,
rather than on every iteration.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-21 13:14:58 +02:00
Andy Ross
c32f376e99 kernel/sched: Fix SMP race on pend
For historical reasons[1], suspending threads would release the
scheduler lock between pend() (which places the current thread onto a
wait queue) and z_swap() (which effects the context switch).  This
process happens with the caller's lock held, so local interrupts are
masked.  But on SMP this opens a tiny race where another CPU could
grab the pended thread and switch to it while we were still executing
on its stack!

Fix this by elevating the "lock swap" code that already exists in the
(portable/switch-based) z_swap() code one level so that it happens in
z_pend_curr() also.  Now we hold the scheduler lock between pend and
the final context switch.

Note that this technique can't work for the older z_swap_irqlock()
implementation, which exists to vestigially support a few bits of arch
code (mostly direct interrupts) that don't work on SMP anyway.
Address with an assert to prevent future misuse.

[1] z_swap() is a historical API implemented in per-arch assembly for
    older architectures (like ARM32!).  It was designed to be called
    with what at the time was a global IRQ lock, so it doesn't
    understand the idea of a separate scheduler lock.  When we finally
    get all architectures on arch_switch() this design can be cleaned up
    quite a bit.

Signed-off-by: Andy Ross <andyross@google.com>
2022-10-11 12:16:38 -04:00
Kai Vehmanen
e81ccef613 kernel/sched: fix condition for CPU mask set
When building with CONFIG_SCHED_CPU_MASK_PIN_ONLY=y, the CPU mask
is fixed and cannot be changed while a thread is running.

The current code asserts if the thread state is anything but PREPARED.

We do however have interfaces like k_work_queue_start() where a thread
is started as part of the queue start. To allow the user to set the
pinned CPU for the work queue thread, it needs to be possible to suspend
the thread, set the mask, and then call k_thread_resume(). This seems to
be a valid sequence, so relax the assert check to reflect this.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
2022-09-09 16:13:35 -04:00
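
The sequence this enables, sketched for a work queue thread pinned
after start (PRIORITY and CPU_ID are placeholders):

    k_work_queue_start(&queue, stack, K_THREAD_STACK_SIZEOF(stack),
                       PRIORITY, NULL);

    /* The thread is already started, so pin it via suspend/mask/resume. */
    k_thread_suspend(&queue.thread);
    k_thread_cpu_mask_clear(&queue.thread);
    k_thread_cpu_mask_enable(&queue.thread, CPU_ID);
    k_thread_resume(&queue.thread);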
Simon Hein
02cfbfea51 kernel: comply with coding guidelines MISRA C:2012 Rule 14.4
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)

Use `bool' instead of `int' to represent Boolean values.
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.

This commit is a subset of the original commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a

Signed-off-by: Simon Hein <SHein@baumer.com>
2022-07-21 06:16:16 -04:00
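
Illustrative before/after pairs (poll() is a placeholder):

    /* before: implicit integer test, while (0) */
    if (count) {
            count--;
    }
    do {
            poll();
    } while (0);

    /* after: essentially Boolean controlling expressions */
    if (count != 0) {
            count--;
    }
    do {
            poll();
    } while (false);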
Andy Ross
fb613594c7 kernel/sched: Panic on aborting essential threads
Documentation specifies that aborting/terminating/exiting essential
threads is a system panic condition, but we didn't actually implement
that and allowed it just as for any other thread. At least one app wants
to exploit this documented behavior as a "watchdog" kind of condition,
and that seems reasonable.  Do what we say we're supposed to do.

This also includes a small fix to a test, which seemed like it was
written to exercise exactly this condition.  Except that it failed to
detect whether or not a system fatal error was actually signaled and
was (incorrectly) indicating "success".  Check that we actually enter
the handler.

Fixes #45545

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-20 12:34:30 +02:00
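
Conceptually, the abort path gains a check along these lines (sketch;
the exact error path may differ):

    /* An essential thread going away is a fatal system error. */
    if ((thread->base.user_options & K_ESSENTIAL) != 0U) {
            k_panic();
    }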
Gerard Marull-Paretas
cffefc818d kernel: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 09:26:20 +02:00
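
For example:

    /* before */
    #include <kernel.h>
    #include <sys/util.h>

    /* after */
    #include <zephyr/kernel.h>
    #include <zephyr/sys/util.h>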
Jordan Yates
1ef647f396 kernel: add k_can_yield helper function
Implements a function that application and driver code can use to check
whether it is valid to yield (or block) in the current context. This
check is required for functions that can feasibly be run from multiple
contexts. The primary intended use case is power management transition
functions, which can be run by application code explicitly or
automatically in the idle thread by system PM.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2022-05-06 11:33:10 +02:00
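
A usage sketch for such a transition function (SETTLE_TIME_MS is a
placeholder):

    static void enter_low_power(void)
    {
            if (k_can_yield()) {
                    /* Preemptible thread context: blocking is fine. */
                    k_msleep(SETTLE_TIME_MS);
            } else {
                    /* Idle thread or ISR: must not block, so spin. */
                    k_busy_wait(SETTLE_TIME_MS * USEC_PER_MSEC);
            }
    }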
Flavio Ceolin
551038e748 kernel: sched: Change cpu pin only for not executing threads
Do not allow changing the CPU which a thread is pinned when it is
already being executed. This allows further optimizations in some
platforms with incoherent memory since we can safely assume that the
thread will run in the same CPU and avoid invalidate / flush the
cache during context switches.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-05-04 13:46:48 -04:00
Andy Ross
b4e9ef0691 kernel/sched: Defer IPI sending to schedule points
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.

This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI.  Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.

Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap).  This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-02 10:23:13 -05:00
Andy Ross
3267cd327e kernel/sched: Refactor IPI signaling
Minor cleanup: we had a bunch of duplicated #if logic to send IPIs;
put it all in one place.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-02 10:23:13 -05:00
Anas Nashif
c9d0248867 kernel: introduce convenience API to pin thread to a CPU
Add an API that clears cpu mask from a thread and sets it to a specific
CPU.

This is the equivalent of:

        k_thread_cpu_mask_clear(&thread);
        k_thread_cpu_mask_enable(&thread, cpu_idx);

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-04-19 13:05:09 -04:00
Nicolas Pitre
c9e3e0d956 sched: formalize the passing of NULL to z_get_next_switch_handle()
This is an attempt at formally distinguishing and supporting the case
described in issue #40795, where an architecture doesn't preserve/restore
the complete thread state upon entering/exiting interrupt exception state.

This is mainly about promoting the current behavior from the accepted
workaround to a formal API specification. This workaround is currently
used on ARM64 but RISC-V requires it too.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-18 13:32:49 -04:00
Andy Ross
3e696896bf kernel: Add "per thread" timeslice mechanism
Zephyr's timeslice implementation has always been somewhat primitive.
You get a global timeslice that applies broadly to the whole bottom of
the priority space, with no ability (beyond that one priority
threshold) to tune it to work on certain threads, etc...

This adds an (optionally configurable) API that allows timeslicing to
be controlled on a per-thread basis: any thread at any priority can be
set to timeslice, for a configurable per-thread slice time, and at the
end of its slice a callback can be provided that can take action.
This allows the application to implement things like responsiveness
heuristics, "fair" scheduling algorithms, etc... without requiring
that facility in the core kernel.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-03-09 13:49:44 -05:00
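
A usage sketch of the per-thread API (assuming CONFIG_TIMESLICE_PER_THREAD=y
and a callback of type k_thread_timeslice_fn_t):

    static void slice_expired(struct k_thread *thread, void *data)
    {
            /* e.g. rotate a fairness queue or demote the thread */
    }

    /* Give `worker` a 2 ms slice and a callback at each expiration. */
    k_thread_time_slice_set(&worker, k_ms_to_ticks_ceil32(2),
                            slice_expired, NULL);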
Peter Mitsis
82c3d531a6 kernel: move thread usage routines to own file
Moves the CONFIG_SCHED_THREAD_USAGE block of code out of sched.c
into its own file. Not only do they employ their own private
spin lock, but it is expected that additional usage routines will be
added in the future.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Jeremy Bettis
fb1c36f7fd build: hide z_priq_mq_add/z_priq_mq_remove
Move z_priq_mq_add and z_priq_mq_remove into #ifdef CONFIG_SCHED_MULTIQ
block, because they are only used with that config.

Signed-off-by: Jeremy Bettis <jbettis@google.com>
2022-01-04 11:52:10 -05:00
Peter Mitsis
f8b76f3b03 kernel: add 'static' keyword to select routines
Applies the 'static' keyword to the following inlined routines:
    z_priq_dumb_add()
    z_priq_mq_add()
    z_priq_mq_remove()
As those routines are only used in one place, they no longer have
externally visible declarations.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2021-12-13 17:21:58 -05:00
Jeremy Bettis
1e0a36c655 build: Remove unused functions
Removed unused functions, or moved them inside #ifdefs.

This allows using -Werror=unused-function on the clang compiler. Tested
by building the ChromeOS EC on all supported platforms with
-Werror=unused-function.

Signed-off-by: Jeremy Bettis <jbettis@google.com>
2021-12-13 15:49:08 -05:00
Andy Ross
410f911018 kernel/sched: Separate idle from app thread stats in THREAD_USAGE
It turns out that we have a sample (though not a test) that really
does want to use "k_thread_runtime_stats_all_get()" to measure system
uptime.

Instead of breaking this needlessly, separate the accounting for idle
and non-idle threads.  The legacy API can report their sum, and the
more useful value is available via the kernel struct for future
analysis.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
52351458f4 kernel/sched: Add timing.h support to thread_usage
The runtime stats feature has always supported this, so use the same
kconfig to indirect the timing source in the same way.

(Personally I'm not a fan of the "timing" API, which really doesn't do
anything that the existing core "cycles" API does not except add a
bunch of code due to the separate implementation of frequency
management and conversion routines.  It comes from an era where
"cycles" were fixed to a MHz frequency clock on platforms like x86 yet
we had benchmarks that wanted to use the TSC.  Those days are behind
us and "cycles" can be fast everywhere.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
b62d6e17a4 kernel/sched: Add an optional "all" counter for thread_usage
Tally the runtime of all non-idle threads.  Make it optional via
kconfig to avoid overhead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
4ae3250301 sched: Hook SCHED_USAGE from existing tracing hook
On older architectures, we don't have the
architecture-independent/scheduler-internal hooks (which require
USE_SWITCH) but there is a hook shared by the tracing layer we can use.

This is sort of a layering violation (stat tracking is a core feature,
tracing is supposed to be optional), but simple and lightweight.  And
eventually it will go away as these architectures migrate.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
40d12c142d kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:

* Correctly synchronized: you can't race against a running thread
  (potentially on another CPU!) while querying its usage.

* Realtime results: you get the right answer always, up to timer
  precision, even if a thread has been running for a while
  uninterrupted and hasn't updated its total.

* Portable, no need for per-architecture code at all for the simple
  case. (It leverages the USE_SWITCH layer to do this, so won't work
  on older architectures)

* Faster/smaller: minimizes use of 64 bit math; lower overhead in
  thread struct (keeps the scratch "started" time in the CPU struct
  instead).  One 64 bit counter per thread and a 32 bit scratch
  register in the CPU struct.

* Standalone.  It's a core (but optional) scheduler feature, no
  dependence on para-kernel configuration like the tracing
  infrastructure.

* More precise: allows architectures to optionally call a trivial
  zero-argument/no-result cdecl function out of interrupt entry to
  avoid accounting for ISR runtime in thread totals.  No configuration
  needed here, if it's called then you get proper ISR accounting, and
  if not you don't.

For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
b11e796c36 kernel/sched: Add CONFIG_CPU_MASK_PIN_ONLY
Some SMP applications have threading designs where every thread
created is always assigned to a specific CPU, and never want threads
scheduled symmetrically across CPUs under any circumstance.

In this situation, it's possible to optimize the run queue design a
bit to put a separate queue in each CPU struct instead of having a
single global one.  This is probably good for a few cycles per
scheduling event (maybe a bit more on architectures where cache
locality can be exploited) in circumstances where there is more than
one runnable thread.  It's a mild optimization, but a basically simple
one.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross
b155d06712 kernel/sched: Factor out ready_q initialization
Split "init_ready_q()" into a separate function that operates on the
queue pointer and not the global kernel object.  Pure refactoring.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross
387fdd2e53 kernel/sched: Refactor/simplify run queue accessors
Similar to the previous patch, the various _priq_run_*() functions are
always passed a first argument that is the singleton system run queue
(this is because the same backend functions are used by wait queues).

Refactor into a simpler API that places the access to the run queue in
just a single spot.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross
c230fb3580 kernel/sched: Simplify de/queue_thread()
Pure refactoring.  For historical reasons these two functions took a
first argument (a pointer to the run queue) that was always the same.
Eliminate it.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Chen Peng1
0f63d1135c cmsis_rtos_v1: fix thread instance management.
Add a bitarray to struct osThreadDef_t to indicate whether each thread
slot is in use. We can then find the first available slot by searching
this array when creating a new thread, and mark a slot free again when
its thread is terminated.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-09-09 12:01:06 -04:00
Andy Ross
0d763e0a10 cmake/compiler/xcc: sched: Support XCC inlining semantics
Cadence XCC is based off of a very old 4.2 gcc compiler, which didn't
perfectly support C99 "inline" semantics with respect to
cross-translation-unit inline linkage (which Zephyr does not use, our
inlines are static only) and declaration order.

Fix the one spot where we were calling an inline before its
ALWAYS_INLINE definition, and add a flag to suppress the warning so
CI runs trying to build with XCC and -Werror don't flip out.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-08 09:28:31 -04:00
Andrew Boie
f07df42d49 kernel: make k_current_get() work without syscall
We cache the current thread ID in a thread-local variable
at thread entry, and have k_current_get() return that,
eliminating system call overhead for this API.

DL: changed _current to use z_current_get() as it is
    being used during boot where TLS is not available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-30 20:16:47 -04:00
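
Conceptually, the TLS-backed fast path looks like (sketch):

    /* Written once at thread entry; afterwards k_current_get() is a
     * plain thread-local read with no system call.
     */
    __thread k_tid_t z_tls_current;

    static inline k_tid_t k_current_get(void)
    {
            return z_tls_current;
    }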
Anas Nashif
8b3f36c656 kernel: move internal headers into include/kernel
Move 2 headers that are internal to the kernel into include/kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-06-16 20:38:55 -04:00
Maksim Masalski
78ba2ec830 coding guidelines: use prototype form with named parameters
Function types shall be in prototype form with named parameters

Found as a coding guideline violation (MISRA R8.2) by static
coding scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-04 16:20:06 -05:00
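
For example (illustrative typedef):

    /* violating MISRA R8.2: parameter is unnamed */
    typedef void (*timeout_fn_t)(struct _timeout *);

    /* compliant: prototype form with a named parameter */
    typedef void (*timeout_fn_t)(struct _timeout *timeout);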
Lauren Murphy
4c85b4606b kernel: k_sleep: fix return value for absolute timeout
Fixes calculation of remaining ticks returned from z_tick_sleep
so that it takes absolute timeouts into account.

Fixes #32506

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-05-26 18:11:52 -05:00
Maksim Masalski
970820e92d sched: create unique function name
In file include/kernel/thread.h, "struct _thread_base" has a member
called "_wait_q_t *pended_on".
At the same time, file kernel/sched.c has a function called
"static _wait_q_t *pended_on()".

The code scanning tool reports a violation (MISRA R5.9), reuse of a
static object's identifier, because thread.h is included by sched.c.

Rename the function to avoid misreading in the future.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-25 19:06:21 -04:00
Andy Ross
851d14afc8 kernel/sched: Remove "cooperative scheduling only" special cases
The scheduler has historically had an API where an application can
inform the kernel that it will never create a thread that can be
preempted, and the kernel and architecture layer would use that as an
optimization hint to eliminate some code paths.

Those optimizations have dwindled to almost nothing at this point, and
they're now objectively a smaller impact than the special casing that
was required to handle the idle thread (which, obviously, must always
be preemptible).

Fix this by eliminating the idea of "cooperative only" and ensuring
that there will always be at least one preemptible priority with value
>=0.  CONFIG_NUM_PREEMPT_PRIORITIES now specifies the number of
user-accessible priorities other than the idle thread.

The only remaining workaround is that some older architectures (and
also SPARC) use the CONFIG_PREEMPT_ENABLED=n state as a hint to skip
thread switching on interrupt exit.  So detect exactly those platforms
and implement a minimal workaround in the idle loop (basically "just
call swap()") instead, with a big explanation.

Note that this also fixes a bug in one of the philosophers samples,
where it would ask for 6 cooperative priorities but then use values -7
through -2.  It was assuming the kernel would magically create a
cooperative priority for its idle thread, which wasn't correct even
before.

Fixes #34584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-24 23:38:16 -04:00
Torbjörn Leksell
f17144349b Tracing: Thread tracing
Add thread tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Anas Nashif
6df4405cca doc: fix typos
Fix various typos in the docs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-30 16:03:08 -04:00
Krzysztof Chruscinski
7dcff6ecfe kernel: Move _kernel from sched to init
The _kernel struct can be used even when multithreading is disabled,
in which case sched.c may not be compiled.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00