This migrates all the current iterable section usages to the external
API, dropping the "Z_" prefix:
Z_ITERABLE_SECTION_ROM
Z_ITERABLE_SECTION_ROM_GC_ALLOWED
Z_ITERABLE_SECTION_RAM
Z_ITERABLE_SECTION_RAM_GC_ALLOWED
Z_STRUCT_SECTION_ITERABLE
Z_STRUCT_SECTION_ITERABLE_ALTERNATE
Z_STRUCT_SECTION_FOREACH
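As a minimal sketch of what the migration looks like at a call site
(struct my_entry is hypothetical, and the exact headers providing the
external macros vary between Zephyr versions):

  struct my_entry {          /* hypothetical, for illustration only */
          int value;
  };

  /* Before: Z_STRUCT_SECTION_ITERABLE(my_entry, entry0) = { ... };
   * After, with the "Z_" prefix dropped: */
  STRUCT_SECTION_ITERABLE(my_entry, entry0) = { .value = 1 };

  void walk_entries(void)
  {
          /* Before: Z_STRUCT_SECTION_FOREACH(my_entry, e) { ... } */
          STRUCT_SECTION_FOREACH(my_entry, e) {
                  printk("value: %d\n", e->value);
          }
  }

On the linker script side, ITERABLE_SECTION_ROM()/ITERABLE_SECTION_RAM()
(or their _GC_ALLOWED variants) create the section being iterated.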
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
Currently P4WQ supports queues, each with an arbitrary number of
user-provided worker threads. These threads are started
immediately upon initialisation.
This patch adds support for three more thread implementation
options (see the usage sketch after the list):
1. queue per thread. A new K_P4WQ_ARRAY_DEFINE() macro initialises
   equal-sized arrays of queues and threads, where each thread is
   uniquely assigned to its respective queue.
2. delayed start. With this option threads aren't started
   immediately upon queue initialisation. Instead, a new function
   k_p4wq_enable_static_thread() has to be called to enable each
   thread individually.
3. queue per CPU. With this option the user can assign CPU masks
   to threads when calling k_p4wq_enable_static_thread(); without
   the option, the cpu_mask parameter to that function is ignored.
   Currently enabling this option implies option 2 above, and since
   queues per CPU can so far only be set up with
   K_P4WQ_ARRAY_DEFINE(), it implies option 1 as well; both
   restrictions can be relaxed in the future if required.
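A minimal usage sketch combining the three options (assuming flag
names K_P4WQ_QUEUE_PER_THREAD, K_P4WQ_DELAYED_START and
K_P4WQ_USER_CPU_MASK; the stack size, CPU count symbol and the
surrounding function are illustrative):

  #include <zephyr/kernel.h>  /* header paths vary by Zephyr version */
  #include <zephyr/sys/p4wq.h>

  /* One queue and one parked (not yet started) thread per CPU. */
  K_P4WQ_ARRAY_DEFINE(per_cpu_wq, CONFIG_MP_NUM_CPUS, 2048,
                      K_P4WQ_QUEUE_PER_THREAD | K_P4WQ_DELAYED_START |
                      K_P4WQ_USER_CPU_MASK);

  /* Later, per CPU: pin the statically defined worker thread for
   * per_cpu_wq[cpu] to that CPU and let it run.  Without
   * K_P4WQ_USER_CPU_MASK the cpu_mask argument would be ignored. */
  void start_worker(struct k_thread *thread, int cpu)
  {
          k_p4wq_enable_static_thread(&per_cpu_wq[cpu], thread,
                                      BIT(cpu));
  }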
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Work items in P4WQ currently belong to the user before submission
and after the handler returns; therefore, unless the handler
re-submits the item, accessing it in p4wq_loop() after that point
is racy. To fix this we redefine work item ownership: the item now
belongs to the P4WQ core until the user calls k_p4wq_wait(). If
the work item has its .sync flag set, that function sleeps until
the handler completes processing the work item or the timeout
expires. If .sync isn't set and the handler hasn't processed the
item yet, the function returns -EBUSY.
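A minimal sketch of the resulting contract (the handler, queue and
timeout are illustrative; .sync and k_p4wq_wait() are the field and
function described above):

  static void handler(struct k_p4wq_work *work)
  {
          /* the item is owned by the P4WQ core while we run */
  }

  static struct k_p4wq_work work = {
          .priority = 5,
          .handler  = handler,
          .sync     = true,   /* request completion signalling */
  };

  void run(struct k_p4wq *queue)
  {
          k_p4wq_submit(queue, &work);

          /* With .sync set this sleeps until the handler finishes
           * or the timeout expires; with .sync unset it would
           * instead return -EBUSY while the item is unprocessed. */
          int ret = k_p4wq_wait(&work, K_MSEC(100));

          (void)ret;   /* on success, 'work' is ours again */
  }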
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This adds a somewhat special-purpose IPC mechanism. It's intended
for applications which have a "work queue"-like architecture of
discrete callback items, but which need the ability to schedule
those items independently in separate threads across multiple CPUs.
So P4 Work items (a usage sketch follows the list):
1. Can run at any Zephyr scheduler priority and with any deadline
(this feature assumes EDF scheduling is enabled)
2. Can be submitted at any time and from any context, including being
resubmitted from within their own handler.
3. Will preempt any lower priority work as soon as they are runnable,
according to the standard rules of Zephyr priority scheduling.
4. Run from a pool of worker threads that can be allocated efficiently
(i.e. you need as many as the number of CPUs plus the number of
preempted in-progress items, but no more).
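A minimal usage sketch (K_P4WQ_DEFINE(), struct k_p4wq_work and
k_p4wq_submit() are the API added here; the pool size, stack size
and priority are arbitrary, and header paths vary by Zephyr
version):

  #include <zephyr/kernel.h>
  #include <zephyr/sys/p4wq.h>

  /* A pool of two worker threads sharing one queue. */
  K_P4WQ_DEFINE(pool, 2, 1024);

  static void item_handler(struct k_p4wq_work *work)
  {
          /* Runs in a pool thread at work->priority (and, with EDF
           * enabled, work->deadline); may re-submit 'work' itself. */
  }

  static struct k_p4wq_work item = {
          .priority = 3,
          .deadline = 0,
          .handler  = item_handler,
  };

  void submit(void)
  {
          /* Legal from any context, at any time. */
          k_p4wq_submit(&pool, &item);
  }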
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>