Provide a set of SQEs with varying delays, submitted in different
orders. The test validates that each delay completes after the right
amount of time (and that the CQEs are received in the right order).
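Roughly what the test does, as a sketch; the RTIO_DEFINE sizes and the
rtio_sqe_prep_delay() helper signature used here are assumptions:

  #include <zephyr/rtio/rtio.h>
  #include <zephyr/ztest.h>

  RTIO_DEFINE(r_delay, 4, 4);

  static void check_delay_ordering(void)
  {
      const uint32_t delays_ms[] = {30, 10, 20};

      for (size_t i = 0; i < ARRAY_SIZE(delays_ms); i++) {
          struct rtio_sqe *sqe = rtio_sqe_acquire(&r_delay);

          /* Assumed helper preparing a delay operation */
          rtio_sqe_prep_delay(sqe, K_MSEC(delays_ms[i]),
                              (void *)(uintptr_t)delays_ms[i]);
      }
      rtio_submit(&r_delay, ARRAY_SIZE(delays_ms));

      /* Shorter delays must complete first regardless of submission order */
      const uint32_t expected_ms[] = {10, 20, 30};

      for (size_t i = 0; i < ARRAY_SIZE(expected_ms); i++) {
          struct rtio_cqe *cqe = rtio_cqe_consume_block(&r_delay);

          zassert_equal((uint32_t)(uintptr_t)cqe->userdata, expected_ms[i]);
          rtio_cqe_release(&r_delay, cqe);
      }
  }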
Signed-off-by: Luis Ubieda <luisf@croxel.com>
Prior to this change a wrap of the completion count (without SUBMIT_SEM)
would result in looping indefinitely waiting for completions.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Add a test for the behaviour of `rtio_submit` with
`CONFIG_RTIO_SUBMIT_SEM=n` when the `cq_count` variable overflows.
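A sketch of what such a test might look like; pre-seeding the internal
counter assumes cq_count is an atomic_t that a whitebox test can poke
directly, which may not match the real test exactly:

  #include <zephyr/rtio/rtio.h>
  #include <zephyr/ztest.h>

  RTIO_DEFINE(r_wrap, 4, 4);

  ZTEST(rtio_wrap, test_cq_count_overflow)
  {
      /* Assumption: push the completion counter to just below its wrap
       * point so the completions for this submission overflow it.
       */
      atomic_set(&r_wrap.cq_count, UINT32_MAX - 1);

      struct rtio_sqe *sqe = rtio_sqe_acquire(&r_wrap);

      rtio_sqe_prep_nop(sqe, NULL, NULL);

      /* With CONFIG_RTIO_SUBMIT_SEM=n this used to spin forever once
       * cq_count wrapped while waiting for the completion.
       */
      zassert_equal(rtio_submit(&r_wrap, 1), 0);

      struct rtio_cqe *cqe = rtio_cqe_consume(&r_wrap);

      zassert_not_null(cqe);
      rtio_cqe_release(&r_wrap, cqe);
  }

  ZTEST_SUITE(rtio_wrap, NULL, NULL, NULL, NULL, NULL);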
Signed-off-by: Jordan Yates <jordan@embeint.com>
Now that `CONFIG_RTIO_SUBMIT_SEM` is the default option, testing the
alternate code path requires explicitly disabling it.
Signed-off-by: Jordan Yates <jordan@embeint.com>
Fix a few tests with wrongly named platforms. This section is now being
verified for correctness like all others.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Excluding qemu_xtensa_dc233c didn't pan out like I expected; the
platform needed to be listed as qemu_xtensa/dc233c/mmu to be excluded.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
This particular qemu platform seems to run into a strange TLB exception
with unclear cause. Disable the test suite on xtensa qemu for the time
being.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
It appears some strangeness around the consume semaphore causes a TLB
exception on qemu_xtensa_dc233. It's unclear *why* just yet, but this
fixes things in CI today.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Test that SQEs created with `rtio_sqe_prep_callback_no_cqe` run, but
don't create a completion queue event.
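A sketch of what is being checked, assuming
rtio_sqe_prep_callback_no_cqe() mirrors the existing
rtio_sqe_prep_callback() signature:

  #include <zephyr/rtio/rtio.h>
  #include <zephyr/ztest.h>

  RTIO_DEFINE(r_cb, 4, 4);

  static bool cb_ran;

  static void no_cqe_cb(struct rtio *r, const struct rtio_sqe *sqe, void *arg0)
  {
      cb_ran = true;
  }

  static void check_callback_without_cqe(void)
  {
      struct rtio_sqe *sqe = rtio_sqe_acquire(&r_cb);

      rtio_sqe_prep_callback_no_cqe(sqe, no_cqe_cb, NULL, NULL);
      rtio_submit(&r_cb, 0);

      zassert_true(cb_ran);                      /* callback executed */
      zassert_is_null(rtio_cqe_consume(&r_cb));  /* but no CQE produced */
  }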
Signed-off-by: Jordan Yates <jordan@embeint.com>
Moves the rtio_ prefixed lockfree mpsc/spsc queues to sys, alongside
the existing pbuf, ringbuf, and similar queue-like data structures.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Callbacks were a bit neglected in terms of test coverage, especially
when used in chains. It was clear from the code that chained callbacks
might not actually work, and callback ordering was hard to verify.
Test that callbacks chained to transactions work as expected.
The test iodev had built up some cruft over time and, once callback
chaining was fixed, showed a few bugs, so the test iodev now better
matches typical iodev implementations.
Cancellation testing now includes an added case for cancelling the
second submission in a chain prior to calling submit, noting that no
completion notifications should be given back for those.
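A rough illustration of the chained-callback-after-transaction case now
covered (the context r_test, iodev test_iodev, and buffers are
placeholders assumed to be defined elsewhere):

  /* wr + rd form a single transaction against one iodev; the chained
   * callback must only run (and complete) after the transaction does.
   */
  static uint8_t tx_buf[4], rx_buf[4];

  static void done_cb(struct rtio *r, const struct rtio_sqe *sqe, void *arg0)
  {
      /* verify ordering here */
  }

  static void chain_callback_to_transaction(void)
  {
      struct rtio_sqe *wr = rtio_sqe_acquire(&r_test);
      struct rtio_sqe *rd = rtio_sqe_acquire(&r_test);
      struct rtio_sqe *cb = rtio_sqe_acquire(&r_test);

      rtio_sqe_prep_write(wr, &test_iodev, RTIO_PRIO_NORM,
                          tx_buf, sizeof(tx_buf), NULL);
      wr->flags |= RTIO_SQE_TRANSACTION;   /* wr and rd are one transaction */

      rtio_sqe_prep_read(rd, &test_iodev, RTIO_PRIO_NORM,
                         rx_buf, sizeof(rx_buf), NULL);
      rd->flags |= RTIO_SQE_CHAINED;       /* cb waits on the transaction */

      rtio_sqe_prep_callback(cb, done_cb, NULL, NULL);

      rtio_submit(&r_test, 1);
  }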
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Two subtests cannot be run in the posix architecture
as they require userspace.
Today they are filtered by kconfig, which works but spends
time running cmake.
As native_posix is a default test platform it is better
to filter it altogether by arch, which saves quite a lot
of time.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
The current API rtio_mpsc_pop is not SMP-safe. When multiple threads on
SMP are racing to pop a node, they will likely break the queue. Set
CONFIG_MP_MAX_NUM_CPUS to 1 to temporarily fix the issue.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Rather than looking at the pool of completions when spinning, use an
atomic counter of the total completions ever done. The relative number
of completions being waited on by rtio_submit can then always be
computed correctly.
Prior to this a race was possible, and understood, as
rtio_cqe_consumable was a likely but not guaranteed count of completions.
Sure enough, on an SMP system the likely count ran ahead of the actually
available completions and the race was caught by the simple test case.
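The gist of the approach, sketched with assumed names rather than the
exact rtio internals: a monotonically increasing counter plus unsigned
subtraction of a snapshot stays correct even across wrap-around.

  #include <zephyr/kernel.h>

  /* Wait until 'wait_count' completions have happened since the counter
   * was snapshotted. Unsigned arithmetic makes the comparison wrap-safe,
   * unlike counting the entries sitting in the completion pool.
   */
  static void wait_for_completions(atomic_t *total_cq_count, uint32_t wait_count)
  {
      uint32_t start = (uint32_t)atomic_get(total_cq_count);

      while ((uint32_t)atomic_get(total_cq_count) - start < wait_count) {
          k_yield();
      }
  }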
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
The test case test_spsc_throughput reuses the spsc without resetting it.
Fix it by resetting the spsc before the test case starts.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
When writing a test suite, it's more common to want the 'before'
hook to run in privileged mode, even when the test is run in userspace.
Reconfigure ztest to first run the test thread callback in privileged
mode and only enter userspace after the test rule and suite's 'before'
functions have run.
Signed-off-by: Yuval Peress <peress@google.com>
- Add a new API `rtio_sqe_cancel` to attempt canceling a queued SQE
- Add a new syscall `rtio_sqe_copy_in_get_handles` which allows getting
  back the SQE handles generated by the copy_in operation so that they
  can be canceled (see the sketch below).
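A hedged sketch of how the two fit together (r_test is an assumed RTIO
context; exact signatures should be checked against rtio.h):

  struct rtio_sqe sqe;
  struct rtio_sqe *handle;

  rtio_sqe_prep_nop(&sqe, NULL, NULL);

  /* Copy the SQE into the context and get a handle to the queued copy */
  if (rtio_sqe_copy_in_get_handles(&r_test, &sqe, &handle, 1) == 0) {
      /* Best-effort cancellation before the work is started */
      rtio_sqe_cancel(handle);
  }

  rtio_submit(&r_test, 0);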
Signed-off-by: Yuval Peress <peress@google.com>
Twister now supports using YAML lists for all fields that were written
as space-separated lists. Used twister_to_list.py script. Some artifacts
on string length are due to how ruamel dumps content.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Reworks the Zephyr macros and pools to be objects in their own right. Each
pool can be statically defined with a Z_ private macro. The objects can
then be statically initialized with an rtio instance.
This cleans up a lot of code that was otherwise doing little bits of
management around allocation/freeing and reduces the scope of those
functions to the data they need.
This should make it easy to share the pools of sqe, cqe, and mem blocks
among rtio instances in a future improvement.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
On non-SMP systems where multiple cores aren't in play, atomics aren't
really necessary and volatile can be used instead.
Additionally this marks the push function as ALWAYS_INLINE, as at times
it was not being inlined.
MPSC operation speed is crucial to the performance of rtio; these changes
provided a 30% improvement in the throughput test.
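The gist of the change, sketched with illustrative names rather than the
exact mpsc internals:

  /* On single-core builds plain volatile accesses to the queue head/tail
   * pointers are sufficient; on SMP the atomic pointer operations remain.
   */
  #ifdef CONFIG_SMP
  typedef atomic_ptr_t queue_ptr_t;
  #define queue_ptr_get(p)    atomic_ptr_get(&(p))
  #define queue_ptr_set(p, v) atomic_ptr_set(&(p), (v))
  #else
  typedef void * volatile queue_ptr_t;
  #define queue_ptr_get(p)    (p)
  #define queue_ptr_set(p, v) ((p) = (v))
  #endif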
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Rather than the rings, which weren't shared between userspace and kernel
space in Zephyr like they are in Linux with io_uring, use atomic mpsc
queues for the submission and completion queues.
Most importantly this removes a potential head-of-line blocker in the
submission queue, as an sqe would otherwise be held until its task is
completed.
As an additional bonus this avoids some locks and restrictions about
what can be submitted and where. It also removes the need for two
executors, as all chains/transactions are done concurrently.
Lastly this opens up the possibility of a common pool of sqes to
allocate from, potentially saving lots of memory.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
It was previously assumed that the 'sys_mem_blocks' struct would maintain
information about contiguous blocks allocated so the release API only
took the starting address. This led to an issue where allocating 2+
blocks would end up with a memory leak because any block not being the
first would never be released.
Add the buffer length as an argument so the correct number of blocks can
be released. Also, amend the tests to match and verify.
Signed-off-by: Yuval Peress <peress@google.com>
When building the rtio_api tests with arm-clang we get the following
compiler warning:
rtio_api/src/test_rtio_api.c:436:58: warning: format specifies type
'unsigned long' but the argument has type 'uintptr_t'
(aka 'unsigned int') [-Wformat]
TC_PRINT("userdata is %p, value %lu\n", cqe->userdata, idx);
~~~ ^~~
%u
Use PRIuPTR to fix the issue.
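The corrected call, using the PRIuPTR format macro from <inttypes.h>:

  TC_PRINT("userdata is %p, value %" PRIuPTR "\n", cqe->userdata, idx);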
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
- Introduce a new Kconfig to enable mempool in RTIO
- Introduce a new RTIO_DEFINE_WITH_MEMPOOL to allocate an RTIO context
with an associated memory pool.
- Add a new sqe read function rtio_sqe_read_with_pool() for memory pool
enabled RTIO contexts
- Allow IODevs to allocate only the memory they need via rtio_sqe_rx_buf()
- Allow the consumer to get the allocated buffer via
rtio_cqe_get_mempool_buffer()
- Consumers need to release the buffer via rtio_release_buffer() when
  processing is complete (see the sketch below).
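A sketch of the intended flow. The macro arguments and signatures are
approximations of what the commit describes, and the buffer-length
argument to rtio_release_buffer() was only added by a later change:

  /* Context with an associated memory pool: 8 SQEs/CQEs plus a pool of
   * 16 blocks of 32 bytes, 4-byte aligned (argument meanings assumed).
   */
  RTIO_DEFINE_WITH_MEMPOOL(r_pool, 8, 8, 16, 32, 4);

  static void read_into_pool(const struct rtio_iodev *iodev)
  {
      struct rtio_sqe *sqe = rtio_sqe_acquire(&r_pool);

      /* No buffer supplied; the iodev allocates what it needs via
       * rtio_sqe_rx_buf() from the context's pool.
       */
      rtio_sqe_read_with_pool(sqe, iodev, RTIO_PRIO_NORM, NULL);
      rtio_submit(&r_pool, 1);

      struct rtio_cqe *cqe = rtio_cqe_consume_block(&r_pool);
      uint8_t *buf;
      uint32_t buf_len;

      rtio_cqe_get_mempool_buffer(&r_pool, cqe, &buf, &buf_len);

      /* ... process buf[0..buf_len) ... */

      rtio_cqe_release(&r_pool, cqe);
      rtio_release_buffer(&r_pool, buf, buf_len);
  }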
Signed-off-by: Yuval Peress <peress@google.com>
Update the policy such that every completed sqe has a parallel cqe.
This has the primary purpose of making any reads in the sqe visible
to the consumer (since they might have different buffers).
Signed-off-by: Yuval Peress <peress@google.com>
The pending_sqe logic to track where in the ring queue the concurrent
executor had left off was slightly flawed. It didn't account for starting
all sqes in the queue and ending back up at the beginning.
Instead track the last SQE in the queue, from which the next one in the
queue will be the one to start next.
If we happen to sweep the last known SQE in the queue, reset it to NULL
so the next time prepare is called we start at the beginning of the queue
again.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Transactional submissions treat a sequence of sqes as a single atomic
submission given to a single iodev, expecting a single completion in
reply.
This is useful for scatter gather like APIs that exist in Zephyr already
for I2C and SPI.
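For example (a hedged sketch; the context r_i2c, iodev i2c_iodev, and
buffers are placeholders), an I2C-style write-then-read expressed as one
transaction that yields a single completion:

  static uint8_t reg_addr[1] = {0x0f};
  static uint8_t reg_val[1];

  static void write_then_read(void)
  {
      struct rtio_sqe *wr = rtio_sqe_acquire(&r_i2c);
      struct rtio_sqe *rd = rtio_sqe_acquire(&r_i2c);

      rtio_sqe_prep_write(wr, &i2c_iodev, RTIO_PRIO_NORM,
                          reg_addr, sizeof(reg_addr), NULL);
      wr->flags |= RTIO_SQE_TRANSACTION;  /* tie this SQE to the next one */

      rtio_sqe_prep_read(rd, &i2c_iodev, RTIO_PRIO_NORM,
                         reg_val, sizeof(reg_val), NULL);

      rtio_submit(&r_i2c, 1);             /* one CQE for the whole transaction */
  }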
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
The test suites have really grown to cover different units, and having
them all in one file was becoming a bit much to scroll around in.
Coincidentally found an accidental reuse of a define between the spsc
and mpsc tests.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
A race was possible, though very unlikely, between the atomic cas and
queue push/pop operations. The outcome of the race, had it shown up,
would have been a submission not worked on due to the timer never being
started. A small critical section fixes this and clarifies where the
single consumer part of the mpsc queue comes in despite there being
multiple contexts which may enter that section.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Noticed the tests were a bit verbose and saw a few stray printks. Drop
those as they aren't really needed and can potentially cause testing
issues, since printk is a potential synchronization point.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Renode platform fails the test despite it working well on qemu riscv.
Ignore this particular platform for now.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
By using an mpsc queue for each iodev, the iodev itself is shareable
across contexts. Since it's lock free, submits may occur even from an
ISR context.
Rather than a fixed size queue, and with it the possibility of running
out of pre-allocated spots, each iodev now holds a wait-free mpsc
queue head.
This changes the parameter of iodev submit to be a struct containing 4
pointers for the rtio context, the submission queue entry, and the mpsc
node for the iodev's submission queue.
This solves the problem involving busy iodevs working with real
devices. For example a busy SPI bus driver could enqueue, without locking,
a request to start once the current request is done.
The queue entries are expected to be owned and allocated by the
executor rather than the iodev. This helps consolidate potential tuning
knobs in one place: the RTIO context and its executor, which an
application directly uses.
As the test case shows iodevs can operate effectively lock free
with the mpsc queue and a single atomic denoting the current task.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Adds a lock free/wait free MPSC queue to the rtio subsystem.
While the SPSC ring queue is fast and cache friendly, it doesn't work
for all scenarios, particularly the case where multiple rtio contexts
are attempting to work with a single iodev. An MPSC queue works
perfectly in this scenario.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
When a libc partition exists, all user mode threads will need access to
libc variables. Add the libc partition in this test case to allow that.
This was detected by running the test on ARM32 with thread local storage
enabled as that uses z_arm_tls_ptr which is included in the libc partition.
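Roughly, the fix amounts to something along these lines before the test
threads drop to user mode (the exact mechanism used by the test may
differ):

  #include <zephyr/kernel.h>
  #include <zephyr/sys/libc-hooks.h>

  /* Assumed approach: grant user mode threads access to libc globals
   * (such as z_arm_tls_ptr) by adding the libc partition to the memory
   * domain the test threads run in.
   */
  static void grant_libc_access(void)
  {
  #ifdef Z_LIBC_PARTITION_EXISTS
      k_mem_domain_add_partition(&k_mem_domain_default, &z_libc_partition);
  #endif
  }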
Signed-off-by: Keith Packard <keithp@keithp.com>
Add support for userspace with RTIO by making rtio and rtio_iodev
k_objects, as well as adding three syscalls for copying in submissions,
copying out completions, and starting tasks with submit.
For the small devices Zephyr typically runs on one of the most important
attributes tends to be low memory usage. To maintain the low footprint of
RTIO and its current executor implementations the rings are not shared with
userspace. Sharing the rings it turns out would require copying submissions
before working with them to avoid TOCTOU issues.
The API could still support shared rings in the future so that a
kernel thread could directly poll, copy, verify, and start the submitted
work. This would require a third executor implementation that maintains its
own copy of submissions similar to how io_uring in Linux works.
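In rough terms (signatures approximated from the description; r_user is
an assumed context the thread has been granted access to), a user thread
drives RTIO like this:

  struct rtio_sqe sqe;
  struct rtio_cqe cqe;

  rtio_sqe_prep_nop(&sqe, NULL, NULL);

  rtio_sqe_copy_in(&r_user, &sqe, 1);             /* copy submissions in */
  rtio_submit(&r_user, 1);                        /* start and wait */
  rtio_cqe_copy_out(&r_user, &cqe, 1, K_FOREVER); /* copy completions out */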
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Add a bunch of missing "zephyr/" prefixes to #include statements in
various test and test framework files.
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>