When possible, use 'size_t' for sizes and 'uintptr_t' for generic
addresses instead of relying on the fixed-width uint*_t types.
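A minimal sketch of the intent (the struct and field names below are
illustrative only, not taken from this patch):

    #include <stddef.h>
    #include <stdint.h>

    /* Before: fixed-width types bake in platform assumptions. */
    struct shm_region_old {
            uint32_t addr;  /* truncates addresses on 64-bit targets */
            uint32_t size;
    };

    /* After: size_t for sizes, uintptr_t for generic addresses. */
    struct shm_region {
            uintptr_t addr; /* wide enough to hold any data pointer  */
            size_t    size; /* the natural type for object sizes     */
    };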
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This patch is the first step towards making rpmsg_multi_instance
usable in a multi-core scenario.
The current driver uses a local driver variable (instance) to track
the number of allocated instances. This counter is used to assign
each instance the correct portion of the shared memory.
This is fundamentally wrong because it assumes that there is only one
shared memory region to split among all the allocated instances. When
the platform has more than one core this is obviously not the case,
since each pair of cores communicates using a different memory region.
To solve this issue we introduce a new struct rpmsg_mi_ctx_shm_cfg
that does two things: (1) it carries the information about the shared
memory and (2) it carries an internal variable used to track the
instances allocated in that region. The same struct should be used
every time a new instance is allocated in the same shared memory
region.
We also fix a race in the current code where multiple threads can
access the instance variable concurrently; this patch adds a
serializing mutex.
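A rough sketch of the idea (the field names and the init call below
are illustrative, not necessarily the exact layout introduced by this
patch):

    #include <zephyr.h>
    #include <stdint.h>
    #include <stddef.h>

    struct rpmsg_mi_ctx_shm_cfg {
            /* Shared memory region backing the instances. */
            uintptr_t addr;
            size_t size;

            /* Instances already carved out of this region, protected
             * by the mutex below to avoid the race described above.
             */
            unsigned int instance;
            struct k_mutex mtx;
    };

    /* Two instances sharing the same region are initialized with the
     * same shm cfg, e.g.:
     *
     *   rpmsg_mi_ctx_init(&ctx0, &cfg0, &shm_core1);
     *   rpmsg_mi_ctx_init(&ctx1, &cfg1, &shm_core1);
     */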
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
For the instance configuration the rpmsg_multi_instance code is
currently using a set of configuration info coming from two different
sources: the rpmsg_mi_ctx_cfg struct and Kconfig.
This is not only confusing but it also prevents configuring the
instances with information that does not come from Kconfig (for
example when we want to configure the instance using DT).
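One way to picture the consolidation (the fields below are purely
illustrative; the exact set of options moved out of Kconfig is defined
by the patch itself):

    #include <zephyr.h>
    #include <stddef.h>

    /* Before: part of the setup comes from Kconfig symbols and so
     * cannot differ between instances or be taken from DT.
     * After: the caller passes the whole configuration at init time.
     */
    struct rpmsg_mi_ctx_cfg {
            const char *name;
            /* Work queue parameters, previously Kconfig-only. */
            k_thread_stack_t *ipm_stack_area;
            size_t ipm_stack_size;
            int ipm_work_q_prio;
            /* IPM channel names, previously Kconfig-only. */
            const char *ipm_tx_name;
            const char *ipm_rx_name;
    };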
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
IPC Service allows plugging in different transport backends. It
specifies a generic API that is implemented by the backend.
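A minimal sketch of what such a pluggable backend API can look like
(the type and function names below are illustrative, not necessarily
the exact API added by this patch):

    #include <stddef.h>

    struct ipc_ept;      /* endpoint handle, owned by the backend    */
    struct ipc_ept_cfg;  /* endpoint configuration (name, callbacks) */

    /* The generic service forwards calls to the selected backend. */
    struct ipc_service_backend {
            const char *name;
            int (*register_endpoint)(struct ipc_ept **ept,
                                     const struct ipc_ept_cfg *cfg);
            int (*send)(struct ipc_ept *ept, const void *data,
                        size_t len);
    };

    /* A backend registers itself once; applications only ever use
     * the generic ipc_service_* front end.
     */
    int ipc_service_register_backend(const struct ipc_service_backend *backend);
    int ipc_service_register_endpoint(const struct ipc_ept_cfg *cfg,
                                      struct ipc_ept **ept);
    int ipc_service_send(struct ipc_ept *ept, const void *data, size_t len);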
Signed-off-by: Marcin Jeliński <marcin.jelinski@nordicsemi.no>
This patch implements a service that adds multi-instance
capabilities to RPMsg.
Each instance is allocated a separate piece of shared memory.
Multiple instances provide independent message processing.
Each instance has its own work_q.
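A condensed view of the per-instance resources described above (the
struct and field names are illustrative only):

    #include <zephyr.h>
    #include <stdint.h>
    #include <stddef.h>

    struct rpmsg_mi_ctx {
            const char *name;

            /* Each instance owns its own slice of shared memory... */
            uintptr_t shm_addr;
            size_t shm_size;

            /* ...and its own work queue, so instances process
             * incoming messages independently of one another.
             */
            struct k_work_q ipm_work_q;
            struct k_work ipm_work;
    };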
Signed-off-by: Marcin Jeliński <marcin.jelinski@nordicsemi.no>
This patch implements a service that adds multiendpoint
capabilities to RPMsg. Multiple endpoints are intended to be used
when multiple modules need services from a remote processor. Each
module may register one or more RPMsg endpoints.
The implementation separates the backend from the service, which
allows extending this module to support other topologies like
Linux <-> Zephyr.
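A short usage sketch of the multi-endpoint idea (all type, field and
function names are illustrative stand-ins re-declared here for
self-containment; in practice they would come from the IPC Service
header):

    #include <stddef.h>

    struct ipc_ept;

    struct ipc_ept_cfg {
            const char *name;
            void (*received)(const void *data, size_t len, void *priv);
    };

    int ipc_service_register_endpoint(const struct ipc_ept_cfg *cfg,
                                      struct ipc_ept **ept);
    int ipc_service_send(struct ipc_ept *ept, const void *data,
                         size_t len);

    /* Each module gets only the traffic of its own endpoint. */
    static void sensor_rx(const void *data, size_t len, void *priv) { }
    static void shell_rx(const void *data, size_t len, void *priv) { }

    void multi_endpoint_example(void)
    {
            struct ipc_ept *sensor_ept;
            struct ipc_ept *shell_ept;

            const struct ipc_ept_cfg sensor_cfg = {
                    .name = "sensor", .received = sensor_rx,
            };
            const struct ipc_ept_cfg shell_cfg = {
                    .name = "shell", .received = shell_rx,
            };

            /* Two modules register independent RPMsg endpoints
             * towards the same remote processor.
             */
            ipc_service_register_endpoint(&sensor_cfg, &sensor_ept);
            ipc_service_register_endpoint(&shell_cfg, &shell_ept);

            ipc_service_send(sensor_ept, "ping", 4);
    }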
Co-authored-by: Piotr Szkotak <piotr.szkotak@nordicsemi.no>
Signed-off-by: Hubert Miś <hubert.mis@nordicsemi.no>