Move misc/mempool_base.h to sys/mempool_base.h and
create a shim for backward compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
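A hedged sketch of such a shim, assuming the warning gating described
above (the real header may differ in wording):

/* misc/mempool_base.h - compatibility shim (sketch) */
#ifndef CONFIG_COMPAT_INCLUDES
#warning "This header has moved, include <sys/mempool_base.h> instead."
#endif

#include <sys/mempool_base.h>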
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Currently, the free block bitmap is roughly 4 times larger than it
needs to be, wasting memory.
Let's assume maxsz = 128, minsz = 8 and n_max = 40.
Z_MPOOL_LVLS(128, 8) returns 3. The block size for level #0 is 128,
the block size for level #1 is 128/4 = 32, and the block size for
level #2 is 32/4 = 8. Hence levels 0, 1, and 2 for a total of 3 levels.
So far so good.
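As a runtime sketch of that level count (the real Z_MPOOL_LVLS() is a
compile-time macro; this function only mirrors the quartering
arithmetic above):

#include <stddef.h>

static int mpool_lvls(size_t maxsz, size_t minsz)
{
	int lvls = 1;	/* level 0 always exists, with block size maxsz */

	/* each further level quarters the block size, down to minsz */
	while (maxsz / 4 >= minsz) {
		maxsz /= 4;
		lvls++;
	}
	return lvls;
}

/* mpool_lvls(128, 8) == 3: block sizes 128, 32, 8 */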
Now let's look at Z_MPOOL_LBIT_WORDS(). We get:
Z_MPOOL_LBIT_WORDS_UNCLAMPED(40, 0) = ((40 << 0) + 31) / 32 = 2
Z_MPOOL_LBIT_WORDS_UNCLAMPED(40, 1) = ((40 << 2) + 31) / 32 = 5
Z_MPOOL_LBIT_WORDS_UNCLAMPED(40, 2) = ((40 << 4) + 31) / 32 = 20
None of those are < 2 so Z_MPOOL_LBIT_WORDS() takes the results from
Z_MPOOL_LBIT_WORDS_UNCLAMPED().
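Reconstructed from the arithmetic above (a sketch; the clamped form
assumes that needing fewer than 2 words means the bitmap is stored
inline rather than allocated):

/* level l has n_max << (2 * l) blocks; one 32-bit word holds 32
 * block bits, so round up to whole words */
#define Z_MPOOL_LBIT_WORDS_UNCLAMPED(n_max, l) \
	((((n_max) << (2 * (l))) + 31) / 32)

/* fewer than 2 words fit inline in the union'd pointer, so no
 * separate bitmap words are needed */
#define Z_MPOOL_LBIT_WORDS(n_max, l) \
	(Z_MPOOL_LBIT_WORDS_UNCLAMPED(n_max, l) < 2 ? 0 : \
	 Z_MPOOL_LBIT_WORDS_UNCLAMPED(n_max, l))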
Finally, let's look at _MPOOL_BITS_SIZE(). It sums all possible levels
with Z_MPOOL_LBIT_BYTES(), which is:
#define Z_MPOOL_LBIT_BYTES(maxsz, minsz, l, n_max) \
	(Z_MPOOL_LVLS((maxsz), (minsz)) >= (l) ? \
	 4 * Z_MPOOL_LBIT_WORDS((n_max), l) : 0)
Or given what we already have:
Z_MPOOL_LBIT_BYTES(128, 8, 0, 40) = (3 >= 0) ? 4 * 2 : 0 = 8
Z_MPOOL_LBIT_BYTES(128, 8, 1, 40) = (3 >= 1) ? 4 * 5 : 0 = 20
Z_MPOOL_LBIT_BYTES(128, 8, 2, 40) = (3 >= 2) ? 4 * 20 : 0 = 80
Z_MPOOL_LBIT_BYTES(128, 8, 3, 40) = (3 >= 3) ? 4 * ??
Wait... we're missing this one:
Z_MPOOL_LBIT_WORDS_UNCLAMPED(40, 3) = ((40 << 6) + 31) / 32 = 80
then:
Z_MPOOL_LBIT_BYTES(128, 8, 3, 40) = (3 >= 3) ? 4 * 80 : 0 = 320
Further levels yield (3 >= 4), (3 >= 5), etc., so they're all false and
produce 0.
So this means we're statically allocating 428 bytes (8 + 20 + 80 + 320)
for the bitmap, when clearly only the first 3 Z_MPOOL_LBIT_BYTES()
results for the 3 levels we actually have should be summed, i.e. only
108 bytes (8 + 20 + 80).
Here the code logic confuses level numbers with the number of levels,
hence the extra allocation, which happens to grow exponentially.
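A minimal sketch of the fix, assuming the '>=' comparison is the sole
culprit: level numbers run from 0 to Z_MPOOL_LVLS() - 1, so a level l
exists only when Z_MPOOL_LVLS() > l:

#define Z_MPOOL_LBIT_BYTES(maxsz, minsz, l, n_max) \
	(Z_MPOOL_LVLS((maxsz), (minsz)) > (l) ? \
	 4 * Z_MPOOL_LBIT_WORDS((n_max), l) : 0)

With '>' in place, the bogus level-3 term drops out and the example
above sums 8 + 20 + 80 = 108 bytes.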
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both global and static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed with
a 'z_' or 'Z_' prefix.
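For illustration, the declarations below are hypothetical, but show
the substitutions at work:

/* before: identifiers reserved by C99 */
void _Cstart(void);
unsigned int _Swap(unsigned int key);

/* after the rename */
void z_cstart(void);
unsigned int z_swap(unsigned int key);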
Function names starting with two or three leading underscores are not
automatically renamed, since their renamed forms would collide with
those of the corresponding single-underscore variants.
Various generator scripts have also been updated, as well as perf,
linker and usb files. These are:
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Any word starting with an underscore followed by an uppercase letter or
by a second underscore is a reserved word according to C99.
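A short example of the rule (C99 section 7.1.3):

void example(void)
{
	int _Reserved;	/* reserved: underscore + uppercase letter */
	int __reserved;	/* reserved: second underscore */
	int _ok;	/* single underscore + lowercase: reserved
			 * only at file scope */
}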
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
SYS_MEM_POOL_KERNEL and SYS_MEM_POOL_USER are used as k_mem_pool flags
and should each be defined as a bit mask.
One of them was defined as zero, failing in all places that check for
this bit: e.g. pool_irq_lock and pool_irq_unlock were not working
correctly because the check for this flag always returned false.
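A hedged sketch of the fix; BIT() is Zephyr's bit-mask helper, and
pool_lock() below is an illustrative stand-in for how pool_irq_lock()
uses the flag, not the actual function:

#include <kernel.h>
#include <misc/util.h>	/* BIT() */

/* fixed: each flag is a distinct nonzero bit; the old, broken
 * definition was 0, so 'flags & SYS_MEM_POOL_KERNEL' could never
 * be true */
#define SYS_MEM_POOL_KERNEL	BIT(0)
#define SYS_MEM_POOL_USER	BIT(1)

static unsigned int pool_lock(u32_t flags)
{
	/* with the old zero-valued flag this test was always false */
	if ((flags & SYS_MEM_POOL_KERNEL) != 0) {
		return irq_lock();	/* kernel pools lock IRQs */
	}
	return 0;
}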
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
When a mempool is created with a large number of maximum-size blocks,
the logic for initializing max_inline_level (i.e. when to union the
bitmask with the pointer and when to use the pointer directly) was
wrong. The default state was "zero", which implies that level 0
should be inlined, but that's wrong with >32 base blocks.
Additionally, the type was unsigned, making the "level zero is a
pointer" situation impossible to represent.
Fixes #6727
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
We would like to offer the capability to have memory pool heap data
structures that are usable from user mode threads. The current
k_mem_pool implementation uses IRQ locking and system-wide membership
lists that make it incompatible with user mode constraints.
However, much of the existing memory pool code can be abstracted to some
common functions that are used by both k_mem_pool and the new
sys_mem_pool implementations.
The sys_mem_pool implementation has the following differences:
* The alloc/free APIs work directly with pointers; no internal memory
block structures are exposed to the end user. A pointer to the source
pool is provided for allocation, but freeing memory requires just the
pointer and nothing else.
* k_mem_pool uses IRQ locks and requires very fine-grained locking in
order not to affect system latency. sys_mem_pools just use a semaphore
to protect the pool data structures at the API level, since there are
no implications for system responsiveness with this kind of concurrency
control.
* sys_mem_pools do not support the notion of timeouts for requesting
memory.
* sys_mem_pools are specified at compile time with macros, just like
kernel memory pools. Alternative forms of specification at runtime
will be a later enhancement.
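A usage sketch, reusing the sizes from the bitmap example earlier
(minsz = 8, maxsz = 128, n_max = 40); the macro signature and section
argument reflect my understanding of the new API and should be treated
as assumptions:

#include <misc/mempool.h>

/* name, unused, minsz, maxsz, n_max, align, linker section */
SYS_MEM_POOL_DEFINE(my_pool, NULL, 8, 128, 40, 4, .data);

void demo(void)
{
	void *mem;

	sys_mem_pool_init(&my_pool);

	/* allocation names the pool... */
	mem = sys_mem_pool_alloc(&my_pool, 64);
	if (mem != NULL) {
		/* ...but freeing needs only the pointer */
		sys_mem_pool_free(mem);
	}
}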
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>