Add a new implementation of the logging subsystem. New features
include: support for multiple backends, improved performance by
deferring log processing to a known context, timestamping, and log
filtering options (compile time, runtime, module level, instance
level). A console backend is added as an example backend.
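For illustration, a minimal usage sketch assuming the module
registration and message macros exposed by the new subsystem (the
exact registration signature and the filter level shown here are
assumptions):

    #include <logging/log.h>

    /* Register this module with the logger; the registration signature
     * and the compile-time level are assumptions for this sketch. */
    LOG_MODULE_REGISTER(foo_module, LOG_LEVEL_INF);

    void foo_init(int err)
    {
            /* Messages are captured here and processed later in the
             * configured logging context, not in the caller's context. */
            LOG_INF("initialized");

            if (err != 0) {
                    LOG_ERR("init failed: %d", err);
            }
    }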
Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
The scheduler priq implementation was taking advantage of a subtle
behavior of the way the tree presents the order of its arguments (the
node being inserted is always first). But it turns out the tree got
that wrong in one spot.
As this was subtle voodoo to begin with, it should have been
documented in the first place. Similarly, add a little code to the
test case to guarantee this behavior in the future.
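For context, a purely hypothetical comparator (not the scheduler's
actual code) showing why the argument order matters: if ties are
broken by returning false, FIFO ordering of equal keys relies on the
node being inserted always arriving as the first argument:

    struct item {
            struct rbnode node;
            int key;
    };

    /* Hypothetical: relies on the documented behavior that the node
     * being inserted is always passed as "a". On equal keys this
     * returns false, so the new node lands after existing equal nodes,
     * i.e. FIFO order. */
    static bool item_lessthan(struct rbnode *a, struct rbnode *b)
    {
            struct item *ia = CONTAINER_OF(a, struct item, node);
            struct item *ib = CONTAINER_OF(b, struct item, node);

            return ia->key < ib->key;
    }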
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Works mostly like the list enumeration macros. Implemented by fairly
clever alloca trickery and some subtle "next node" logic. More
convenient for many uses, can be early-exited, but has somewhat larger
code size than rb_walk().
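For illustration, a hedged sketch contrasting the two styles; the
rb_walk() signature and RB_FOR_EACH() name are assumed here, and
some_condition() is a hypothetical helper:

    /* Callback style: rb_walk() visits every node, no early exit. */
    static void count_cb(struct rbnode *node, void *cookie)
    {
            int *count = cookie;

            (*count)++;
    }

    void example(struct rbtree *tree)
    {
            struct rbnode *node;
            int count = 0;

            rb_walk(tree, count_cb, &count);

            /* Enumeration-macro style: reads like a loop, can be left early. */
            RB_FOR_EACH(tree, node) {
                    if (some_condition(node)) {
                            break;  /* early exit, not possible with rb_walk() */
                    }
            }
    }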
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This slist variant allows flags to be stored in a node
without taking up any extra space by relying on 4-byte
pointer alignment; flags are stored in the two LSBs.
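For illustration, the packing trick itself, with hypothetical helper
names (the real accessors added by this change may differ):

    #include <stdint.h>

    #define NODE_FLAGS_MASK 0x3UL

    /* Nodes are at least 4-byte aligned, so the two LSBs of a node
     * pointer are always zero and can carry the flags. */
    static inline void *node_pack(void *next, unsigned int flags)
    {
            return (void *)(((uintptr_t)next & ~NODE_FLAGS_MASK) |
                            (flags & NODE_FLAGS_MASK));
    }

    static inline void *node_next(void *packed)
    {
            return (void *)((uintptr_t)packed & ~NODE_FLAGS_MASK);
    }

    static inline unsigned int node_flags(void *packed)
    {
            return (unsigned int)((uintptr_t)packed & NODE_FLAGS_MASK);
    }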
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The new APIs added are used behind the scenes by list_gen.h
to abstract away details on how lists and nodes work.
Name them with z_ prefix as they are not intended for use
by applications.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Singly-linked lists work in a specific way, but this will allow
slist semantics to be defined for any data structure that can store
a "next" link in any way at all, possibly as something that is not
even a pointer value, such as a table index.
The immediate need for this patch is to allow the easy definition
of an slist variant which stores flags in the low-order bits of the
node pointer.
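For illustration, a purely hypothetical example of the kind of "next"
accessors such a generated implementation could be built on, here
storing the link as a table index rather than a pointer (all names
are made up):

    #include <stdint.h>

    #define IDX_NONE 0xffffU

    /* Hypothetical: nodes live in a fixed table and link by index. */
    struct idx_node {
            uint16_t next_idx;      /* IDX_NONE marks the end of the list */
    };

    extern struct idx_node node_table[];

    static inline struct idx_node *idx_next(struct idx_node *n)
    {
            return (n->next_idx == IDX_NONE) ? NULL : &node_table[n->next_idx];
    }

    static inline void idx_set_next(struct idx_node *n, struct idx_node *next)
    {
            n->next_idx = (next == NULL) ? IDX_NONE :
                          (uint16_t)(next - node_table);
    }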
The compiler does not need them, but the function prototypes have
been left in slist.h to make the header easier for the end user to
understand and to avoid confusing Doxygen.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Returns true if the specified node is in the tree. Allows the tree to
be used for "set" style semantics along with a lessthan_fn that simply
compares the nodes by their address.
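For illustration, a sketch of the "set" usage; the rb_insert() and
rb_contains() names and the lessthan_fn field are assumed to match
the tree API:

    #include <stdbool.h>
    #include <stdint.h>

    static bool node_lessthan(struct rbnode *a, struct rbnode *b)
    {
            /* Order purely by address: the tree behaves as a set of nodes. */
            return (uintptr_t)a < (uintptr_t)b;
    }

    static struct rbtree set = { .lessthan_fn = node_lessthan };
    static struct rbnode member;

    void example(void)
    {
            rb_insert(&set, &member);

            if (rb_contains(&set, &member)) {
                    /* the node is in the set */
            }
    }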
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add a balanced tree implementation for Zephyr, as we grow into bigger
regimes where simpler data structures aren't appropriate.
This implements an intrusive balanced tree that guarantees O(log2(N))
runtime for all operations and amortized O(1) behavior for creation
and destruction of whole trees. The algorithms and naming are
conventional per existing academic and didactic implementations, c.f.:
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
The implementation is size-optimized to prioritize runtime memory
usage. The data structure is intrusive, which is to say the struct
rbnode handle is intended to be placed in a separate struct, the same
way as other such structures (e.g. Zephyr's dlist), and requires no
data pointer to be stored in the node. The color bit is unioned with
a pointer (fairly common for such libraries). Most notably, there is
no "parent" pointer stored in the node; the upper structure of the
tree is generated dynamically via a stack as the tree is recursed.
So the overall memory overhead of a node is just two pointers,
identical to that of a doubly-linked list.
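For illustration, what "intrusive" means in practice: the handle is
embedded in the user's struct and the container is recovered with
CONTAINER_OF (the struct and comparator here are just examples):

    struct my_entry {
            struct rbnode node;     /* handle embedded in the user struct */
            int key;
    };

    static bool entry_lessthan(struct rbnode *a, struct rbnode *b)
    {
            /* No data pointer lives in the node; recover the container. */
            return CONTAINER_OF(a, struct my_entry, node)->key <
                   CONTAINER_OF(b, struct my_entry, node)->key;
    }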
Code size above dlist is about 2-2.5k on most architectures, which is
significant by Zephyr standards but probably still worthwhile in many
situations.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This patch lets a C++ application use more of Zephyr by adding guards
and changing some constructs to the C++11 equivalent.
Changes include:
- Adding guards
- Switching to static_assert
- Switching to a template for ARRAY_SIZE as g++ doesn't have the
builtin (see the sketch after this list).
- Re-ordering designated initialisers to match the struct field order
as G++ only supports simple designated initialisers.
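For the ARRAY_SIZE case, a minimal sketch of the template approach
(the exact form used in the header may differ):

    #if defined(__cplusplus)
    #include <stddef.h>

    /* Deduce the element count from the array type; as with the
     * builtin-based C version, passing a pointer fails to compile. */
    template <typename T, size_t N>
    constexpr size_t ARRAY_SIZE(T (&)[N]) { return N; }
    #endif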
Signed-off-by: Michael Hope <mlhx@google.com>
We would like to offer the capability to have memory pool heap data
structures that are usable from user mode threads. The current
k_mem_pool implementation uses IRQ locking and system-wide membership
lists that make it incompatible with user mode constraints.
However, much of the existing memory pool code can be abstracted to some
common functions that are used by both k_mem_pool and the new
sys_mem_pool implementations.
The sys_mem_pool implementation has the following differences:
* The alloc/free APIs work directly with pointers; no internal memory
block structures are exposed to the end user. A pointer to the source
pool is provided for allocation, but freeing memory just requires the
pointer and nothing else (see the sketch after this list).
* k_mem_pool uses IRQ locks and requires very fine-grained locking in
order to not affect system latency. sys_mem_pools just use a semaphore
to protect the pool data structures at the API level, since there are
no implications for system responsiveness with this kind of concurrency
control.
* sys_mem_pools do not support the notion of timeouts for requesting
memory.
* sys_mem_pools are specified at compile time with macros, just like
kernel memory pools. Alternative forms of specification at runtime
will be a later enhancement.
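For illustration, a hedged sketch of the alloc/free flow described
above; the header path is an assumption and the pool is presumed to
have been defined elsewhere with the compile-time macro:

    #include <stddef.h>
    #include <misc/mempool.h>       /* header path is an assumption */

    /* Assume "my_pool" was defined at compile time with the provided
     * definition macro (parameters not shown here). */
    extern struct sys_mem_pool my_pool;

    void *get_buf(size_t size)
    {
            /* Allocation takes a pointer to the source pool... */
            return sys_mem_pool_alloc(&my_pool, size);
    }

    void put_buf(void *buf)
    {
            /* ...but freeing only needs the pointer itself. */
            sys_mem_pool_free(buf);
    }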
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Avoid listing internal functions in the public API documentation.
After enabling those doxygen configs, we got lots of errors and bad
references, which were fixed.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This was previously just a #define in one header file, but we need
this expressed in Kconfig space in case some feature only works
properly with downward-growing stacks.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The mbedtls test is hitting a compiler bug where two subtests will
soft fail on qemu_xtensa when assertions are enabled. This is despite
the fact that:
+ The failure is entirely internal to the mbedtls suite.
+ The mbedtls code does not use zephyr asserts
+ The mbedtls code does not call into zephyr code that might assert.
+ The behavior persists even when an irq_lock() is held across the
entire test, ruling out any asserts in interrupt/exception context.
+ And EVEN WHEN the mbedtls library blobs are bytewise identical
between assert and non-assert cases.
The bug seems to be a layout thing where the mbedtls code behaves
differently based on code address and/or link-time optimizations
(xtensa has a few).
Unfortunately sanitycheck enables assertions by setting CFLAGS
directly and not via kconfig, so we can't fix this by turning the
feature off in an app right now. This patch adds a simple "override"
flag that can be set by apps like this that hit bugs.
Again, note that zephyr assertions are not used nor needed by this one
test.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When an __ASSERT() fails on an ARCH_POSIX build, instead of spinning
forever (probably until sanitycheck times out), the program now
terminates immediately and returns 1 to the shell.
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Be consistent with the style of always having braces even if a compound
statement isn't required. Avoids some warnings from static analysis
tools.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
This function wasn't working on systems that enabled the stack
sentinel, because the first 4 bytes of the stack buffer contain the
sentinel value for thread stacks.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Based on the feedback, use the conventional spelling "thread safe"
and also add the notices more consistently.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
These headers provide efficient, inline implementations of singly-
and doubly-linked lists, and are thus not thread safe. They are
intended to be used as internal kernel APIs (and are currently, for
example, not documented at https://www.zephyrproject.org/doc/).
However, to avoid issues when doing kernel programming (e.g. #4350),
it makes sense to explicitly, even verbosely, document these functions
as not thread safe.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
We currently support converting from cpu format to BE for
u16_t and u32_t. Let's add u64_t as well.
NOTE: This will be used in LWM2M subsys later.
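For illustration, assuming the new helper follows the naming of the
existing u16/u32 converters (sys_cpu_to_be64()); the put_be64() helper
and header path are just examples:

    #include <string.h>
    #include <zephyr/types.h>
    #include <misc/byteorder.h>     /* header path is an assumption */

    void put_be64(u8_t *buf, u64_t host_val)
    {
            u64_t be_val = sys_cpu_to_be64(host_val);

            memcpy(buf, &be_val, sizeof(be_val));   /* MSB lands first */
    }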
Signed-off-by: Michael Scott <michael.scott@linaro.org>
The existing __stack decorator is not flexible enough for upcoming
thread stack memory protection scenarios. Wrap the entire thing in
a declaration macro abstraction instead, which can be implemented
on a per-arch or per-SOC basis.
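For illustration, the declaration-macro style this enables, assuming
the K_THREAD_STACK_DEFINE()/K_THREAD_STACK_SIZEOF() names:

    /* Before: a raw array with the __stack decorator, e.g.
     *     static char __stack my_stack[512];
     * After: a macro abstraction that an arch or SoC can redefine. */
    K_THREAD_STACK_DEFINE(my_stack, 512);

    size_t my_stack_size = K_THREAD_STACK_SIZEOF(my_stack);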
Issue: ZEP-2185
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The computation of unused stack space is now split off from the function
which sends the result to printk().
The code now assumes that the struct k_thread is stored elsewhere.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Using MPU enabled HW it was evident that a NULL access
(with offset) was happening in the TCP stack due to the
following message:
***** MPU FAULT *****
Executing thread ID (thread): 0x20009b0c
Faulting instruction address: 0x8034496
Data Access Violation
Address: 0x34
Fatal fault in essential thread! Spinning...
Turns out we are dereferencing a potentially NULL pointer in the
SYS_SLIST_PEEK_NEXT_CONTAINER macro.
Let's avoid this by checking the container node for NULL.
Also fix SYS_DLIST_PEEK_NEXT_CONTAINER in dlist.h, which has the same
issue.
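For illustration, a simplified sketch of the shape of the fix (not
the literal macro text); the idea is simply to short-circuit when the
current container is NULL:

    /* Sketch only: NULL-check the current container (and its next node)
     * before applying CONTAINER_OF to the next node. */
    #define PEEK_NEXT_CONTAINER_SKETCH(__cn, __n)                            \
            (((__cn) != NULL &&                                              \
              sys_slist_peek_next(&(__cn)->__n) != NULL) ?                   \
             CONTAINER_OF(sys_slist_peek_next(&(__cn)->__n),                 \
                          __typeof__(*(__cn)), __n) :                        \
             NULL)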
Change-Id: I2e765b9af7bcaf8fb13f7c9b7e081f9e6d4928f2
Signed-off-by: Michael Scott <michael.scott@linaro.org>
The scheduler needs to do time slicing only if there are multiple
threads active with the same priority. This function checks whether a
list has more than one node. It will be used on the list of threads
with the same priority to detect that case.
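For illustration, the check boils down to comparing the list's first
and last nodes; a sketch assuming a doubly-linked list (the function
name and exact form are assumptions):

    /* Sketch: empty and single-node lists have head == tail, so the
     * list holds multiple nodes exactly when they differ. */
    static inline bool sys_dlist_has_multiple_nodes(sys_dlist_t *list)
    {
            return list->head != list->tail;
    }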
Jira: ZEP-339
Change-Id: I8c7daf77a6540c642ce58a3763b26cd1e06ddc30
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
This is needed by application code that wants to print formatted
strings, but only has a fmt and va_list, and lacks memory to spare for
"buf" and something like:
    vsnprintk(buf, sizeof(buf), fmt, ap);
    printk("%s", buf);
Change-Id: Ic9cc915ec7e5f8f9492c730667f39788ecae65f6
Signed-off-by: Marti Bolivar <marti.bolivar@linaro.org>
GCC supports __attribute__((format (printf ...))) even when the
variadic arguments are not present. In this case, the attribute
argument specifying the start of the variadic arguments should be
zero.
Use this in printk.h to add __printf_like where it's missing.
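For illustration, the zero-index form described above, shown with the
__printf_like(fmt_index, args_index) convention (the declarations here
are illustrative, not the exact header contents):

    #include <stdarg.h>

    /* Variadic arguments are present and start at argument 2. */
    extern void printk(const char *fmt, ...) __printf_like(1, 2);

    /* A va_list instead of variadic arguments: pass 0 so GCC still
     * checks the format string's syntax, with nothing to match it to. */
    extern void vprintk(const char *fmt, va_list ap) __printf_like(1, 0);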
Change-Id: I7868439d5791e391aeb07356af9819524e68c771
Signed-off-by: Marti Bolivar <marti.bolivar@linaro.org>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies. We also convert the PRI printf formatters in the arch
code over to normal formatters.
Jira: ZEP-2051
Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This is a start to move away from the C99 {u}int{8,16,32,64}_t types to
Zephyr defined u{8,16,32,64}_t and s{8,16,32,64}_t. This allows Zephyr
to define the sized types in a consistent manner across all the
architectures we support and not conflict with what various compilers
and libc might do with regards to the C99 types.
We introduce <zephyr/types.h> as part of this and have it include
<stdint.h> for now until we transition all the code away from the C99
types.
We go with u{8,16,32,64}_t and s{8,16,32,64}_t, as there are already
some existing variables named u8 and u16, and to be consistent with
Zephyr naming conventions.
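For illustration, a minimal sketch of what the new header amounts to
during the transition (the exact typedef targets are an assumption):

    /* zephyr/types.h (sketch) */
    #include <stdint.h>     /* kept until code moves off the C99 types */

    typedef uint8_t  u8_t;
    typedef uint16_t u16_t;
    typedef uint32_t u32_t;
    typedef uint64_t u64_t;

    typedef int8_t  s8_t;
    typedef int16_t s16_t;
    typedef int32_t s32_t;
    typedef int64_t s64_t;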
Jira: ZEP-2051
Change-Id: I451fed0623b029d65866622e478225dfab2c0ca8
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
We need to move to using the PRI* defines before making newlib the
default libc, as different arches define base types like {u}int32_t
differently. To deal with that in a consistent manner we need access
to the defines in most spots for print{f,k} or logging functions.
Change-Id: Ic1fbef75cbaee211803d9aaf506056e5e31e73f3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
sys_slist_merge_slist shall reinit the appended list, not the original.
Change-Id: Iacd5244d0243b7ebdb110991574e9e1d265ced14
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
This enables casting if necessary.
Change-Id: I69d537dd1082e1e5a05aa2cacdd503d3f6c1ab95
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
This adds SYS_SLIST_PEEK_TAIL_CONTAINER macro to access the container
directly.
Change-Id: I740138a47936ebda1e0090628f4933e921f6a43b
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
To be API-equivalent with doubly-linked lists.
Change-Id: I98b781f4c649e248abb04f660f686ad76d6b39de
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Like SYS_DLIST_FOR_EACH_NODE(), but __dn contains the node from which
to fetch the next node, or NULL to start at the head.
Note that iteration does not start from @a node, but from node->next.
This allows the following:
    sys_dnode_t *funcA(sys_dlist_t *list, sys_dnode_t *node)
    {
            SYS_DLIST_ITERATE_FROM_NODE(list, node) {
                    if (node == <some condition>) {
                            return node;
                    }
            }

            return NULL;
    }

    sys_dlist_t *list = &<some list>;
    sys_dnode_t *node = NULL;

    do {
            node = funcA(list, node);
            if (node == <some other condition>) {
                    goto found;
            }
    } while (node);

    <handle error>

    found:
            <do stuff with node>
Change-Id: I17a5787594a0ed1a4745bd2e1557dd54895105ca
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
While iterating over each node element it is much more common to want
to access the container struct, which up until now has been done
manually using the CONTAINER_OF macro, so this introduces CONTAINER
variants that allow iterating directly with the container pointer
rather than the list node.
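For illustration, the before/after shape of such a loop, shown with
the slist names (SYS_SLIST_FOR_EACH_NODE/SYS_SLIST_FOR_EACH_CONTAINER);
the item struct and use() helper are hypothetical:

    struct item {
            sys_snode_t node;
            int value;
    };

    void walk(sys_slist_t *list)
    {
            sys_snode_t *n;
            struct item *it;

            /* Before: iterate over nodes, recover the container by hand. */
            SYS_SLIST_FOR_EACH_NODE(list, n) {
                    it = CONTAINER_OF(n, struct item, node);
                    use(it->value);
            }

            /* After: iterate directly with the container pointer. */
            SYS_SLIST_FOR_EACH_CONTAINER(list, it, node) {
                    use(it->value);
            }
    }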
Change-Id: Ie1e0da948cb9517c3c5cd8e86b59b95d7d027bfa
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
While iterating over each node element it is much more common to want
to access the container struct, which up until now has been done
manually using the CONTAINER_OF macro, so this introduces CONTAINER
variants that allow iterating directly with the container pointer
rather than the list node.
Change-Id: Ia24d9b88d5e2c43ffd476f565faf5bb523a9927b
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.
Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file. Also cleaned up several cases that
already had an SPDX tag, where we either got a duplicate or missed
updating it.
Jira: ZEP-1457
Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This trick was pioneered in Linux as the config_enabled() macro. The
madness has the effect of taking a macro value that may be defined to
"1" (e.g. CONFIG_MYFEATURE), or may not be defined at all and turning
it into a literal expression that can be used at "runtime". That is,
it works similarly to the way "defined(CONFIG_MYFEATURE)" does, except that it
is an expansion that can exist in a standard expression and be seen by
the compiler and optimizer. Thus much #ifdef usage can be replaced
with cleaner expressions like:
    if (IS_ENABLED(CONFIG_MYFEATURE))
            myfeature_enable();
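For the curious, a sketch of the token-pasting trick behind it
(helper macro names here are illustrative, simplified from the Linux
config_enabled() lineage):

    /* The extra _IS_ENABLED1() level exists so CONFIG_MYFEATURE gets
     * expanded to its value (1) before token pasting happens. */
    #define IS_ENABLED(option)              _IS_ENABLED1(option)
    #define _IS_ENABLED1(option)            _IS_ENABLED2(_CFG_HIDDEN_##option)

    /* When the option is 1, this adds a dummy token plus a comma,
     * shifting the literal 1 below into the "val" slot; otherwise the
     * pasted token stays as junk and "val" is 0. */
    #define _CFG_HIDDEN_1                   _dummy,
    #define _IS_ENABLED2(one_or_two_args)   _IS_ENABLED3(one_or_two_args 1, 0)
    #define _IS_ENABLED3(ignored, val, ...) val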
Change-Id: I40657d2aa3f802429ac33675a1fe245a5da86615
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This moves the shell component into its own subsys and groups all
related files and options into a single place.
Additionally, one Kconfig option will now be required to enable the
shell:
    CONFIG_CONSOLE_SHELL=y
The header file was also moved to include/shell/shell.h and can now
be referenced with

    #include <shell/shell.h>

instead of

    #include <misc/shell.h>
Updated documentation as well.
Change-Id: Iffbba4acfa05408055e9fd28dffa213451351f94
Signed-off-by: Anas Nashif <anas.nashif@intel.com>