This patch brings support for AF_PACKET and SOCK_RAW types of sockets.
In the net_conn_input() function a new flag - 'raw_pkt_continue' - has
been introduced to indicate whether connections other than AF_PACKET
are registered.
If there are no connections other than AF_PACKET ones, the packet is
handled solely in net_conn_input() (or, more specifically, in its
helper function conn_raw_socket()).
Otherwise, it is passed back to net_conn_input() during IPv4/6 processing.
Signed-off-by: Lukasz Majewski <lukma@denx.de>
A new function - conn_raw_socket() - has been introduced to handle
raw socket processing. Until now, its code was only executed when
IS_ENABLED(CONFIG_NET_SOCKETS_PACKET) was true.
After this change it can be reused to handle raw sockets also when
CONFIG_NET_{UDP|TCP} are enabled.
Signed-off-by: Lukasz Majewski <lukma@denx.de>
Add a check to stop a multicast address from being registered multiple times.
This can happen if the application is using net_if_ipv6_maddr_add()
directly.
Tested on the existing bluetooth/ipsp sample:
<wrn> net_if: Multicast address ff02::1 is already registered.
Signed-off-by: Fabio Baltieri <fabio.baltieri@gmail.com>
The #if statement used IS_ENABLED() to check whether the symbol was
defined. IS_ENABLED() only returns true if the value is 1, and false
otherwise. If the NET_TCP_MAX_SEND_WINDOW_SIZE value were, e.g., 8,
the check would fail.
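A minimal sketch of the difference (the macro name here is a
hypothetical stand-in for the Kconfig value, not the real option):

```c
#include <sys/util.h>      /* IS_ENABLED() */

#define MAX_SEND_WINDOW 8  /* hypothetical stand-in for the Kconfig value */

#if IS_ENABLED(MAX_SEND_WINDOW)
/* NOT taken: IS_ENABLED() yields 1 only when the macro is defined to 1 */
#endif

#if MAX_SEND_WINDOW != 0
/* taken: comparing the value directly works for any non-zero setting */
#endif
```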
Signed-off-by: Emil Gydesen <emil.gydesen@nordicsemi.no>
If we receive data that is out-of-order, queue sequential
TCP segments until we have received the earlier segment or a timeout
happens.
Note that in the current version we only queue data sequentially, i.e.
there should be no holes in the queue. For example, if we receive
SEQs 5,4,3,6 while waiting for SEQ 2, the data in segments 3,4,5,6 is
queued (in this order) and then given to the application when we
receive SEQ 2. But if we receive SEQs 5,4,3,7, then SEQ 7 is discarded
because the list would not be sequential as number 6 is missing.
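A hypothetical sketch of the "no holes" rule (names and signature are
illustrative, not the actual implementation):

```c
#include <stdbool.h>
#include <stdint.h>

/* A segment may be queued only if it is contiguous with the queued
 * data: it must end exactly where the queue starts, or start exactly
 * where the queue ends.
 */
static bool can_queue(uint32_t seg_seq, uint32_t seg_len,
		      uint32_t queue_lo, uint32_t queue_hi)
{
	return (seg_seq + seg_len == queue_lo) || (seg_seq == queue_hi);
}
```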
Fixes #30364
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
As the TCP header struct can be mapped onto unaligned memory, mark
it as packed and access its fields using UNALIGNED_GET/PUT when
needed.
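A sketch of the pattern (the struct here is abbreviated; the real
header has more fields):

```c
#include <stdint.h>
#include <toolchain.h>  /* __packed, UNALIGNED_GET/UNALIGNED_PUT */

struct tcphdr_sketch {
	uint16_t th_sport;
	uint16_t th_dport;
	uint32_t th_seq;
	uint32_t th_ack;
} __packed;

static uint32_t read_seq(const uint8_t *buf)
{
	/* buf may point to any byte offset inside a net_buf */
	const struct tcphdr_sketch *th = (const struct tcphdr_sketch *)buf;

	return UNALIGNED_GET(&th->th_seq);  /* byte-wise, alignment-safe */
}
```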
Fixes #31145
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Both RS and DAD timeouts are simplified because the delay is a
constant, and by construction the list of timeouts is ordered by
increasing time remaining.
Refactor to avoid repeating the expression that represents the time
until DAD state expires. Uniformly use unsigned operands in deadline
calculation.
Note a case where the racy idiom for retaining an existing timeout is
required in the current work API, but can be replaced with a robust
solution in the proposed new API (the reschedule API replaces any
existing pending update, but the schedule API will leave an existing
scheduled submission in place).
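Sketched with the names from that proposal (illustrative only, as the
API was not yet merged at the time of this commit):

```c
#include <kernel.h>
#include <stdbool.h>
#include <stdint.h>

static struct k_work_delayable dwork;

static void set_timeout(uint32_t ms, bool replace)
{
	if (replace) {
		/* replaces any pending timeout with the new one */
		k_work_reschedule(&dwork, K_MSEC(ms));
	} else {
		/* leaves an already scheduled submission in place */
		k_work_schedule(&dwork, K_MSEC(ms));
	}
}
```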
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The existing implementation is inconsistent in that checking for
expired routers when a timeout is processed detects end-of-life
correctly (when the remaining duration exceeds the signed maximum),
but the calculation of time remaining before expiration uses only
unsigned arithmetic. So when the set of routers changes, the newly
calculated timeout does not recognize routers that have already
expired, and those routers expire late. In the worst case, if the
only remaining router had expired, the timer may be set for almost
two months in the future.
Refactor to calculate remaining time in one place and as a signed
value. Change a function name to more clearly reflect what it does.
Avoid unnecessary race conditions in k_work API.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The net_timeout structure is documented to exist because of behavior
that is no longer true, i.e. that `k_delayed_work_submit()` supports
only delays up to INT32_MAX milliseconds. Nonetheless, the use of
32-bit timestamps within the work handlers means the restriction is
still present.
This infrastructure is currently used for two timers with long
durations:
* address lifetime for IPv6 addresses
* prefix lifetime for IPv6 prefixes
The handling of rollover was subtly different between these: address
wraps reset the start time while prefix wraps did not.
The calculation of remaining time in ipv6_nbr was incorrect when the
original requested time in seconds was a multiple of
NET_TIMEOUT_MAX_VALUE: the remainder value would be zero while the
wrap counter was positive, causing the calculation to indicate no time
remained.
The maximum value was set to allow a 100 ms latency between elapse of
the deadline and assessment of a given timer, but detection of
rollover assumed that the captured time in the work handler was
precisely the expected deadline, which is unlikely to be true. Use of
the shared system work queue also risks observed latency exceeding 100
ms. These calculations could produce delays to the next event that
exceeded the maximum delay, which introduced special cases.
Refactor so all operations that use this structure are encapsulated
into API that is documented and has a full-coverage unit test. Switch
to the standard mechanism of detecting completed deadlines by
calculating the signed difference between the deadline and the current
time, which eliminates some special cases.
Uniformly rely on scanning the set of timers to determine the next
deadline, rather than assuming that the most recent update is always
next.
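The idiom referred to, as a minimal sketch:

```c
#include <stdbool.h>
#include <stdint.h>

/* True when the deadline is at or before now; correct across uint32_t
 * rollover as long as the two times are within INT32_MAX ticks.
 */
static bool deadline_reached(uint32_t now, uint32_t deadline)
{
	return (int32_t)(deadline - now) <= 0;
}
```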
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
If there are multiple interfaces, a change to the timeout for one
cannot determine the correct delay until the next timeout event. That
can be determined only by checking for the next event over all
interfaces, which is exactly what's done by the timeout worker.
Refactor interface timeout configuration to just set the start time
and request time, and trigger the worker to calculate the next
scheduled event.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
When a renewal occurs the client enters RENEWING, sends a request,
then sets a short timeout (about 4 s) for the response. In the common
case the response will arrive immediately, which will trigger an
attempt to reset the timer with T1 which is generally large.
However the check for updating the timer performs the update only if
the new deadline is closer than the currently set one. Thus the timer
fires at the time the RENEWING request would have been retransmitted,
and only then updates to the correct deadline (T1) for the current
machine state.
Remove the extra timeout by unconditionally setting the timeout to the
new value.
This works when there is one interface; it could be wrong if there
were multiple interfaces, one of which had a closer deadline, but
multiple interfaces are mishandled anyway and will be fixed next.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
When there is only a single interface the timeout infrastructure can
correctly calculate time to next event, because timeouts only occur
when an event for that interface is due. This is not the case when
multiple interfaces are present: the timeout is scheduled for the next
event calculated over all interfaces.
When calculating the next event for an interface where the timeout is
not due the current code returns the original absolute delay
associated with its current state, without accounting for the time
that has passed since the start time.
For example, if interface A's T1 is 3600 s and is due at 3610, but at
3605 a timeout for interface B occurs, the contribution of A to the
delay to the next scheduled event would be 3600 rather than 5,
preventing the renewal from occurring at the scheduled time.
Fix this by replacing the boolean timed-out state with the number of
seconds remaining until the interface event will occur, and
propagating that through the system so the correct delay over all
interfaces can be maintained.
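A hypothetical sketch of the replacement (names are illustrative):
report seconds remaining rather than a boolean, so interface A above
contributes 5, not 3600:

```c
#include <stdint.h>

/* 0 means the event is due now; otherwise seconds until it is due */
static uint32_t iface_event_remaining(uint32_t deadline, uint32_t now)
{
	return (deadline > now) ? (deadline - now) : 0U;
}
```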
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
If send_request() fails it would return UINT32_MAX as the next
timeout. Callers pass the returned value to update_timeout_work
without validating it. This has worked only because
update_timeout_work will not set a timeout if an existing timeout
would fire earlier, and the way the state is currently structured it
is likely there will be an existing timeout. However, if a work-thread
retransmission from REQUESTING failed, the timer would not be
rescheduled, causing the state machine to stop.
A cleaner solution, which matches the behavior of send_discover(),
is to return the timeout for the next transmission even in the case
when the send fails. The observed behavior is the same as if the
network, rather than the sender, failed to transport the request.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
A variable named "timeout" is used to represent the current time in
comparisons against timeouts calculated from a start time and an
interval. Since this current time is not the timeout, change its name
to "now" to reduce maintainer confusion.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
If assertions are disabled the send operation would continue on to
transmit a message. Stop it from doing so.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The flag value UINT32_MAX is returned from manage_timers() when a send
operation did not succeed. This indicates that the timeout should not
be rescheduled, but because it will never replace the starting update
value of UINT32_MAX-1, the check never passes, and in cases where it
should, work will be submitted to run at UINT32_MAX-1 seconds.
Fix the upper bound.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
When a connection is lost, the client will first attempt to renew, and
then to rebind, and finally to select. Options like gateway may have
been provided by the original connection, but not the new connection,
resulting in an inconsistent configuration for the new network.
Remove the partial state clearing when entering INIT, and expand the
state cleared when entering SELECTING to be more comprehensive.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The start time is negative only if the interface came up in the
first millisecond since startup; even then changing the sign of the
start is not appropriate. Presumably this is a left-over from signed
32-bit timestamps.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
When needing to lock non-connection related access, use a k_mutex
instead of locking IRQs. There is really no reason to prevent the
system from servicing interrupts.
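The shape of the change, as a minimal sketch:

```c
#include <kernel.h>

static K_MUTEX_DEFINE(state_lock);  /* hypothetical name */

static void access_shared_state(void)
{
	k_mutex_lock(&state_lock, K_FOREVER);
	/* ... touch non-connection shared state; IRQs stay enabled ... */
	k_mutex_unlock(&state_lock);
}
```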
Fixes #30636
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
We did not check that the user has supplied a network interface index
in the "net stats iface <idx>" command.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Debug messages used the address of a member of the work_q structure as
an identifier; that field is not public API, so replace it with the
address of the work queue itself.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Set the RX/TX thread priorities so that if cooperative priorities
are used, then the lowest priority thread will have priority -1, which
is the lowest cooperative priority. The highest net thread priority
will depend on the number of traffic classes; with the max value of 8,
the highest priority will be -8.
If preemptive priorities are used, then the highest priority thread
will have priority 0, which is the highest preemptive priority.
In this case, the lowest thread priority will be 7 if there are
8 traffic classes.
The motivation for this change is that with cooperative priorities
we want to let other cooperative threads run before networking.
But if preemptive priorities are used, we want networking threads
to run reasonably often compared to other preemptive threads.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If networking pre-emptive thread priorities are enabled,
then use the proper macro to enable them.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Let the user decide whether the RX/TX threads run with either
cooperative or preemptive thread priority.
Default is cooperative threading.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
A cast made an assumption about how data is stored.
This commit makes the following test pass on qemu_leon3:
- net.udp
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
A cast made an assumption about how data is stored.
This commit makes the following test pass on qemu_leon3:
- net.ipv6.fragment
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
If the user has enabled TCP debugging, print detailed internal TCP2
information too when the user gives the "net conn" command. This is
useful to have when debugging.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Comment out TCP1 specific code when TCP2 is enabled. This means
shuffling the code around a bit so that common code is placed at
the end. Also introduce a net_tcp_foreach() function in TCP2 so that
it can be used from net-shell.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The data length is already calculated in tcp_in(), so there is no
need to do it again in tcp_data_get(). Just pass the length to the
tcp_data_get() function.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
This change adds net_context_port_in_use(), which is a simple
wrapper around net_context_check_port() and is used to check
whether a particular port is bound to a given IP address.
Fixes #29649
Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
Do not send the original pkt in 6lo based networks, as there
the IPv6 header is mangled and we would not be able to do any
resends of the original pkt. So for 6lo networks, clone the
pkt and send the clone to the peer. The original pkt is kept in the
sent list in case we need to resend it to the peer.
Fixes #29771
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Track the number of dropped TCP data segments and the number of
dropped TCP packets in network statistics. It is useful to see these
numbers separately.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If the peer ACKs data when it closes the connection, update
our sequence number accordingly. The connection would eventually
be terminated, but this avoids extra resends by the peer.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When a connection is being closed, it is possible that the application
holds locks on both the net_context and the TCP2 connection. If we then
receive the final TCP ACK and close the connection, the locking order
gets switched: TCP2 will first take its own lock and then the
net_context lock. This leads to a deadlock, as the locking order
is now mixed.
The solution is to unref the TCP connection after releasing the
connection lock. The TCP connection unref function takes the lock
itself anyway, so there is no need for double locking.
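Roughly, the fix looks like this (an illustrative sketch, not
verbatim source):

```c
#include <kernel.h>

struct tcp {
	struct k_mutex lock;
	/* ... */
};

void tcp_conn_unref(struct tcp *conn);  /* takes conn->lock internally */

static void on_final_ack(struct tcp *conn)
{
	/* ... handle the ACK while holding the connection lock ... */
	k_mutex_unlock(&conn->lock);

	/* Unref only after releasing the lock: unref re-takes it itself,
	 * so doing it while still locked would invert the ordering against
	 * the application's net_context -> connection path.
	 */
	tcp_conn_unref(conn);
}
```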
Fixes #29444
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When using CONFIG_NET_BUF_POOL_USAGE to monitor avail_count,
this variable must be protected against concurrent access.
Protect it by using an atomic variable.
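A minimal sketch with Zephyr atomics:

```c
#include <sys/atomic.h>

static atomic_t avail_count = ATOMIC_INIT(0);

static void on_buf_alloc(void)
{
	atomic_dec(&avail_count);  /* atomic read-modify-write, no lock */
}

static void on_buf_free(void)
{
	atomic_inc(&avail_count);
}
```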
Signed-off-by: Ehud Naim <ehudn@marvell.com>
The commit 93e5181f ("net: context: Add locking for concurrent
access") added net_context locking only to IPv4 sockets.
That is not enough; we need locking also for the other supported
socket address families, such as IPv6, SocketCAN and packet sockets.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When we are sending an ICMPv6 error message, we need to store the
link local addresses of the received packet somewhere in order
to know where to send the new error message.
The easiest approach is to store the ll addresses in the error message
itself, just before where the sent packet will start in memory.
We cannot use the original pkt to store the ll addresses,
as that packet might get overwritten if we receive lots of packets.
Fixes #29398
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If we are sending data directly, we already hold the TCP lock so
there is no need for additional locking. But when data is re-sent,
the work queue handler does the sending, so we need to lock
the TCP connection.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When receiving data that needs to be passed to the application,
queue it for a short time so that we do not hold the TCP connection
lock. This way, if the application wants to send data, there is no
possibility that the connection lock would prevent sending.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Release the context lock before passing data to the application
socket, as that might cause a deadlock if the application runs
before the RX thread and starts to send data while the RX
thread is never able to run (because of priorities etc.).
Fixes #29347
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
net_icmp4_input() may net_pkt_unref() a packet. The header mustn't be
accessed after this or the system may crash.
Signed-off-by: Oliver Hitz <oliver@net-track.ch>
If there is more than one RX or TX thread, make the name
of each of them unique so that it is easier to identify them
in the "kernel stacks" command.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Windows 10 sends the ECN-Echo and Congestion Window Reduced (CWR)
flags together with the SYN flag during connection establishment, but
the code did not ignore these flags and so sent just SYN back (instead
of SYN|ACK). This caused the connection establishment at the
application level to fail, as the application was never notified.
Fixes #29258
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Clip the timeout to 64 seconds; this prevents the timeout value from
growing to very high values (e.g. several years).
Signed-off-by: Armand Ciejak <armand@riedonetworks.com>
This avoids code duplication. The same logic was present
in dhcpv4_send_request() and dhcpv4_send_discover().
Signed-off-by: Armand Ciejak <armand@riedonetworks.com>
TCP2 was calling the accept callback before actually finalizing
the connection attempt.
Fixes #29164
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The L4 connected/disconnected events are usually used to detect
when the application is connected to the network. Unfortunately,
if the device also has a static address, the connected event
might be generated (for the static address) even if DHCPv4 is not
ready yet, and the application would not (yet) be able to connect
to the network. In order to allow the application to fine-tune
network connection creation, generate start, bound and stop events
for DHCPv4.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The "tc" field which was holding the traffic class thread
priority is not used nor needed so remove it from the struct
to save some space.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
No real need to assert when the send timer is cancelled. Just
check whether a re-transmission is going on and do nothing if
there is not.
Fixes #28758
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The local and accepted sockets were not bound, which caused the
local address to be set to NULL. This then caused issues when
zsock_getsockname() was called by the application.
Fixes #28735
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Since the info parameter is only read from and never written, the
function parameter should be const.
Signed-off-by: Christian Taedcke <christian.taedcke@lemonbeat.com>
When compiling with CONFIG_NET_NATIVE=n and CONFIG_NET_OFFLOAD=n,
the following error is printed:
```
In file included from zephyr/subsys/net/ip/net_if.c:23:0:
zephyr/subsys/net/ip/net_private.h: In function 'net_context_state':
zephyr/subsys/net/ip/net_private.h:58:27:
error: type of 'context' defaults to 'int' [-Werror=implicit-int]
static inline const char *net_context_state(context)
```
This adds `struct net_context *` as the type for context.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Remove the obsolete include of the shell_uart.h file.
It is sufficient to include the shell.h file.
Signed-off-by: Jakub Rzeszutko <jakub.rzeszutko@nordisemi.no>
We run various TCP functions from the work queue. Make sure the
connection lock is taken before accessing the connection.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Instead of casting k_delayed_work directly to k_work, use the
k_work field name. This avoids warnings from Coverity and
allows the code to work even if the k_delayed_work fields are
re-ordered in the future.
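The pattern, sketched against the legacy delayed-work API of the time:

```c
#include <kernel.h>

static struct k_delayed_work dwork;

static struct k_work *dwork_as_work(void)
{
	/* before: (struct k_work *)&dwork -- assumes k_work is first */
	/* after: name the embedded field, robust to reordering */
	return &dwork.work;
}
```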
Coverity-CID: 214346
Fixes #28659
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Add a dependency on UDP to the DHCPv4 Kconfig option, as UDP is
needed for DHCPv4 to work properly.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Bug description:
When in tcp_conn_unref(), in case one of the delayed works is already
submitted to the sysworkq (after its delay period), e.g. send_timer,
the check of k_delayed_work_remaining_get() prevents calling
k_delayed_work_cancel().
This leads to corrupting the sysworkq when zeroing the struct tcp conn.
Note that the "next" pointer for the work queue is part of the struct
work (in _reserved field). Which is, in this case, a member of struct
tcp.
Scenario leading to the bug:
(1) net_tcp_connect() is called from a work in sysworkq
(2) net_tcp_connect() submits conn->send_timer to sysworkq
(3) while net_tcp_connect() is waiting on connect_sem, delay period
passes (z_timeout) and send_timer enters sysworkq work slist
(4) also, some other code (the app) submits more works to the queue,
which are now pointed to by conn->send_timer in the sysworkq work list
(5) connection fails (no answer to SYN), causing a call to
tcp_conn_unref()
(6) tcp_conn_unref() is calling tcp_send_queue_flush()
(7) checking k_delayed_work_remaining_get(&conn->send_timer) returns 0
due to delay period end, but send_timer is still in sysworkq work
slist (sysworkq thread still hasn't handled the work)
(8) BUG!: no call to k_delayed_work_cancel(&conn->send_timer)
(9) back in tcp_conn_unref(), a call to memset(conn, 0, sizeof(*conn))
zeroes conn->send_timer
(10) conn->send_timer is pointed to in sysworkq work slist, but is
zeroed, clearing pointer to following works submitted in stage (4)
(11) EFFECT! the works in stage (4) are never executed!!
NOTES:
* k_delayed_work_cancel(), handles both states:
(1) delayed work pends on timeout and
(2) work already in queue.
So there is no need to check k_delayed_work_remaining_get()
* This is also relevant for conn->send_data_timer
Solution:
Remove the checks of k_delayed_work_remaining_get() and always call
k_delayed_work_cancel() for the works in struct tcp, in unref, before
the memset.
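The shape of the fix, as a sketch (struct abbreviated):

```c
#include <kernel.h>

struct tcp_sketch {                /* stand-in for struct tcp */
	struct k_delayed_work send_timer;
};

static void flush_send_timer(struct tcp_sketch *conn)
{
	/* Before (buggy): the cancel was skipped once the delay had
	 * elapsed, even though the work could still sit in the sysworkq
	 * slist:
	 *
	 *   if (k_delayed_work_remaining_get(&conn->send_timer)) {
	 *           k_delayed_work_cancel(&conn->send_timer);
	 *   }
	 *
	 * After: cancel handles both the pending-timeout and the
	 * already-queued states, so call it unconditionally.
	 */
	k_delayed_work_cancel(&conn->send_timer);
}
```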
Signed-off-by: David Komel <a8961713@gmail.com>
Since conn->send_data_total is of type size_t we need to use %zu or
we'll get build errors in sanitycheck on 64-bit platforms.
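For example:

```c
#include <sys/printk.h>
#include <stddef.h>

static void log_total(size_t send_data_total)
{
	/* %zu matches size_t on both 32- and 64-bit targets */
	printk("send_data_total=%zu\n", send_data_total);
}
```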
Fixes #28605
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
If the network driver for some reason did not set the data in
the network packet properly, then just drop the packet, as we cannot
do anything with a plain net_pkt.
Fixes #28131
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If there is no handler for an IPv4 broadcast packet, then ignore it
instead of trying to send an ARP message to resolve the sender's
address.
Fixes #21016
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
We need a timer that closes the connection for good if
we do not get the FIN and ACK response from the peer.
If there is any pending data when the application calls close(),
send it before sending FIN.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If there is some error during connection creation, just bail
out in order to avoid a null pointer access.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If we try to send data but the sending window is full, then
try to kick off a resend of the pending data.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If there is no space in the sending window, then return -EAGAIN
so that the caller may try later.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
We should have a max value for the sending window so that the
application is not able to use all our net_bufs for queueing packets.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The socket layer expects to receive -ENOBUFS if we do not have
any network buffers and thus cannot build a network message to send.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If there is an error, the net_context.c:context_sendto() will
free the net_pkt, so we must not do it here.
This commit fixes this error message:
<err> net_pkt: *** ERROR *** pkt 0x20421908 is freed already
(context_sendto():1672)
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Usually the out-of-memory situation will clear itself eventually,
so if that happens in TCP, then keep the connection running and
let the user decide what to do next.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Instead of a hardcoded value of 3, use the value from the Kconfig
file so that the user can tweak the TCP retry count.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Use the SYS_SLIST_FOR_EACH_CONTAINER_SAFE() macro when traversing
the connection list so that we notice if entries are added to or
removed from the list.
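Usage, sketched with an abbreviated struct (field names are
illustrative):

```c
#include <sys/slist.h>

struct tcp_sketch {
	sys_snode_t next;  /* list node; need not be the first field */
};

static sys_slist_t tcp_conns;

static void scan_conns(void)
{
	struct tcp_sketch *conn, *tmp;

	/* the _SAFE variant caches the next node, so the current entry
	 * may be removed inside the loop without breaking iteration
	 */
	SYS_SLIST_FOR_EACH_CONTAINER_SAFE(&tcp_conns, conn, tmp, next) {
		/* ... inspect or unlink conn ... */
	}
}
```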
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Instead of forcing the slist node to be first in the tcp struct,
use the pointer to the node when accessing the slist. This way we
can change the ordering of fields in the tcp struct.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Because the previous commit fixed the destination address check,
we need to handle msghdr (used by sendmsg()) in CAN sockets.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
No need to specifically check CAN sockets here. This also means
that SocketCAN will need to support the sendmsg() call too. This
will come in a following commit.
Fixes #28229
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When the connection is terminated, make sure that any pending
data is fed to the application.
Fixes #28057
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If we receive a TCP segment with FIN | ACK | PSH flags, then
update the ack values properly.
Fixes #27982
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When the connection is closed and we send the ACK flag, use proper
seq values so that any data that is still in flight gets acked too.
Currently this assumes that the window is still open.
Fixes #27876
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Now that the device_api attribute is unmodified at runtime, as well
as all the other attributes, it is possible to switch all device
driver instances to be constant.
A coccinelle rule is used for this:

@r_const_dev_1
  disable optional_qualifier
@
@@
-struct device *
+const struct device *

@r_const_dev_2
  disable optional_qualifier
@
@@
-struct device * const
+const struct device *
Fixes #27399
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
Allow the user to create a SOCK_DGRAM type AF_PACKET socket. This
allows the user to send raw IP packets without specifying
L2 (like Ethernet) headers.
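A usage sketch (ETH_P_ALL as the protocol is an assumption for
illustration):

```c
#include <net/socket.h>
#include <net/ethernet.h>  /* ETH_P_ALL */

static int open_dgram_packet_socket(void)
{
	/* SOCK_DGRAM on AF_PACKET: the stack builds the L2 header, so
	 * the application provides only the raw IP packet
	 */
	return zsock_socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL));
}
```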
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
-Wimplicit-fallthrough=2 requires a fallthrough comment or attribute
to tell gcc that the fall through happens intentionally.
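For example, with Zephyr's annotation:

```c
#include <toolchain.h>  /* __fallthrough */

static int classify(int state)
{
	int score = 0;

	switch (state) {
	case 0:
		score++;
		__fallthrough;  /* deliberate: also run case 1 */
	case 1:
		score++;
		break;
	default:
		break;
	}

	return score;
}
```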
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>