If CONFIG_NET_TCP_ISN_RFC6528 is disabled, then the mbedtls include
files are not available, so check for this.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Calculate Initial Sequence Number (ISN) as described in RFC 6528
https://tools.ietf.org/html/rfc6528
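A minimal sketch of the idea, with hash32() standing in for the keyed
hash F of RFC 6528 (the real implementation uses mbedtls) and the secret
handling simplified:

#include <kernel.h>
#include <net/net_ip.h>

/* Placeholder for F(): a keyed cryptographic hash over the 4-tuple. */
extern uint32_t hash32(const void *data, size_t len,
                       const uint8_t *key, size_t key_len);

static uint32_t tcp_isn(const struct sockaddr_in6 *local,
                        const struct sockaddr_in6 *remote)
{
        static const uint8_t secret[16]; /* generated once at boot in practice */
        struct {
                struct in6_addr laddr;
                struct in6_addr raddr;
                uint16_t lport;
                uint16_t rport;
        } buf = {
                .laddr = local->sin6_addr,
                .raddr = remote->sin6_addr,
                .lport = local->sin6_port,
                .rport = remote->sin6_port,
        };
        /* M: a counter that advances roughly every 4 microseconds */
        uint32_t m = (uint32_t)(k_uptime_get() * 1000 / 4);

        return m + hash32(&buf, sizeof(buf), secret, sizeof(secret));
}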
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Currently there is no way to distinguish between a caller
explicitly asking for a semaphore with a limit that
happens to be `UINT_MAX` and a semaphore that just
has a limit "as large as possible".
Add `K_SEM_MAX_LIMIT`, currently defined as `UINT_MAX`, which is akin
to using `K_FOREVER` instead of just passing some very large wait time.
In addition, the `k_sem_*` APIs were type-confused, where
the internal data structure was `uint32_t`, but the APIs took
and returned `unsigned int`. This changes the underlying data
structure to also use `unsigned int`, as changing the APIs
would be a (potentially) breaking change.
These changes are backwards-compatible, but it is strongly suggested
to take a quick scan for `k_sem_init` and `K_SEM_DEFINE` calls with
`UINT_MAX` (or `UINT32_MAX`) and replace them with `K_SEM_MAX_LIMIT`
where appropriate.
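For example (the semaphore names and counts below are illustrative):

#include <kernel.h>

/* "As large as possible" limit: say so explicitly. */
K_SEM_DEFINE(my_sem, 0, K_SEM_MAX_LIMIT);

/* The same idea with runtime initialization: */
static struct k_sem other_sem;

static void init_sems(void)
{
        k_sem_init(&other_sem, 0, K_SEM_MAX_LIMIT);
}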
Signed-off-by: James Harris <james.harris@intel.com>
Now that the old API has been reimplemented with the new API, remove
the old implementation and its tests.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The tcp2 infrastructure is using the legacy delayed work API, and
relies heavily on the transient state indicated by an estimate of
delayed time remaining to determine whether a delayed work item is
still active. While the wrappers for this work in most cases, one use
is unsanctioned: directly accessing the fields of the k_delayed_work
structure to satisfy the calling parameters of the handler when
invoked directly.
The chosen solution for this specific need in the new API is to use a
schedule (rather than reschedule) operation, which leaves any previous
timer unchanged but allows immediate submission if the work is idle.
This changes behavior in that the resend is delegated to the work
queue, rather than done immediately. The former behavior can be
supported by further refactoring that turns the work handler into a
wrapper around a function that takes a connection reference, and
invoking that here, while the handler invokes it after reconstructing
the connection from the contained work item.
For now put in a hack that also uses the non-public fields of the
delayed work structure to implement the required behavior. The
complete fix if this solution is used requires replacing all use of
k_delayed_work in this module with k_work_delayable, leveraging the
new functionality of the API to avoid having to guess about the true
state of a work item based on its transient timer or flag states.
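As a rough sketch of that end state (names are illustrative and assume
the conversion to k_work_delayable has been done):

#include <kernel.h>

static struct k_work_delayable resend_work;

static void resend_handler(struct k_work *work)
{
        struct k_work_delayable *dwork = k_work_delayable_from_work(work);

        ARG_UNUSED(dwork);
        /* reconstruct the connection from the containing structure here */
}

static void resend_init(void)
{
        k_work_init_delayable(&resend_work, resend_handler);
}

static void resend_now_if_idle(void)
{
        /* schedule: arms the timer only when the work is idle, leaving an
         * already-armed timeout untouched
         */
        k_work_schedule(&resend_work, K_NO_WAIT);

        /* reschedule would instead replace any pending timeout:
         * k_work_reschedule(&resend_work, K_MSEC(200));
         */
}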
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
When pushing received data to the application, check that the app
was able to receive the data. If the application has already closed
the socket, then we must free the received net_pkt in order to
avoid a memory leak.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If we cannot allocate a net_pkt or net_buf, then check this condition
properly and release the other resources that were already allocated.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
We need to make sure that when listening for a connection establishment,
the connection gets cleared if we do not receive the final ACK.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The #if statement used IS_ENABLED to check if the value was defined.
IS_ENABLED only returns true if the value is 1, and false otherwise.
If the NET_TCP_MAX_SEND_WINDOW_SIZE value were e.g. 8, then the
check would fail.
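For illustration (the exact fix in the code may be expressed differently):

/* IS_ENABLED() only recognizes symbols that are defined to 1: */
#if IS_ENABLED(CONFIG_NET_TCP_MAX_SEND_WINDOW_SIZE)
/* not taken when the symbol is set to e.g. 8 */
#endif

/* For an integer Kconfig symbol, compare the value instead: */
#if CONFIG_NET_TCP_MAX_SEND_WINDOW_SIZE != 0
/* taken for any non-zero window size */
#endif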
Signed-off-by: Emil Gydesen <emil.gydesen@nordicsemi.no>
If we receive data that is out-of-order, queue sequential
TCP segments until we have received the earlier segment or a timeout
happens.
Note that we only queue data sequentially in the current version, i.e.,
there should be no holes in the queue. For example, if we receive
SEQs 5,4,3,6 and are waiting for SEQ 2, the data in segments 3,4,5,6 is
queued (in this order), and then given to the application when we receive
SEQ 2. But if we receive SEQs 5,4,3,7 then SEQ 7 is discarded
because the list would not be sequential as number 6 is missing.
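A rough sketch of the "no holes" rule, treating each SEQ as a one-byte
segment as in the example above (the helper and its callers are
hypothetical):

#include <stdbool.h>
#include <stdint.h>

/* Accept a segment only if it attaches directly to the front or the back
 * of the already queued, hole-free range [first_seq, last_end).
 * (The empty-queue case and sequence number wraparound are left out.)
 */
static bool tcp_can_queue(uint32_t seq, uint32_t len,
                          uint32_t first_seq, uint32_t last_end)
{
        return (seq + len == first_seq) || (seq == last_end);
}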
Fixes #30364
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
As the tcp header struct can be cast to unaligned memory, mark
it as packed and access fields using UNALIGNED_GET/PUT when
needed.
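For example (the header struct is abridged; field names follow the usual
BSD layout):

#include <stdint.h>
#include <toolchain.h>
#include <sys/byteorder.h>

struct tcphdr {
        uint16_t th_sport;
        uint16_t th_dport;
        uint32_t th_seq;
        uint32_t th_ack;
        /* ... */
} __packed;

static uint32_t get_seq(const struct tcphdr *th)
{
        /* th may point into the middle of a buffer, i.e. be unaligned */
        return sys_be32_to_cpu(UNALIGNED_GET(&th->th_seq));
}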
Fixes #31145
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When needing to lock non-connection related access, use a k_mutex
instead of locking interrupts. There is really no reason to prevent the
system from servicing interrupts.
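Roughly (the mutex name is illustrative):

#include <kernel.h>

K_MUTEX_DEFINE(tcp_lock);

static void conn_list_modify(void)
{
        k_mutex_lock(&tcp_lock, K_FOREVER);
        /* touch the global connection list etc. here */
        k_mutex_unlock(&tcp_lock);
}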
Fixes #30636
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Comment out TCP1 specific stuff when TCP2 is enabled. This means
shuffling the code around a bit so that common code is placed at
the end. Also introduce a net_tcp_foreach() function to TCP2 so that
it can be used from net-shell.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The data length is already calculated in tcp_in() so no need
to do it again in tcp_data_get(). Just pass the length to the
tcp_data_get() function.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Do not send the original pkt in 6lo based networks, as in those
the IPv6 header is mangled and we would not be able to do any
resends of the original pkt. So for 6lo networks, clone the
pkt and send the clone to the peer. The original pkt is kept in the
sent list in case we need to resend it to the peer.
Fixes #29771
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If the peer ACKs data when it closes the connection, update
our sequence number accordingly. The connection would eventually
be terminated but this will avoid extra resends by the peer.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When a connection is being closed, it is possible that the application
holds both the net_context lock and the TCP2 connection lock. If we then
receive a final TCP2 ACK and close the connection, the locking order
gets switched and TCP2 will first try to take its own lock and then the
net_context lock. This leads to a deadlock as the locking order
is now mixed.
The solution is to unref the TCP connection after releasing the
connection lock. The TCP connection unref function will take the
lock anyway, so there is no need to do double locking.
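A sketch of the resulting order, in terms of the module's own
tcp_conn_unref() and an assumed conn->lock mutex:

static void conn_closed(struct tcp *conn)
{
        k_mutex_unlock(&conn->lock);

        /* Unref only after dropping the connection lock; tcp_conn_unref()
         * takes the lock itself, so the ordering stays consistent with the
         * application side (net_context lock first, then connection lock).
         */
        tcp_conn_unref(conn);
}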
Fixes #29444
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If we are sending data directly, we already hold the TCP lock so
there is no need to do any locking. But when data is re-sent,
the work queue handler is doing the sending, so we need to lock
the TCP connection.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When receiving data that needs to be passed to the application,
queue it for a short time so that we do not hold the TCP connection
lock. This way, if the application wants to send data, there is no
possibility that the connection lock would prevent sending it.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Windows 10 sends the ECN-Echo and Congestion Window Reduced (CWR) flags
together with the SYN flag during connection establishment, but the code
did not ignore these flags and sent just SYN back (instead of SYN|ACK).
This caused the connection establishment at the application level to
fail, as the application was never notified about it.
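An illustration of the idea (flag values are the standard TCP header
bits; the actual tcp2 flag handling may differ):

#include <stdbool.h>
#include <stdint.h>

#define SYN 0x02
#define ECE 0x40 /* ECN-Echo */
#define CWR 0x80 /* Congestion Window Reduced */

static bool is_plain_syn(uint8_t th_flags)
{
        /* Strip the ECN bits before matching, so an ECN-capable peer
         * (e.g. Windows 10) still gets the normal SYN|ACK reply.
         */
        return (th_flags & ~(ECE | CWR)) == SYN;
}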
Fixes #29258
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
TCP2 was calling the accept callback before actually finalizing
the connection attempt.
Fixes #29164
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
No real need to assert when the send timer is cancelled. Just
check if there is a re-transmission going on and do nothing if
there is not.
Fixes #28758
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The local and accepted socket was not bound, which caused the
local address to be set to NULL. This then caused issues when
zsock_getsockname() was called by the application.
Fixes #28735
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
We run various TCP functions from the work queue. Make sure the
connection lock is taken before accessing the connection.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Instead of casting k_delayed_work directly to k_work, use the
k_work field name. This avoids warnings from Coverity and
allows the code to work even if the k_delayed_work fields are
re-ordered in the future.
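For example (the call site is hypothetical; send_timer is a struct
k_delayed_work embedded in the module's struct tcp):

static void kick_send_timer(struct tcp *conn)
{
        /* Instead of k_work_submit((struct k_work *)&conn->send_timer),
         * name the embedded member; this keeps working even if the
         * k_delayed_work fields are reordered.
         */
        k_work_submit(&conn->send_timer.work);
}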
Coverity-CID: 214346
Fixes #28659
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Bug description:
When in tcp_conn_unref(), in case one of the delayed work items is
already submitted to the sysworkq (after the delay period), e.g.
send_timer, the check of k_delayed_work_remaining_get() prevents
calling k_delayed_work_cancel().
This leads to corrupting the sysworkq when the struct tcp *conn is zeroed.
Note that the "next" pointer for the work queue is part of struct
k_work (in the _reserved field), which is, in this case, a member of
struct tcp.
Scenario leading to the bug:
(1) net_tcp_connect() is called from a work in sysworkq
(2) net_tcp_connect() submits conn->send_timer to sysworkq
(3) while net_tcp_connect() is waiting on connect_sem, the delay period
passes (z_timeout) and send_timer enters the sysworkq work slist
(4) also, some other code (app) submits more work items to the queue,
now pointed to by conn->send_timer in the sysworkq work list
(5) connection fails (no answer to SYN), causing a call to
tcp_conn_unref()
(6) tcp_conn_unref() is calling tcp_send_queue_flush()
(7) checking k_delayed_work_remaining_get(&conn->send_timer) returns 0
because the delay period has ended, but send_timer is still in the
sysworkq work slist (the sysworkq thread still hasn't handled the work)
(8) BUG!: no call to k_delayed_work_cancel(&conn->send_timer)
(9) back in tcp_conn_unref(), a call to memset(conn, 0, sizeof(*conn))
zeroes conn->send_timer
(10) conn->send_timer is still pointed to in the sysworkq work slist, but
is zeroed, clearing the pointer to the following work items submitted in
stage (4)
(11) EFFECT! The work items in stage (4) are never executed!
NOTES:
* k_delayed_work_cancel() handles both states:
(1) the delayed work is pending on its timeout, and
(2) the work is already in the queue.
So there is no need to check k_delayed_work_remaining_get()
* This is also relevant for conn->send_data_timer
Solution:
Remove the checks of k_delayed_work_remaining_get() and always call
k_delayed_work_cancel() for the work items in struct tcp, in unref,
before the memset.
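A sketch of the resulting flush, cancelling unconditionally since the
cancel copes with both the pending-timeout and already-queued states:

static void tcp_send_queue_flush(struct tcp *conn)
{
        k_delayed_work_cancel(&conn->send_timer);
        k_delayed_work_cancel(&conn->send_data_timer);

        /* ... then drop any packets still sitting in the send queue ... */
}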
Signed-off-by: David Komel <a8961713@gmail.com>
Since conn->send_data_total is of type size_t, we need to use %zu or
we'll get build errors in sanitycheck on 64-bit platforms.
Fixes #28605
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
We need to have a timer that closes the connection for good if
we do not get the FIN and ACK response from the peer.
If there is any pending data when the application calls close(),
send it before sending FIN.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If there is some error during connection creation, just bail
out in order to avoid a null pointer access.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If we try to send data but the sending window is full, then
try to kick off a resend of the pending data.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If there is no space in the sending window, then return -EAGAIN
so that the caller may try later.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
We should have a max value for the sending window so that the
application is not able to use all our net_bufs for queueing packets.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If there is an error, the net_context.c:context_sendto() will
free the net_pkt, so we must not do it here.
This commit fixes this error message:
<err> net_pkt: *** ERROR *** pkt 0x20421908 is freed already
(context_sendto():1672)
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Usually the out-of-memory situation will clear itself eventually,
so if that happens in TCP, then keep the connection running and
let the user decide what to do next.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Instead of a hardcoded value of 3, use the value from the Kconfig
file so that the user can tweak the TCP retry count.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Use the SYS_SLIST_FOR_EACH_CONTAINER_SAFE() macro when searching
the connection list so that we notice if new entries are added to
or removed from the list.
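For example (struct and field names are illustrative):

#include <sys/slist.h>

struct conn {
        sys_snode_t next;
        /* ... */
};

static sys_slist_t conns;

static void conn_drop_all(void)
{
        struct conn *c, *tmp;

        /* The _SAFE variant caches the next node first, so the current
         * entry may be removed without breaking the iteration.
         */
        SYS_SLIST_FOR_EACH_CONTAINER_SAFE(&conns, c, tmp, next) {
                sys_slist_find_and_remove(&conns, &c->next);
        }
}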
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Instead of forcing the slist node to be the first field in the tcp
struct, use a pointer to the node when accessing the slist. This way
we can change the ordering of fields in the tcp struct.
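For example (again with illustrative names):

#include <sys/slist.h>

struct conn {
        sys_snode_t next; /* no longer has to be the first field */
        /* ... */
};

static sys_slist_t conns;

static void conn_register(struct conn *c)
{
        /* Point at the node field instead of casting the container,
         * i.e. not sys_slist_append(&conns, (sys_snode_t *)c);
         */
        sys_slist_append(&conns, &c->next);
}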
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When the connection is terminated, make sure that any pending
data is fed to the application.
Fixes #28057
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
If we receive a TCP segment with FIN | ACK | PSH flags, then
update the ack values properly.
Fixes #27982
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
When the connection is closed and we send the ACK flag, use proper seq
values so that any data that is still in flight will get acked too.
Currently this assumes that the window is still open.
Fixes #27876
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>