1) They no longer use the giant "MAX_CPU" define; instead they
are allocated dynamically based on mp_ncpus (see the sketch below).
2) They are zeroed by netstat -z -s -p sctp.
3) They are properly handled by both sctp_init and finish
(the multi-net code was bzero'ing the wrong size in sctp_init;
the bzero is now moved to the right places).
And of course the free is put in at the very end.
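The allocation pattern this describes is roughly the one below (a plain C
illustration with made-up names, not the actual SCTP code; mp_ncpus is the
kernel's count of present CPUs):

    #include <stdlib.h>
    #include <string.h>

    struct sctp_cpu_stats_sketch {      /* stand-in for the per-CPU counters */
        unsigned long packets_in;
        unsigned long chunks_in;
    };

    /* One entry per present CPU instead of a static MAX_CPU-sized array. */
    static struct sctp_cpu_stats_sketch *
    stats_alloc(int ncpus)
    {
        /* calloc() zeroes the block, like the bzero now done at init time. */
        return (calloc(ncpus, sizeof(struct sctp_cpu_stats_sketch)));
    }

    /* What "netstat -z -s -p sctp" conceptually triggers. */
    static void
    stats_zero(struct sctp_cpu_stats_sketch *st, int ncpus)
    {
        memset(st, 0, ncpus * sizeof(*st));
    }

    /* Released at finish time, after everything else is torn down. */
    static void
    stats_free(struct sctp_cpu_stats_sketch *st)
    {
        free(st);
    }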
MFC after: 3 Months
worker can ask the main privileged process to connect on the worker's behalf,
and then we can migrate the descriptor to the worker using this socketpair.
This is not really needed now, but will be needed once we start to use
capsicum for sandboxing.
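Migrating the descriptor over the socketpair uses the standard SCM_RIGHTS
control-message mechanism; a minimal self-contained sketch of the send side
(illustrative only, not hastd's actual helper):

    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    /* Pass an open descriptor to the peer over a UNIX-domain socketpair. */
    static int
    send_fd(int sock, int fd)
    {
        struct msghdr msg;
        struct cmsghdr *cmsg;
        struct iovec iov;
        union {
            struct cmsghdr hdr;
            unsigned char buf[CMSG_SPACE(sizeof(int))];
        } cmsgbuf;
        char dummy = 0;

        memset(&msg, 0, sizeof(msg));
        memset(&cmsgbuf, 0, sizeof(cmsgbuf));
        iov.iov_base = &dummy;              /* must carry at least one byte */
        iov.iov_len = sizeof(dummy);
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsgbuf.buf;
        msg.msg_controllen = sizeof(cmsgbuf.buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;       /* "here is a file descriptor" */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return (sendmsg(sock, &msg, 0) == -1 ? -1 : 0);
    }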
MFC after: 1 week
threads. These serve as input threads, and packets are queued to them
based on the V-tag number. This is similar to
what a modern card can do with queues for TCP... but
alas, modern cards know nothing about SCTP.
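The steering decision is simple: each inbound packet goes to a thread chosen
from its verification tag, so all traffic for one association stays on one
thread. An illustrative helper (not the actual SCTP source):

    #include <stdint.h>

    /* Map an SCTP verification tag to one of nthreads input threads,
     * mimicking what per-queue RSS does for TCP on modern NICs. */
    static unsigned int
    sctp_queue_for_vtag(uint32_t vtag, unsigned int nthreads)
    {
        return (vtag % nthreads);
    }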
MFC after: 3 months (maybe)
make use of the aac_ioctl_event callback if aac_alloc_command fails. This
can result in the while loop in aac_release_command looping forever.
Further investigation into the issue mentioned by Scott Long [1] will be
necessary.
[1] http://lists.freebsd.org/pipermail/freebsd-current/2007-October/078740.html
should_yield(). Use this in various places. Encapsulate the common
case of check-and-yield into a new function maybe_yield().
Change several checks for a magic number of iterations to use
should_yield() instead.
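The effect is to replace open-coded "every N iterations, yield" checks with a
time-based decision. A userland illustration of the split (the in-kernel
versions compare tick counts and perform a real voluntary context switch; only
the two function names come from the commit, everything else is illustrative):

    #include <stdio.h>
    #include <time.h>

    static clock_t start;

    /* "Have I been running long enough that I ought to give up the CPU?" */
    static int
    should_yield(void)
    {
        return (clock() - start > CLOCKS_PER_SEC / 10);
    }

    /* The common check-and-yield case wrapped in one call. */
    static void
    maybe_yield(void)
    {
        if (should_yield()) {
            printf("yielding\n");   /* in the kernel: voluntary switch */
            start = clock();
        }
    }

    int
    main(void)
    {
        start = clock();
        for (long i = 0; i < 5000000L; i++)
            maybe_yield();  /* replaces "if ((i % MAGIC) == 0) yield" */
        return (0);
    }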
MFC after: 1 week
proto_connection_{send,recv} and change them to return a proto_conn
structure. We don't operate directly on descriptors, but on
proto_conns.
- Add a wrap method to wrap a descriptor with a proto_conn.
- Remove the methods to send and receive descriptors and implement this
functionality as an additional argument to the send and receive methods
(illustrative prototypes below).
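The resulting interface has roughly the following shape (hypothetical
prototypes for illustration only; the real declarations live in
sbin/hastd/proto.h and may differ):

    #include <stdbool.h>
    #include <stddef.h>

    struct proto_conn;      /* opaque; callers never see raw descriptors */

    /* Connection-passing methods now hand back proto_conn objects. */
    int proto_connection_send(const struct proto_conn *conn,
        struct proto_conn *mconn);
    int proto_connection_recv(const struct proto_conn *conn,
        struct proto_conn **newconnp);

    /* An existing descriptor can still be wrapped into a proto_conn. */
    int proto_wrap(const char *protoname, bool client, int fd,
        struct proto_conn **connp);

    /* Descriptor passing is an extra argument to ordinary send/recv
     * (pass -1 / NULL when no descriptor travels with the data). */
    int proto_send(const struct proto_conn *conn, const void *data,
        size_t size, int fd);
    int proto_recv(const struct proto_conn *conn, void *data,
        size_t size, int *fdp);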
MFC after: 1 week
If the timeout argument to proto_connect() is -1, then the caller needs to use
this new function to wait for the connection.
This change is in preparation for capsicum, where a sandboxed worker wants
to ask the main process to connect on the worker's behalf and pass the
descriptor to the worker. Because we don't want the main process to wait
for the connection, it will start an asynchronous connection and pass the
descriptor to the worker, which will be responsible for waiting for the
connection to finish.
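Under the hood this is the ordinary non-blocking connect(2) dance: start the
connection in one place, wait for writability somewhere else. A self-contained
illustration of the pattern (plain sockets, not hastd's proto layer;
connect_start/connect_wait are made-up names):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <err.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <string.h>

    /* Main process: kick off the connection but do not wait for it. */
    static int
    connect_start(const char *ip, unsigned short port)
    {
        struct sockaddr_in sin;
        int fd;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1)
            err(1, "socket");
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        inet_pton(AF_INET, ip, &sin.sin_addr);
        (void)connect(fd, (struct sockaddr *)&sin, sizeof(sin));
        return (fd);    /* usually still in progress (EINPROGRESS) */
    }

    /* Worker (after receiving the descriptor): wait for completion.
     * A real implementation would also check SO_ERROR afterwards. */
    static int
    connect_wait(int fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };

        return (poll(&pfd, 1, timeout_ms) == 1 ? 0 : -1);
    }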
MFC after: 1 week
never committed from the p4 dtrace branch.
[The correct include path is referenced from every opensolaris compat
consumer's module Makefile, so it doesn't serve any purpose anyway.]
Reported by: arundel on freebsd-hackers@ via clang
Approved by: kib (mentor)
MFC after: 1 week
2) Add separate max-bursts for retransmit and HB. These
default to sysctl-tunable values but are not settable via the
socket API. This makes sure we don't blast out HBs or
fast retransmits.
3) Determine, on the first data transmission on a net, whether
it is local-LAN (by being under or over an RTT threshold). This
can later be used to consider different algorithms
based on local-LAN vs big-I (experimental).
4) The cwnd should NOT be allowed to grow when an ECN-Echo
is seen (TCP has this same bug). We fix this in SCTP
so that seeing an ECNe prevents an advance of cwnd (see the
sketch after this list).
5) CWRs should not be sent multiple times to the
same network; instead, just update the TSN being
transmitted if needed.
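For item 4, the behaviour amounts to remembering the highest outstanding TSN
when the ECN-Echo arrives and refusing cwnd growth until it has been acked.
An illustrative sketch (made-up struct and function names, TSN wraparound
ignored for brevity, not the real SCTP code):

    #include <stdint.h>

    struct net_sketch {
        uint32_t cwnd;
        uint32_t last_cwr_tsn;  /* highest TSN outstanding at last ECNe */
        int      ecne_pending;  /* an ECN-Echo episode is in progress   */
    };

    /* On an ECN-Echo: react once and block further cwnd growth. */
    static void
    on_ecn_echo(struct net_sketch *net, uint32_t highest_outstanding_tsn)
    {
        if (!net->ecne_pending) {
            net->cwnd /= 2;
            net->ecne_pending = 1;
            net->last_cwr_tsn = highest_outstanding_tsn;
        }
        /* Item 5: a CWR for this episode is sent once; only the TSN it
         * carries is updated if needed, rather than sending new CWRs. */
    }

    /* On a SACK that would normally advance cwnd. */
    static void
    on_sack(struct net_sketch *net, uint32_t cum_acked_tsn, uint32_t increase)
    {
        if (net->ecne_pending) {
            if (cum_acked_tsn < net->last_cwr_tsn)
                return;             /* episode still open: no growth */
            net->ecne_pending = 0;  /* episode over */
        }
        net->cwnd += increase;
    }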
MFC after: 1 Month
so that future allocations start with the most recently allocated block
rather than at the beginning of the filesystem.
- Fix ext2_alloccg() to properly scan for 8-block chunks that are not
aligned on 8-bit boundaries (an illustrative scan is sketched after this
list). Previously this was causing new blocks to be allocated in a highly
fragmented fashion (block 0 of a file at lbn N, block 1 at lbn N + 8,
block 2 at lbn N + 16, etc.).
- Cosmetic tweaks to the currently-disabled fancy realloc sysctls.
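The scan in question is just a walk over the block bitmap looking for a run of
8 free bits at any alignment. A userland illustration (the real code operates
on the cylinder-group bitmap inside ext2_alloccg(); here a set bit means "in
use"):

    #include <stdint.h>

    /* Return the starting bit of the first run of 8 clear bits, at any
     * alignment (runs may straddle byte boundaries), or -1 if none. */
    static int
    find_free_run8(const uint8_t *bmap, int nbits)
    {
        int run = 0;

        for (int bit = 0; bit < nbits; bit++) {
            if (bmap[bit / 8] & (1 << (bit % 8)))
                run = 0;            /* allocated block breaks the run */
            else if (++run == 8)
                return (bit - 7);
        }
        return (-1);
    }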
PR: kern/153584
Discussed with: bde
Tested by: Pedro F. Giffuni giffunip at yahoo, Zheng Liu (lz)
The clock_t type in OpenSolaris is long (int64_t on amd64).
On FreeBSD, clock_t is int32_t. The clock_t type is used in several places
in the ZFS code to store the system uptime in milliseconds ("seconds * hz").
With hz=1000 we have a 32-bit integer overflow after 24 days, 20 hours,
31 minutes and 23.648 seconds. This has a user-reported negative impact
on l2arc_feed_thread() and may cause unexpected results from other functions
using clock_t.
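The 24-day figure is just 2^31 milliseconds: 2147483648 ms = 2147483.648 s,
i.e. 24 days, 20 hours, 31 minutes, 23.648 seconds. A small demonstration of
the truncation (illustrative only, not ZFS code):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t hz = 1000;
        int64_t uptime_sec = 2147484;   /* just past ~24.85 days of uptime */
        int64_t ticks64 = uptime_sec * hz;
        int32_t ticks32 = (int32_t)ticks64; /* what a 32-bit clock_t holds */

        /* The 32-bit value has already wrapped negative. */
        printf("64-bit: %lld  32-bit: %d\n", (long long)ticks64, ticks32);
        return (0);
    }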
Reported by: Artem Belevich <fbsdlist@src.cx> on freebsd-fs@
MFC after: 1 week
collect phases. The unp_discard() function executes
unp_externalize_fp(), which might make the socket eligible for gc-ing,
and then, later, the taskqueue will close the socket. Since unp_gc()
dropped the list lock to do the malloc, the close might happen after the
mark step but before the collection step, causing the collection to not
find the socket and to miss one array element.
I believe that the race was there before r216158, but the stated
revision made the window much wider by postponing the close to
taskqueue sometimes.
Only process as many array elements as there are sockets found during the
second phase of gc [1]. Take the linkage lock and recheck the eligibility
of the socket for gc, and call fhold() under the linkage lock.
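In outline, the fix follows the usual re-validate-under-lock pattern; the
sketch below uses placeholder types and accessors (only fhold() is the real
kernel function), not the actual uipc_usrreq.c code:

    struct unpcb;
    struct file;
    extern struct file *unp_file(struct unpcb *);    /* placeholder accessor */
    extern int  unp_still_eligible(struct unpcb *);  /* recheck predicate    */
    extern void fhold(struct file *);                /* real kernel function */
    extern void linkage_lock(void), linkage_unlock(void);
    extern struct unpcb *unp_first(void), *unp_next(struct unpcb *);

    static int
    collect_unreferenced(struct file **unref, int maxslots)
    {
        struct unpcb *unp;
        int n = 0;

        linkage_lock();
        for (unp = unp_first(); unp != NULL && n < maxslots;
            unp = unp_next(unp)) {
            /* Recheck under the lock: a deferred (taskqueue) close may
             * already have run since the mark phase. */
            if (!unp_still_eligible(unp))
                continue;
            fhold(unp_file(unp));       /* take the reference while locked */
            unref[n++] = unp_file(unp);
        }
        linkage_unlock();
        return (n);     /* the caller processes only unref[0 .. n-1] */
    }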
Reported and tested by: jmallett
Submitted by: jmallett [1]
Reviewed by: rwatson, jeff (possibly)
MFC after: 1 week
top 8 bits of the 32-bit signal bit-field space for internal use. These private
signals should not be leaked outside of a module.
Given that many algorithm modules use the NewReno hook functions to simplify
their implementation, the obvious place such a leak would show up is in the
NewReno cong_signal hook function.
- Show the full number of significant bits in the signal type definitions in
<netinet/cc.h>.
- Add a bitmask to simplify figuring out if a given signal is in the private or
public bit range.
- Add a sanity check in newreno_cong_signal() to ensure private signals are not
being leaked into the hook function (sketched below).
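In practice this boils down to a mask over the top byte and an early check in
the hook. A sketch, assuming the mask is spelled roughly as below (the
authoritative definition is the one in <netinet/cc.h>):

    #include <stdint.h>

    /* Top 8 bits of the 32-bit signal space: private, module-internal. */
    #define CC_SIGPRIVMASK  0xff000000u

    static int
    signal_is_private(uint32_t type)
    {
        return ((type & CC_SIGPRIVMASK) != 0);
    }

    /* Entry check of the kind described for newreno_cong_signal(): a
     * private signal arriving here means some module leaked it. */
    static void
    newreno_cong_signal_sketch(uint32_t type)
    {
        if (signal_is_private(type))
            return;     /* in the kernel this would assert/ignore */
        /* ... handle the public signals (ECN, RTO, duplicate ACKs) ... */
    }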
Sponsored by: FreeBSD Foundation
Discussed with: David Hayes <dahayes at swin edu au>
MFC after: 1 week
X-MFC with: r215166
The AR5416 and later TX descriptors have new fields supporting
11n bits (e.g. 20/40 MHz mode, short/long GI) and enabling/disabling
RTS/CTS protection per rate.
These functions will be responsible for initialising the TX descriptors
for the AR5416 and later chips for both HT and legacy frames.
Beacon frames will continue to use the non-11n TX descriptor setup for now;
Linux ath9k does much the same.
Note that these functions aren't yet used anywhere; a few more framework
changes are needed before all of the right rate information is available
for TX.
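Conceptually, the per-rate information the new setup functions carry looks
something like the struct below (purely illustrative field names, not the
HAL's actual rate-series definitions):

    #include <stdint.h>

    /* One entry per attempt in the multi-rate retry series. */
    struct rate_series_sketch {
        uint8_t  rate_code;     /* hardware rate code */
        uint8_t  tries;         /* attempts at this rate */
        unsigned use_40mhz:1;   /* 20 vs 40 MHz (11n field) */
        unsigned short_gi:1;    /* short vs long guard interval (11n field) */
        unsigned rtscts:1;      /* RTS/CTS protection for this rate */
    };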
algorithm described in the paper "Improved coexistence and loss tolerance for
delay based TCP congestion control" by Hayes and Armitage. It is implemented as
a kernel module compatible with the recently committed modular congestion
control framework.
CHD enhances the approach taken by the Hamilton-Delay (HD) algorithm to provide
tolerance to non-congestion related packet loss and improvements to coexistence
with loss-based congestion control algorithms. A key idea in improving
coexistence with loss-based congestion control algorithms is the use of a shadow
window, which attempts to track how NewReno's congestion window (cwnd) would
evolve. At the next packet loss congestion event, CHD uses the shadow window to
correct cwnd in a way that reduces the amount of unfairness CHD experiences when
competing with loss-based algorithms.
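The shadow-window idea can be shown in a few lines of illustrative
(non-kernel, non-authoritative) C: keep a second window that grows as
NewReno's cwnd would, and on a loss event reduce from the larger of the two:

    #include <stdint.h>

    struct chd_sketch {
        uint32_t cwnd;      /* CHD's actual congestion window (bytes)   */
        uint32_t shadow;    /* estimate of what NewReno's cwnd would be */
    };

    /* Per-ACK growth of the shadow window, NewReno congestion-avoidance
     * style (assumes shadow was seeded to the initial cwnd, so nonzero). */
    static void
    chd_ack(struct chd_sketch *s, uint32_t mss)
    {
        s->shadow += (uint32_t)(((uint64_t)mss * mss) / s->shadow);
        /* CHD's delay-based logic adjusts s->cwnd separately. */
    }

    /* Packet-loss congestion event: correcting cwnd from the shadow
     * window avoids doubly penalising earlier delay-based backoffs. */
    static void
    chd_loss(struct chd_sketch *s)
    {
        uint32_t base = (s->shadow > s->cwnd) ? s->shadow : s->cwnd;

        s->cwnd = base / 2;   /* NewReno-style halving from the larger base */
        s->shadow = s->cwnd;
    }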
In collaboration with: David Hayes <dahayes at swin edu au> and
Grenville Armitage <garmitage at swin edu au>
Sponsored by: FreeBSD Foundation
Reviewed by: bz and others along the way
MFC after: 3 months
function, which will later be used by the TX path to determine
whether or not to use the extended features.
* Break out the descriptor chaining logic into a separate function;
again so it can be switched out later on for the 11n version when
needed.
* Refactor out the encryption-swizzling code that's common to the
raw and normal TX paths.
algorithm based on the paper "A strategy for fair coexistence of loss and
delay-based congestion control algorithms" by Budzisz, Stanojevic, Shorten and
Baker. It is implemented as a kernel module compatible with the recently
committed modular congestion control framework.
HD uses a probabilistic approach to reacting to delay-based congestion. The
probability of reducing cwnd is zero when the queuing delay is very small,
increasing to a maximum at a set threshold, then back down to zero again when
the queuing delay is high. Normal operation keeps the queuing delay below the
set threshold. However, since loss-based congestion control algorithms push the
queuing delay high when probing for bandwidth, having the probability of
reducing cwnd drop back to zero for high delays allows HD to compete with
loss-based algorithms.
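The probability curve described above is a piecewise-linear "tent" over
queuing delay; an illustrative (not the module's actual arithmetic) version,
where qmin/qth/qmax and pmax stand in for the tunables:

    /* Probability (0.0 - 1.0) of backing off cwnd at a given queuing delay. */
    static double
    backoff_probability(double qdly, double qmin, double qth, double qmax,
        double pmax)
    {
        if (qdly <= qmin || qdly >= qmax)
            return (0.0);       /* negligible delay, or delay pushed high
                                   by a loss-based competitor: don't back off */
        if (qdly <= qth)        /* rising edge up to the set threshold */
            return (pmax * (qdly - qmin) / (qth - qmin));
        return (pmax * (qmax - qdly) / (qmax - qth));   /* falling edge */
    }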
In collaboration with: David Hayes <dahayes at swin edu au> and
Grenville Armitage <garmitage at swin edu au>
Sponsored by: FreeBSD Foundation
Reviewed by: bz and others along the way
MFC after: 3 months