sys/contrib/rdma/rdma_cma.c:1259:8: error: case value not in enumerated type 'enum iw_cm_event_status' [-Werror,-Wswitch]
case ECONNRESET:
^
@/sys/errno.h:118:20: note: expanded from macro 'ECONNRESET'
#define ECONNRESET 54 /* Connection reset by peer */
^
sys/contrib/rdma/rdma_cma.c:1263:8: error: case value not in enumerated type 'enum iw_cm_event_status' [-Werror,-Wswitch]
case ETIMEDOUT:
^
@/sys/errno.h:124:19: note: expanded from macro 'ETIMEDOUT'
#define ETIMEDOUT 60 /* Operation timed out */
^
sys/contrib/rdma/rdma_cma.c:1260:8: error: case value not in enumerated type 'enum iw_cm_event_status' [-Werror,-Wswitch]
case ECONNREFUSED:
^
@/sys/errno.h:125:22: note: expanded from macro 'ECONNREFUSED'
#define ECONNREFUSED 61 /* Connection refused */
^
This is because the switch uses iw_cm_event::status, which is an enum
iw_cm_event_status, while ECONNRESET, ETIMEDOUT and ECONNREFUSED are
just plain defines from errno.h.
It looks like there is only one use of any of the enumeration values of
iw_cm_event_status, in:
sys/contrib/rdma/rdma_iwcm.c: if (iw_event->status == IW_CM_EVENT_STATUS_ACCEPTED) {
So messing around with the enum definitions to fix the warning seems too
disruptive; the simplest fix is to cast the argument of the switch to
int.
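Roughly, the idea looks like this (a minimal sketch; the enum and the surrounding
code are invented stand-ins, not the actual rdma_cma.c source):

#include <errno.h>

/* Simplified stand-in for the iWARP status enum. */
enum iw_cm_event_status { IW_CM_EVENT_STATUS_OK = 0 };

static const char *
describe_status(enum iw_cm_event_status status)
{
    /*
     * Cast to int so -Wswitch does not complain about case values
     * (plain errno #defines) that are not members of the enum.
     */
    switch ((int)status) {
    case IW_CM_EVENT_STATUS_OK:
        return ("ok");
    case ECONNRESET:
        return ("connection reset by peer");
    case ETIMEDOUT:
        return ("operation timed out");
    case ECONNREFUSED:
        return ("connection refused");
    default:
        return ("unknown");
    }
}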
Reviewed by: kmacy
MFC after: 1 week
revision 1.173
date: 2011/11/09 12:36:03; author: camield; state: Exp; lines: +11 -12
State expire time is a baseline time ("last active") for expiry
calculations, and does _not_ denote the time when to expire. So
it should never be added to (set into the future).
Try to reconstruct it with an educated guess on state import and
just set it to the current time on state updates.
This fixes a problem on pfsync listeners where the expiry time
could be double the expected value and cause a lot more states
to linger.
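The reconstruction guess amounts to something like this (a loose sketch with
invented names, not the actual pf import code): the peer exports how many
seconds the state still has to live, while the local field has to hold the
"last active" baseline.

#include <stdint.h>
#include <time.h>

static time_t
guess_last_active(time_t now, uint32_t secs_left, uint32_t timeout)
{
    if (secs_left > timeout)        /* clamp bogus input */
        secs_left = timeout;
    /* "last active" = now minus how long the state has already aged */
    return (now - (time_t)(timeout - secs_left));
}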
forwarding a packet that creates state until the
pfsync(4) peer acks the state addition (or a 10 msec
timeout passes).
This is needed for active-active CARP configurations,
which are poorly supported in FreeBSD and arguably
not a good idea at all.
Unfortunately, at the time of the import this feature was
turned on in OpenBSD and did not have a switch to
turn it off, so it leaked into FreeBSD.
This change makes it possible to turn this feature
off via ioctl() and turns it off by default.
Obtained from: OpenBSD
Revert r233555 and apply a fix for the reference counting regressions.
Tested by: andreast, lme, nwhitehorn,
Sevan / Venture37 (venture37 at gmail dot com)
Submitted by: Robert Moore (robert dot moore at intel dot com)
Temporarily revert an upstream commit. This change caused regressions for
too many laptop users. In particular, automatic repair for broken _BIF caused
strange reference counting issues and kernel panics. This reverts:
c995fed15a
make use of it where possible.
This primarily brings in support for newer hardware, and FreeBSD is not yet
able to support the abundance of IRQs on new hardware and many features in the
Ethernet driver.
Because of the changes to IRQs in the Simple Executive, we have to maintain our
own list of Octeon IRQs now, which probably can be pared down and made specific
to the CIU interrupt unit soon, and when other interrupt mechanisms are added
they can maintain their own definitions.
Remove unmasking of interrupts from within the UART device now that the
function used is no longer present in the Simple Executive. The unmasking
seems to have been gratuitous as this is more properly handled by the buses
above the UART device, and seems to work on that basis.
revision 1.146
date: 2010/05/12 08:11:11; author: claudio; state: Exp; lines: +2 -3
bzero() the full compressed update struct before setting the values.
This is needed because pf_state_peer_hton() skips some fields in certain
situations, which could result in garbage being sent to the other peer.
This seems to fix the pfsync storms seen by stephan@ and so dlg owes me
a whiskey.
I didn't see any storms, but this definitely fixes a useless memory
allocation on the receiving side, due to a non-zero scrub_flags field
in a pfsync_state_peer structure.
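The pattern being enforced is simply "zero everything, then fill in what you
know", e.g. (with an invented struct standing in for the real compressed
update):

#include <string.h>

struct peer_update {                /* invented stand-in */
    unsigned int  seqlo;
    unsigned int  seqhi;
    unsigned char state;
    unsigned char scrub_flags;      /* may be skipped by the hton helper */
};

static void
prepare_update(struct peer_update *up)
{
    /*
     * Zero the whole struct first, so any field the conversion helper
     * later skips goes out on the wire as zero, not as stack garbage.
     */
    memset(up, 0, sizeof(*up));
    up->seqlo = 1000;               /* illustrative values only */
    up->seqhi = 2000;
    up->state = 4;
}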
Try to make the "rtable" handling work, but the current version of
pf(4) does not fully support it yet, as callers of
PF_MISMATCHAW() in particular are not fully FIB-aware. OpenBSD seems to have
fixed this in a later version. Prepare as much as possible.
Sponsored by: Cisco Systems, Inc.
M_NOWAIT. Currently, the code allows for sleeping in the ioctl path
to guarantee allocation. However, the code also handles ENOMEM gracefully, so
propagate this error back to user-space, rather than sleeping while
holding the global pf mutex.
Reviewed by: glebius
Discussed with: bz
- Define schednetisr() to swi_sched.
- In the swi handler check if there is some data prepared,
and if true, then call pfsync_sendout(), however tell it
not to schedule swi again.
- Since now we don't obtain the pfsync lock in the swi handler,
don't use ifqueue mutex to synchronize queue access.
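The resulting control flow is roughly the following (a sketch with stand-in
names and a stubbed-out sender, not the real pfsync symbols):

struct pfsync_softc {
    unsigned long sc_len;           /* bytes of queued state updates */
};

static void
pfsync_sendout(int schedswi)
{
    (void)schedswi;                 /* stub: real code builds and sends an mbuf */
}

static void
pfsync_swi_handler(void *arg)
{
    struct pfsync_softc *sc = arg;

    /* Only transmit if some data has been prepared since the last run, */
    if (sc->sc_len > 0)
        pfsync_sendout(0);          /* and tell it not to re-arm the swi. */
}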
revision 1.128
date: 2009/08/16 13:01:57; author: jsg; state: Exp; lines: +1 -5
remove prototypes of a bunch of functions that had their implementations
removed in pfsync v5.
o Make the pfsync.ko actually usable. Before this change loading it
didn't register protosw, so it was a nop. However, a module in /boot/kernel
did confuse users.
o Rewrite the way we are joining multicast group:
- Move multicast initialization/destruction to separate functions.
- Don't allocate memory if we aren't going to join a multicast group.
- Use modern API for joining/leaving multicast group.
- Now the utterly wrong pfsync_ifdetach() isn't needed.
o Move module initialization from SYSINIT(9) to moduledata_t method.
o Refuse to unload module, unless asked forcibly.
o Improve a bit some FreeBSD porting code:
- Use separate malloc type.
- Simplify swi scheduling.
This change is probably wrong from a VIMAGE viewpoint; however, pfsync
wasn't VIMAGE-correct before this change either.
Glanced at by: bz
destroyed prior to pfsync_uninit(). To do this, move all the
initialization to the module_t method, instead of SYSINIT(9).
o Fix another panic after module unload, due to not clearing the
m_addr_chg_pf_p pointer.
o Refuse to unload module, unless being unloaded forcibly.
o Revert the sub argument of DECLARE_MODULE() to the stable/8 value.
This change probably isn't correct from the viewpoint of VIMAGE, but
the module wasn't VIMAGE-correct before the change either.
Glanced at by: bz
revision 1.170
date: 2011/10/30 23:04:38; author: mikeb; state: Exp; lines: +6 -7
Allow setting big MTU values on the pfsync interface but not larger
than the syncdev MTU. Prompted by the discussion with and tested
by Maxim Bourmistrov; ok dlg, mpf
Consistently use sc_ifp->if_mtu in the MTU check throughout the
module. This backs out r228813.
value used in sys/ofed/include/linux/netdevice.h), so there will be no
buffer overruns in the rest of the inline functions in this file.
Reviewed by: kmacy
MFC after: 1 week
revision 1.122
date: 2009/05/13 01:01:34; author: dlg; state: Exp; lines: +6 -4
only keep track of the number of updates on tcp connections. state sync on
all the other protocols is simply pushing the timeouts along which has a
resolution of 1 second, so it isn't going to be hurt by pfsync taking up
to a second to send it over.
keep track of updates on tcp still though, their windows need constant
attention.
revision 1.120
date: 2009/04/04 13:09:29; author: dlg; state: Exp; lines: +5 -5
use time_uptime instead of time_second internally. time_uptime isn't
affected by adjusting the clock.
revision 1.175
date: 2011/11/25 12:52:10; author: dlg; state: Exp; lines: +3 -3
use time_uptime to set state creation values as time_second can be
skewed at runtime by things like date(1) and ntpd. time_uptime is
monotonic and therefore more useful to compare against.
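A user-space analogue of the same point: CLOCK_MONOTONIC behaves like the
kernel's time_uptime and keeps advancing even when the wall clock (like
time_second) is stepped by date(1) or ntpd.

#include <time.h>

static time_t
state_age(time_t created_monotonic)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (ts.tv_sec - created_monotonic);     /* immune to clock steps */
}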
revision 1.118
date: 2009/03/23 06:19:59; author: dlg; state: Exp; lines: +8 -6
wait an appropriate amount of time before giving up on a bulk update,
rather than giving up after a hardcoded 5 seconds (which is generally much
too short an interval for a bulk update).
pointed out by david@, eyeballed by mcbride@
revision 1.171
date: 2011/10/31 22:02:52; author: mikeb; state: Exp; lines: +2 -1
Don't forget to cancel bulk update failure timeout when destroying an
interface. Problem report and fix from Erik Lax, thanks!
Start a brief note of revisions merged from OpenBSD.
7.x, 8.x and 9.x with pf(4) imports: pfsync(4) should suppress CARP
preemption while it is running its bulk update.
However, reimplement the feature in more elegant manner, that is
partially inspired by newer OpenBSD:
- Rename term "suppression" to "demotion", to match with OpenBSD.
- Keep a global demotion factor, that can be raised by several
conditions, for now these are:
- interface goes down
- carp(4) has problems with ip_output() or ip6_output()
- pfsync performs bulk update
- Unlike in OpenBSD, the demotion factor isn't a counter, but
an actual value added to advskew. The adjustment values for
particular error conditions are also configurable, and their
defaults are maximum advskew value, so a single failure bumps
demotion to maximum. This is for POLA compatibility, and should
satisfy most users.
- The demotion factor is a writable sysctl, so a user can shoot
themselves in the foot, if they so desire.
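The arithmetic behind the scheme is simple and only illustrated here (the
names and the clamp constant are written out for the example, not taken from
the source): the demotion factor is added to the configured advskew and
clamped, so a single raised condition at the default adjustment value demotes
the node fully.

#define MAX_ADVSKEW 240             /* advskew is limited to 0..240 */

static int
effective_advskew(int configured_advskew, int demotion_factor)
{
    int skew = configured_advskew + demotion_factor;

    return (skew > MAX_ADVSKEW ? MAX_ADVSKEW : skew);
}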
of scheduling the next run of pfsync_bulk_update(), pfsync_bulk_fail()
was scheduled.
This led to an instant 100% state leak after the first bulk update
request.
- After the above fix, it appeared that pfsync_bulk_update() lacks
locking. To fix this, sc_bulk_tmo callout was converted to an
mtx one. Eventually, all pf/pfsync callouts should be converted
to mtx version, since it isn't possible to stop or drain a
non-mtx callout without risk of race.
- Add comment that callout_stop() in pfsync_clone_destroy() lacks
locking. Since pfsync0 can't be destroyed (yet), let it be here.
The root of the problem is re-locking at the end of pfsync_sendout().
Several functions call pfsync_sendout() while holding pointers
to pf data on the stack, and these functions expect this data to be
consistent.
To fix this, the following approach was taken:
- pfsync_sendout() doesn't call ip_output() directly, but
enqueues the mbuf on sc->sc_ifp's interface queue, which
is currently unused. Then the pfsync netisr is scheduled. PF_LOCK
isn't dropped in pfsync_sendout().
- The netisr runs through the queue and ip_output()s the packets
on it.
Apart from fixing the race, this also decouples the stack, fixing
potential issues that may happen when sending pfsync(4)
packets on the input path.
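In outline the flow looks like this (a simplified sketch: packet_queue_put(),
packet_queue_get() and schedule_pfsync_swi() are invented placeholders for the
ifqueue and netisr machinery; only the ip_output() call is the real interface):

static void
pfsync_sendout_sketch(struct pfsync_softc *sc, struct mbuf *m)
{
    /*
     * Runs with PF_LOCK held: only enqueue the finished packet and
     * kick the netisr; ip_output() is never called from here, so the
     * lock is never dropped and re-taken behind the callers' backs.
     */
    packet_queue_put(&sc->sc_sendqueue, m);
    schedule_pfsync_swi();
}

static void
pfsync_netisr_sketch(struct pfsync_softc *sc)
{
    struct mbuf *m;

    /* Runs without PF_LOCK: drain the queue and transmit. */
    while ((m = packet_queue_get(&sc->sc_sendqueue)) != NULL)
        ip_output(m, NULL, NULL, 0, NULL, NULL);
}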
Reviewed by: eri (a quick review)
number of packets can be queued on sc while we are in ip_output(), and then
we wipe the accumulated sc_len. On the next pfsync_sendout() that would lead to
writing beyond our mbuf cluster.
to document where we are expecting to be called with a lock held to
more easily catch unnoticed code paths.
This does not necessarily improve locking in pfsync; it just tries
to avoid the panics reported.
PR: kern/159390, kern/158873
Submitted by: pluknet (at least something that partly resembles
my patch ignoring other cleanup, which I only saw
too late on the 2nd PR)
MFC after: 3 days
and virtualization it is not helpful but complicates things.
The current state of the art is to not virtualize these kinds of locks -
inp_group/hash/info/.. are all not virtualized either.
MFC after: 3 days
pfsync also depends on pf being initialized already, so pf goes at
FIRST and the interfaces go at ANY.
Then the (VNET_)SYSINIT startups for pf stays at SI_SUB_PROTO_BEGIN
and for pfsync we move to the later SI_SUB_PROTO_IF.
This is not ideal either, but it is at least an order that should work for
the moment and can be refined with the VIMAGE merge, once this
actually works with more than one network stack.
MFC after: 3 days
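The resulting ordering, illustrated (the init functions are placeholders and
the exact SI_ORDER values for pfsync are an assumption; the two subsystem
constants are the ones named above):

#include <sys/param.h>
#include <sys/kernel.h>

static void pf_startup(void *dummy)     { (void)dummy; /* pf setup */ }
static void pfsync_startup(void *dummy) { (void)dummy; /* pfsync setup */ }

/* pf must be fully set up before interfaces (and thus before pfsync)... */
SYSINIT(pf_example, SI_SUB_PROTO_BEGIN, SI_ORDER_FIRST, pf_startup, NULL);
/* ...so pfsync attaches later, at the interface stage. */
SYSINIT(pfsync_example, SI_SUB_PROTO_IF, SI_ORDER_ANY, pfsync_startup, NULL);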
and never remove state.
This fixes the problem some people are seeing where state is removed when pf
is loaded as a module, but not when it is compiled into the kernel.
Reported by: many on freebsd-pf
Tested by: flo
MFC after: 3 days
hash install, etc. For now, these arguments are unused, but as we add
RSS support, we will want to use hashes extracted from mbufs, rather than
manually calculated hashes of header fields, due to the expense of the
software version of Toeplitz (and similar hashes).
Add notes that it would be nice to be able to pass mbufs into lookup
routines in pf(4), optimising firewall lookup in the same way, but the
code structure there doesn't facilitate that currently.
(In principle there is no reason this couldn't be MFCed -- the change
extends rather than modifies the KBI. However, it won't be useful without
other previous possibly less MFCable changes.)
Reviewed by: bz
Sponsored by: Juniper Networks, Inc.
to not only compile but load as well for testing with IPv6-only kernels.
For the moment we ignore the csum change in pf_ioctl.c given the
pending update to pf45.
Reported by: dru
Sponsored by: The FreeBSD Foundation
Sponsored by: iXsystems
MFC after: 20 days
- The existing ipi_lock continues to protect the global inpcb list and
inpcb counter. This lock is now relegated to a small number of
allocation and free operations, and occasional operations that walk
all connections (including, awkwardly, certain UDP multicast receive
operations -- something to revisit).
- A new ipi_hash_lock protects the two inpcbinfo hash tables for
looking up connections and bound sockets, manipulated using new
INP_HASH_*() macros. This lock, combined with inpcb locks, protects
the 4-tuple address space.
Unlike the current ipi_lock, ipi_hash_lock follows the individual inpcb
connection locks, so may be acquired while manipulating a connection on
which a lock is already held, avoiding the need to acquire the inpcbinfo
lock preemptively when a binding change might later be required. As a
result, however, lookup operations necessarily go through a reference
acquire while holding the lookup lock, later acquiring an inpcb lock --
if required.
A new function in_pcblookup() looks up connections, and accepts flags
indicating how to return the inpcb. Due to lock order changes, callers
no longer need acquire locks before performing a lookup: the lookup
routine will acquire the ipi_hash_lock as needed. In the future, it will
also be able to use alternative lookup and locking strategies
transparently to callers, such as pcbgroup lookup. New lookup flags are,
supplementing the existing INPLOOKUP_WILDCARD flag:
INPLOOKUP_RLOCKPCB - Acquire a read lock on the returned inpcb
INPLOOKUP_WLOCKPCB - Acquire a write lock on the returned inpcb
Callers must pass exactly one of these flags (for the time being).
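Caller usage then looks roughly like this (a hedged sketch based on the
9.x-era prototype; check in_pcb.h in your tree for the exact signature, and
note that addresses and ports are in network byte order):

#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/in_pcb.h>

static int
connection_exists(struct inpcbinfo *pcbinfo, struct in_addr faddr,
    u_short fport, struct in_addr laddr, u_short lport, struct ifnet *ifp)
{
    struct inpcb *inp;

    /*
     * No pcbinfo lock is taken by the caller; the lookup returns the
     * inpcb read-locked (exactly one of RLOCKPCB/WLOCKPCB is required).
     */
    inp = in_pcblookup(pcbinfo, faddr, fport, laddr, lport,
        INPLOOKUP_WILDCARD | INPLOOKUP_RLOCKPCB, ifp);
    if (inp == NULL)
        return (0);
    /* ... inspect the read-locked connection here ... */
    INP_RUNLOCK(inp);
    return (1);
}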
Some notes:
- All protocols are updated to work within the new regime; especially,
TCP, UDPv4, and UDPv6. pcbinfo ipi_lock acquisitions are largely
eliminated, and global hash lock hold times are dramatically reduced
compared to previous locking.
- The TCP syncache still relies on the pcbinfo lock, something that we
may want to revisit.
- Support for reverting to the FreeBSD 7.x locking strategy in TCP input
is no longer available -- hash lookup locks are now held only very
briefly during inpcb lookup, rather than for potentially extended
periods. However, the pcbinfo ipi_lock will still be acquired if a
connection state might change such that a connection is added or
removed.
- Raw IP sockets continue to use the pcbinfo ipi_lock for protection,
due to maintaining their own hash tables.
- The interface in6_pcblookup_hash_locked() is maintained, which allows
callers to acquire hash locks and perform one or more lookups atomically
with 4-tuple allocation: this is required only for TCPv6, as there is no
in6_pcbconnect_setup(), which there should be.
- UDPv6 locking remains significantly more conservative than UDPv4
locking, which relates to source address selection. This needs
attention, as it likely significantly reduces parallelism in this code
for multithreaded socket use (such as in BIND).
- In the UDPv4 and UDPv6 multicast cases, we need to revisit locking
somewhat, as they relied on ipi_lock to stabilise 4-tuple matches, which
is no longer sufficient. A second check once the inpcb lock is held
should do the trick, keeping the general case from requiring the inpcb
lock for every inpcb visited.
- This work reminds us that we need to revisit locking of the v4/v6 flags,
which may be accessed lock-free both before and after this change.
- Right now, a single lock name is used for the pcbhash lock -- this is
undesirable, and probably another argument is required to take care of
this (or a char array name field in the pcbinfo?).
This is not an MFC candidate for 8.x due to its impact on lookup and
locking semantics. It's possible some of these issues could be worked
around with compatibility wrappers, if necessary.
Reviewed by: bz
Sponsored by: Juniper Networks, Inc.
safer for i386 because it can easily be over 4 GHz now. Even worse, it can
be easily changed by the user with the 'machdep.tsc_freq' tunable (directly) or
cpufreq(4) (indirectly). Note it is intentionally not used in performance
critical paths to avoid performance regression (but we should, in theory).
Alternatively, we may add "virtual TSC" with lower frequency if maximum
frequency overflows 32 bits (and ignore possible incoherency as we do now).
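For reference, the 32-bit limit in question: an unsigned 32-bit counter holds
at most 2^32 - 1 = 4,294,967,295, i.e. roughly 4.29 GHz, so any TSC frequency
much above 4 GHz no longer fits and would need the lower-frequency "virtual
TSC" mentioned above.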