take a timeval indicating when the packet was captured. Move
microtime() to the calling functions and grab the timestamp as soon
as we know that we're going to call catchpacket() at least once.
This means that we call microtime() once per matched packet, as
opposed to once per matched packet per bpf listener. It also means
that we return the same timestamp to all bpf listeners, rather than
slightly different ones.
It would be more accurate to call microtime() even earlier for all
packets, as you have to grab (1+#listener) locks before you can
determine if the packet will be logged. You could always grab a
timestamp before the locks, but microtime() can be costly, so this
didn't seem like a good idea.
(I guess most ethernet interfaces will have a bpf listener these
days because of dhclient. That means that we could be doing two bpf
locks on most packets going through the interface.)
PR: 71711
function, pru_close, to notify protocols that the file descriptor or
other consumer of a socket is closing the socket. pru_abort is now a
notification of close also, and no longer detaches. pru_detach is no
longer used to notify of close, and will be called during socket
tear-down by sofree() when all references to a socket evaporate after
an earlier call to abort or close the socket. This means detach is now
an unconditional teardown of a socket, whereas previously sockets could
persist after detach if the protocol retained a reference.
This facilitates sharing mutexes between layers of the network stack, as
the mutex is required during the checking and removal of references at
the head of sofree(). With this change, pru_detach can now assume that
the mutex will no longer be required by the socket layer after
completion, whereas before this was not necessarily true.
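A rough sketch of the division of labour after this change (fields
abridged; the hooks live in struct pr_usrreqs):

    struct pr_usrreqs {
            ...
            void    (*pru_abort)(struct socket *so);  /* notify; no detach */
            void    (*pru_close)(struct socket *so);  /* fd/consumer closing */
            void    (*pru_detach)(struct socket *so); /* final teardown, called
                                                         from sofree() only */
            ...
    };

So a close() notifies the protocol via pru_close, and pru_detach runs
exactly once, from sofree(), when the last reference goes away.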
Reviewed by: gnn
parameter that can specify configuration parameters:
o rev cloner APIs to add optional parameter block
o add SIOCCREATE2 that accepts parameter data (see the sketch below)
o rev vlan support to use new api (maintain old code)
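A hypothetical userland sketch of the new creation path, assuming the
ioctl is spelled SIOCIFCREATE2 in sys/sockio.h and that vlan(4) accepts
a struct vlanreq through ifr_data:

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <sys/sockio.h>
    #include <net/if.h>
    #include <net/if_vlan_var.h>
    #include <string.h>

    int
    create_vlan(int s, const char *parent, int tag)
    {
            struct ifreq ifr;
            struct vlanreq vreq;

            memset(&ifr, 0, sizeof(ifr));
            memset(&vreq, 0, sizeof(vreq));
            strlcpy(ifr.ifr_name, "vlan", sizeof(ifr.ifr_name));
            strlcpy(vreq.vlr_parent, parent, sizeof(vreq.vlr_parent));
            vreq.vlr_tag = tag;
            ifr.ifr_data = (caddr_t)&vreq;

            return (ioctl(s, SIOCIFCREATE2, &ifr));
    }

With the old create ioctl the interface had to be created first and then
configured with a second ioctl; the parameter block removes that window.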
Reviewed by: arch@
already locked. The reason to do this is to avoid two lock+unlock operations
in a row. We need the lock here to serialize access to bd_pid for stats
collection purposes.
Drop the locks altogether on detach, as they will be picked up by
knlist_remove.
This should fix a failed locking assertion when kqueue is being used with bpf
descriptors.
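A generic sketch of the *_locked convention used here (hypothetical
function names, assuming the BPFD_LOCK* macros from net/bpfdesc.h):

    static void
    bpf_update_pid_locked(struct bpf_d *d)
    {
            BPFD_LOCK_ASSERT(d);    /* caller already holds the lock */
            d->bd_pid = curthread->td_proc->p_pid; /* serialized by it */
    }

    static void
    bpf_update_pid(struct bpf_d *d)
    {
            BPFD_LOCK(d);
            bpf_update_pid_locked(d);
            BPFD_UNLOCK(d);
    }

A caller that already holds the descriptor lock calls the _locked variant
directly instead of unlocking and immediately relocking.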
Discussed with: jmg
except in places dealing with ifaddr creation or destruction; and
in such special places incomplete ifaddrs should never be linked
to system-wide data structures. Therefore we can eliminate all the
superfluous checks for "ifa->ifa_addr != NULL" and be ready for
the system to crash honestly instead of masking possible bugs.
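An illustration of the cleanup (handle() is a hypothetical consumer):

    /* Before: the NULL check could silently mask a broken ifaddr. */
    if (ifa->ifa_addr != NULL && ifa->ifa_addr->sa_family == AF_INET)
            handle(ifa);

    /* After: ifa_addr is an invariant once the ifaddr is linked into
     * system-wide structures, so dereference it directly; a NULL here
     * is a bug and should crash loudly. */
    if (ifa->ifa_addr->sa_family == AF_INET)
            handle(ifa);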
Suggested by: glebius, jhb, ru
Previously, another thread could get a pointer to the
interface by scanning the system-wide list and sleep
on the global vlan mutex held by vlan_unconfig().
The interface was gone by the time the other thread
woke up.
In order to be able to call vlan_unconfig() on a detached
interface, remove the purely cosmetic bzero'ing of IF_LLADDR
from the function because a detached interface has no addresses.
Noticed by: a stress-testing script by maxim
Reviewed by: glebius
tested and then set. [1]
Reorganise things to eliminate this: we now ensure that enc0 cannot be
destroyed, which has the benefit of no longer needing to lock in
ipsec_filter and ipsec_bpf. The cloner will create one interface during
init so we can guarantee that encif will be valid before any SPD entries
are added to ipsec.
Spotted by: glebius [1]
encryption. There are two functions: a bpf tap, which uses a basic header
with the SPI number that our current tcpdump knows how to display, and a
handoff to pfil(9) for packet filtering.
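The header prepended for the bpf tap looks roughly like this (layout
derived from OpenBSD's net/if_enc.h; treat the field set as indicative):

    struct enchdr {
            u_int32_t af;           /* address family of the inner packet */
            u_int32_t spi;          /* SPI of the SA that handled it */
            u_int32_t flags;        /* e.g. authentication/confidentiality */
    };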
Obtained from: OpenBSD
Based on: kern/94829
No objections: arch, net
MFC after: 1 month
departing trunk so that we don't get into trouble later
by dereferencing a stale pointer to the dead trunk's things.
Prodded by: oleg
Sponsored by: RiNet (Cronyx Plus LLC)
MFC after: 1 week
list.
- First remove from the global list, then start destroying.
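A sketch of the ordering (hypothetical names; teardown() stands in for
the driver's actual cleanup):

    mtx_lock(&sc_list_mtx);
    LIST_REMOVE(sc, sc_next);       /* 1. unlink from the global list */
    mtx_unlock(&sc_list_mtx);

    teardown(sc);                   /* 2. only now start destroying */
    free(sc, M_DEVBUF);

Once the softc is unlinked, no other thread can find it through the list
and race against the teardown.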
PR: kern/97679
Submitted by: Alex Lyashkov <shadow itt.net.ru>
Reviewed by: rwatson, brooks
order to, for example, apply firewall rules to a whole group of
interfaces. This is required for importing pf from OpenBSD 3.9.
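Kernel-side sketch of the OpenBSD-derived API (error handling omitted):

    error = if_addgroup(ifp, "ipsec-tunnels");  /* join a named group */
    ...
    error = if_delgroup(ifp, "ipsec-tunnels");  /* leave it again */

pf can then match the group name wherever an interface name is accepted.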
Obtained from: OpenBSD (with changes)
Discussed on: -net (back in April)
pointer to a zeroed, statically allocated bpf_if structure. This way the
LIST_EMPTY() macro will always return true. This allows us to remove the
additional unconditional memory reference for each packet in the fast path.
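Sketch of the trick ("dead_bpf_if" is an illustrative name):

    static struct bpf_if dead_bpf_if;   /* all zero: bif_dlist is empty */
    ...
    ifp->if_bpf = &dead_bpf_if;         /* interface without bpf state */

Since the static structure is zeroed, LIST_EMPTY(&ifp->if_bpf->bif_dlist)
is always true for it, and the fast path needs no NULL check.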
Discussed with: sam
255.255.255.0, and a default route with gateway x.x.x.1. Now if
the address mask is changed to something more specific, e.g.,
255.255.255.128, then after the mask change the default gateway
is no longer reachable.
Since the default route is still present in the routing table,
when the output code tries to resolve the address of the default
gateway in function rt_check(), again, the default route will be
returned by rtalloc1(). Because the lock is currently held on the
rtentry structure, one more attempt to hold the lock will trigger
a crash due to "lock recursed on non-recursive mutex ..."
This is a general problem. The fix checks for the above condition
so that an existing route entry is not mistaken for a new cloned
route. Appropriately, an ENETUNREACH error is returned to the
caller.
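Simplified sketch of the check (variable names illustrative): rtalloc1()
returns its result locked, so if it hands back the very entry we started
from, it is not a freshly cloned route and locking it again would
recurse; instead the condition is detected and reported:

    rt = rtalloc1(rt0->rt_gateway, 1, 0UL);
    if (rt == rt0) {
            /* Same entry, not a new cloned host route: the gateway
             * cannot be resolved any more. */
            RT_REMREF(rt0);         /* drop the extra reference */
            RT_UNLOCK(rt0);
            return (ENETUNREACH);
    }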
Approved by: andre
resulting in some build failures. Instead, to fix the problem of bpf not
being present, check the pointer before dereferencing it.
This is a temporary bandaid until we can decide on how we want to handle
the bpf code not being present. This will be fixed shortly.
(1) bpf peer attaches to interface netif0
(2) Packet is received by netif0
(3) ifp->if_bpf pointer is checked and handed off to bpf
(4) bpf peer detaches from netif0 resulting in ifp->if_bpf being
initialized to NULL.
(5) ifp->if_bpf is dereferenced by bpf machinery
(6) Kaboom
This race condition likely explains the various different kernel panics
reported around sending SIGINT to tcpdump or dhclient processes. But really
this race can result in kernel panics anywhere you have frequent bpf attach
and detach operations with high packet per second load.
Summary of changes:
- Remove the bpf interface's "driverp" member
- When we attach bpf interfaces, we now set the ifp->if_bpf member to the
bpf interface structure. Once this is done, ifp->if_bpf should never be
NULL. [1]
- Introduce the bpf_peers_present function, an inline operation which does
a lockless read of the bpf peer list associated with the interface (see the
sketch below). It should be noted that the bpf code will pick up the bpf
interface lock before adding or removing bpf peers. This serializes access
to the bpf descriptor list, removing the race.
- Expose the bpf_if structure in bpf.h so that the bpf_peers_present function
can use it. This also removes the struct bpf_if; hack that was there.
- Adjust all consumers of the raw if_bpf structure to use bpf_peers_present.
Now what happens is:
(1) Packet is received by netif0
(2) Check to see if bpf descriptor list is empty
(3) Pick up the bpf interface lock
(4) Hand packet off to process
From the attach/detach side:
(1) Pick up the bpf interface lock
(2) Add/remove from bpf descriptor list
Now that we are storing the bpf interface structure with the ifnet, there
is no need to walk the bpf interface list to locate the correct bpf interface.
We now simply look up the interface, and initialize the pointer. This has a
nice side effect of changing a bpf interface attach operation from O(N) (where
N is the number of bpf interfaces), to O(1).
[1] From now on, we can no longer check ifp->if_bpf to tell us whether or
not we have any bpf peers that might be interested in receiving packets.
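Roughly what the fast-path test becomes (simplified from the inline in
net/bpf.h):

    static __inline int
    bpf_peers_present(struct bpf_if *bpf)
    {

            if (!LIST_EMPTY(&bpf->bif_dlist))
                    return (1);
            return (0);
    }

    #define BPF_MTAP(_ifp, _m) do {                         \
            if (bpf_peers_present((_ifp)->if_bpf))          \
                    bpf_mtap((_ifp)->if_bpf, (_m));         \
    } while (0)

The emptiness test itself is lockless; attach and detach serialize all
modification of the descriptor list under the bpf interface lock.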
In collaboration with: sam@
MFC after: 1 month
result, raw_uabort() now needs to call raw_detach() directly. As
raw_uabort() is never called, and raw_disconnect() is probably not ever
actually called in practice, this is likely not a functional change, but
improves congruence between protocols, and avoids a NULL raw cb pointer
after disconnect, which could result in a panic.
MFC after: 1 month
notification so all interfaces including pseudo are reported. When netif
creates the clones at startup, devctl_disable has not been turned off yet, so
the interfaces will not be initialised twice; enforce this by adding an
explicit order between rc.d/netif and rc.d/devd.
This change allows actions to be taken in userland when an interface is
cloned, and the pseudo interface will be automatically configured if an
ifconfig_<int>="" line exists in rc.conf.
Reviewed by: brooks
No objections on: net
for IOCTLs where casting data to intptr_t * isn't the right thing to do,
as they are defined with _IOR(..., int)/_IOW(..., int) rather than _IO()
(i.e. all IOCTLs except VMIO_SIOCSIFFLAGS), fixing tap(4) on big-endian
LP64 machines.
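Illustration (hypothetical fragment; FIONBIO is _IOW('f', 126, int)):

    case FIONBIO:
            nonblock = *(int *)data;        /* correct */
            /* nonblock = *(intptr_t *)data;   reads 8 bytes on LP64 and
             * picks up the wrong half on a big-endian machine */
            break;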
PR: sparc64/98084
OK'ed by: emax
MFC after: 1 week