Commit Graph

316 Commits

Author SHA1 Message Date
Jung-uk Kim
a36599cce7 Always embed pointer to BPF JIT function in BPF descriptor
to avoid inconsistency when opt_bpf.h is not included.

Reviewed by:	rwatson
Approved by:	re (rwatson)
2009-08-12 17:28:53 +00:00
Robert Watson
530c006014 Merge the remainder of kern_vimage.c and vimage.h into vnet.c and
vnet.h; we now use jails (rather than vimages) as the abstraction
for virtualization management, and what remained was specific to
virtual network stacks.  Minor cleanups are done in the process,
and comments updated to reflect these changes.

Reviewed by:	bz
Approved by:	re (vimage blanket)
2009-08-01 19:26:27 +00:00
Christian S.J. Peron
0e37f3e196 Implement the -z (zero counters) option for the various bpf counters.
Add necessary changes to the kernel for this (basically introduce a
bpf_zero_counters() function).  As well, update the man page.

MFC after:	1 month
Discussed with:	rwatson
2009-06-19 20:31:44 +00:00
Bjoern A. Zeeb
ebd8672cc3 Add explicit includes for jail.h to the files that need them and
remove the "hidden" one from vimage.h.
2009-06-17 15:01:01 +00:00
Konstantin Belousov
d8b0556c6d Adapt vfs kqfilter to the shared vnode lock used by zfs write vop. Use
vnode interlock to protect the knote fields [1]. The locking assumes
that shared vnode lock is held, thus we get exclusive access to knote
either by exclusive vnode lock protection, or by shared vnode lock +
vnode interlock.

Do not use kl_locked() method to assert either lock ownership or the
fact that curthread does not own the lock. For shared locks, ownership
is not recorded, e.g. VOP_ISLOCKED can return LK_SHARED for the shared
lock not owned by curthread, causing false positives in kqueue subsystem
assertions about knlist lock.

Remove kl_locked method from knlist lock vector, and add two separate
assertion methods kl_assert_locked and kl_assert_unlocked, that are
supposed to use proper asserts. Change knlist_init accordingly.

Add convenience function knlist_init_mtx to reduce number of arguments
for typical knlist initialization.

Submitted by:	jhb [1]
Noted by:	jhb [2]
Reviewed by:	jhb
Tested by:	rnoland
2009-06-10 20:59:32 +00:00
Robert Watson
bcf11e8d00 Move "options MAC" from opt_mac.h to opt_global.h, as it's now in GENERIC
and used in a large number of files, but also because an increasing number
of incorrect uses of MAC calls were sneaking in due to copy-and-paste of
MAC-aware code without the associated opt_mac.h include.

Discussed with:	pjd
2009-06-05 14:55:22 +00:00
Sam Leffler
5ce8d9708c rev bpf attach/detach event api to include the dlt 2009-05-25 16:34:35 +00:00
Sam Leffler
b743c31009 add bpf_track eventhandler for monitoring bpf taps attached/detached
Reviewed by:	csjp
2009-05-18 17:18:40 +00:00
Marko Zec
21ca7b57bd Change the curvnet variable from a global const struct vnet *,
previously always pointing to the default vnet context, to a
dynamically changing thread-local one.  The curvnet context
should be set on entry to networking code via CURVNET_SET() macros,
and reverted to previous state via CURVNET_RESTORE().  Recursions
on curvnet are permitted, though strongly discouraged.

This change should have no functional impact on nooptions VIMAGE
kernel builds, where CURVNET_* macros expand to whitespace.

The curthread->td_vnet (aka curvnet) variable's purpose is to be an
indicator of the vnet context in which the current network-related
operation takes place, in case we cannot deduce the current vnet
context from any other source, such as by looking at mbuf's
m->m_pkthdr.rcvif->if_vnet, a socket's so->so_vnet, etc.  Moreover, so
far curvnet has turned out to be an invaluable consistency checking
aid: it helps to catch cases when sockets, ifnets or any other
vnet-aware structures may have leaked from one vnet to another.

The exact placement of the CURVNET_SET() / CURVNET_RESTORE() macros
was a result of an empirical iterative process, with an aim to
reduce recursions on CURVNET_SET() to a minimum, while still reducing
the scope of CURVNET_SET() to networking only operations - the
alternative would be calling CURVNET_SET() on each system call entry.
In general, curvnet has to be set in three typical cases: when
processing socket-related requests from userspace or from within the
kernel; when processing inbound traffic flowing from device drivers
to upper layers of the networking stack, and when executing
timer-driven networking functions.

This change also introduces a DDB subcommand to show the list of all
vnet instances.

Approved by:	julian (mentor)
2009-05-05 10:56:12 +00:00
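
As an illustration of the pattern this commit describes (not code from the commit): a hypothetical timer handler that enters and leaves a vnet context.  Only CURVNET_SET()/CURVNET_RESTORE() and if_vnet come from the change; the handler, names, and header locations (shown as of the later vnet.h merge) are assumptions.

    #include <sys/param.h>
    #include <net/if.h>
    #include <net/if_var.h>
    #include <net/vnet.h>

    static void
    example_timeout(void *arg)
    {
        struct ifnet *ifp = arg;        /* hypothetical callout argument */

        CURVNET_SET(ifp->if_vnet);      /* enter this interface's vnet */
        /* ... timer-driven networking work using virtualized globals ... */
        CURVNET_RESTORE();              /* revert to the previous curvnet */
    }
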
Christian S.J. Peron
ffeeb9241a Disable zerocopy by default for now. It's causing some problems in pcap
consumers which fork after the shared pages have been set up.  pflogd(8)
is an example.  The problem is understood and there is a fix coming in
shortly.

Folks who want to continue using it can do so by setting

net.bpf.zerocopy_enable

to 1.

Discussed with:	rwatson
2009-03-10 14:28:19 +00:00
Robert Watson
e82669d99b When resetting a BPF descriptor, properly check that zero-copy buffers
are not currently owned by userspace before clearing or rotating them.

Otherwise we may not play by the rules of the shared memory protocol,
potentially corrupting packet data or causing userspace applications
that are playing by the rules to spin due to being notified that a
buffer is complete but the shared memory header not reflecting that.

This behavior was seen with pflogd by a number of reporters; note that
this fix is not sufficient to get pflogd properly working with
zero-copy BPF, due to pflogd opening the BPF device before forking,
leading to the shared memory buffer not being properly inherited in the
privilege-separated child.  We're still deciding how to fix that
problem.

This change exposes buffer-model specific strategy information in
reset_d(), which will be fixed at a later date once we've decided how
best to improve the BPF buffer abstraction.

Reviewed by:	csjp
Reported by:	keramida
2009-03-07 22:17:44 +00:00
Christian S.J. Peron
927094113e Mark the bpf stats sysctl as being mpsafe. We do not require
Giant here.
2009-03-07 17:07:29 +00:00
Christian S.J. Peron
f32b457b6e Switch the default buffer mode in bpf(4) to zero-copy buffers.
Discussed with:	rwatson
2009-03-02 19:42:01 +00:00
Marko Zec
97021c2464 Merge more of currently non-functional (i.e. resolving to
whitespace) macros from p4/vimage branch.

Do a better job at enclosing all instantiations of globals
scheduled for virtualization in #ifdef VIMAGE_GLOBALS blocks.

De-virtualize and mark as const saorder_state_alive and
saorder_state_any arrays from ipsec code, given that they are never
updated at runtime, so virtualizing them would be pointless.

Reviewed by:  bz, julian
Approved by:  julian (mentor)
Obtained from:        //depot/projects/vimage-commit2/...
X-MFC after:  never
Sponsored by: NLnet Foundation, The FreeBSD Foundation
2008-11-26 22:32:07 +00:00
Dag-Erling Smørgrav
1ede983cc9 Retire the MALLOC and FREE macros. They are an abomination unto style(9).
MFC after:	3 months
2008-10-23 15:53:51 +00:00
Jung-uk Kim
12dc95824b Make bpf_maxinsns visible from ng_bpf.c.
Pass me the pointyhat, please.
2008-08-29 20:34:06 +00:00
Ed Schouten
136600fe59 Change bpf(4) to use the cdevpriv API.
Right now the bpf(4) driver uses the cloning API to generate /dev/bpf%u.
When an application such as tcpdump needs a BPF, it opens /dev/bpf0,
/dev/bpf1, etc. until it opens the first available device node. We used
this approach because our devfs implementation didn't allow
per-descriptor data.

Now that we can, make it use devfs_get_cdevpriv() to obtain the private
data. To remain compatible with the existing implementation, add a
symlink from /dev/bpf0 to /dev/bpf. I've already changed libpcap to
compile with HAVE_CLONING_BPF, which makes it use /dev/bpf. There may be
other applications in the base system (dhclient) that use the loop to
obtain a valid bpf.

Discussed on:	src-committers
Approved by:	csjp
2008-08-13 15:41:21 +00:00
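
A minimal sketch of the cdevpriv pattern this commit switches bpf(4) to; devfs_set_cdevpriv() and devfs_get_cdevpriv() are the API named above, while the driver, softc type, and handlers below are hypothetical.

    #include <sys/param.h>
    #include <sys/conf.h>
    #include <sys/malloc.h>
    #include <sys/uio.h>

    struct ex_softc {                   /* hypothetical per-open state */
        int ex_flags;
    };

    /* Destructor runs automatically when the descriptor is finally closed. */
    static void
    ex_dtor(void *data)
    {
        free(data, M_DEVBUF);
    }

    static int
    ex_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
    {
        struct ex_softc *sc;

        sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK | M_ZERO);
        /* tie the state to this open file, not to the shared cdev */
        return (devfs_set_cdevpriv(sc, ex_dtor));
    }

    static int
    ex_read(struct cdev *dev, struct uio *uio, int ioflag)
    {
        struct ex_softc *sc;
        int error;

        error = devfs_get_cdevpriv((void **)&sc);
        if (error != 0)
            return (error);
        /* ... per-descriptor work using sc ... */
        return (0);
    }
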
Christian S.J. Peron
a05cf8c6db Annotate why we do not call BPF_CHECK_DIRECTION() in this tapping routine.
There is no way for the caller to tell us which direction this packet is
going.  With the bpf_mtap{2} routines, we can check the interface pointer.

MFC after:	2 weeks
2008-08-01 21:38:46 +00:00
Jung-uk Kim
968c88bc75 Allow injecting big packets via bpf(4) up to min(MTU, 16K-byte).
MFC after:	1 week
2008-07-14 22:41:48 +00:00
David Malone
f11c35082b Add a new ioctl for changing the read filter (BIOCSETFNR). This is
just like BIOCSETF but it doesn't drop all the packets buffered on
the descriptor and reset the statistics.

Also, when setting the write filter, don't drop packets waiting to
be read or reset the statistics.

PR:		118486
Submitted by:	Matthew Luckie <mluckie@cs.waikato.ac.nz>
MFC after:	1 month
2008-07-07 09:25:49 +00:00
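
A minimal userland sketch of the new ioctl; only BIOCSETFNR and struct bpf_program are taken from bpf(4), the helper itself is illustrative.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>

    /* Install insns as the read filter, keeping buffered packets and stats. */
    static int
    set_read_filter_no_reset(int bpf_fd, struct bpf_insn *insns, u_int ninsns)
    {
        struct bpf_program prog;

        prog.bf_len = ninsns;
        prog.bf_insns = insns;
        return (ioctl(bpf_fd, BIOCSETFNR, &prog));
    }
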
Christian S.J. Peron
29f612ec71 Make sure we are clearing the ZBUF_FLAG_IMMUTABLE any time a free buffer
is reclaimed by the kernel.  This fixes a bug that resulted in the kernel
overwriting packet data while user-space was still processing it when
zerocopy is enabled (or a panic if INVARIANTS was enabled).

Discussed with:	rwatson
2008-07-05 20:11:28 +00:00
John Baldwin
7fb547c7f5 Set D_TRACKCLOSE to avoid a race in devfs that could lead to orphaned bpf
devices never getting fully closed.

MFC after:	3 days
2008-05-09 19:29:08 +00:00
Jung-uk Kim
f81a2a4956 Check packet directions more properly instead of just checking whether
the received interface is NULL.

PR:		kern/123138
Submitted by:	Dmitry (hanabana at mail dot ru)
MFC after:	1 week
2008-04-28 19:42:11 +00:00
Jung-uk Kim
8cd892f752 Revert the previous commit and use M_PROMISC flag instead.
It is safer because it will never be used for outgoing packets.
2008-04-15 17:08:24 +00:00
Jung-uk Kim
9a3a0f9278 Remove M_SKIP_FIREWALL abuse and add more appropriate check.
Pointyhat to:	jkim
Reported by:	Eugene Grosbein (eugen at kuzbass dot ru)
MFC after:	3 days
2008-04-15 00:50:01 +00:00
Robert Watson
a7a91e6592 Maintain and observe a ZBUF_FLAG_IMMUTABLE flag on zero-copy BPF
buffer kernel descriptors, which is used to allow the buffer
currently in the BPF "store" position to be assigned to userspace
when it fills, even if userspace hasn't acknowledged the buffer
in the "hold" position yet.  To implement this, notify the buffer
model when a buffer becomes full, and check that the store buffer
is writable, not just for it being full, before trying to append
new packet data.  Shared memory buffers will be assigned to
userspace at most once per fill, be it in the store or in the
hold position.

This removes the restriction that at most one shared memory buffer can
be owned by userspace, reducing the chances that userspace will
need to call select() after acknowledging one buffer in order to
wait for the next buffer when under high load.  This more fully
realizes the goal of zero system calls in order to process a
high-speed packet stream from BPF.

Update bpf.4 to reflect that both buffers may be owned by userspace
at once; caution against assuming this.
2008-04-07 02:51:00 +00:00
Ruslan Ermilov
ea26d58729 Replaced the misleading uses of a historical artefact M_TRYWAIT with M_WAIT.
Removed dead code that assumed that M_TRYWAIT can return NULL; it's not true
since the advent of MBUMA.

Reviewed by:	arch

There are ongoing disputes as to whether we want to switch to directly using
UMA flags M_WAITOK/M_NOWAIT for mbuf(9) allocation.
2008-03-25 09:39:02 +00:00
Robert Watson
fa0c2b3474 Check for a NULL free buffer pointer in BPF before invoking
bpf_canfreebuf() in order to avoid potentially calling a non-inlinable
but trivial function in zero-copy buffer mode for every packet
received when we couldn't free the buffer anyway.

MFC after:	4 months
2008-03-25 07:41:33 +00:00
Christian S.J. Peron
4d621040ff Introduce support for zero-copy BPF buffering, which reduces the
overhead of packet capture by allowing a user process to directly "loan"
buffer memory to the kernel rather than using read(2) to explicitly copy
data from kernel address space.

The user process will issue new BPF ioctls to set the shared memory
buffer mode and provide pointers to buffers and their size. The kernel
then wires and maps the pages into kernel address space using sf_buf(9),
which on supporting architectures will use the direct map region. The
current "buffered" access mode remains the default, and support for
zero-copy buffers must, for the time being, be explicitly enabled using
a sysctl for the kernel to accept requests to use it.

The kernel and user process synchronize use of the buffers with atomic
operations, avoiding the need for system calls under load; the user
process may use select()/poll()/kqueue() to manage blocking while
waiting for network data if the user process is able to consume data
faster than the kernel generates it. Patches to libpcap are available
to allow libpcap applications to transparently take advantage of this
support. Detailed information on the new API may be found in bpf(4),
including specific atomic operations and memory barriers required to
synchronize buffer use safely.

These changes modify the base BPF implementation to (roughly) abstract
the current buffer model, allowing the new shared memory model to be
added, and add new monitoring statistics for netstat to print. The
implementation, with the exception of some monitoring changes that break
the netstat monitoring ABI for BPF, will be MFC'd.

Zerocopy bpf buffers are still considered experimental and are disabled
by default. To experiment with this new facility, adjust the
net.bpf.zerocopy_enable sysctl variable to 1.

Changes to libpcap will be made available as a patch for the time being,
and further refinements to the implementation are expected.

Sponsored by:		Seccuris Inc.
In collaboration with:	rwatson
Tested by:		pwood, gallatin
MFC after:		4 months [1]

[1] Certain portions will probably not be MFCed, specifically things
    that can break the monitoring ABI.
2008-03-24 13:49:17 +00:00
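
A rough userland sketch of the setup flow described above, using the ioctl and structure names documented in bpf(4) for this feature (BIOCSETBUFMODE, BPF_BUFMODE_ZBUF, struct bpf_zbuf, BIOCSETZBUF); buffer sizing and error handling are deliberately simplified and should be checked against the manual page.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <net/bpf.h>
    #include <string.h>

    static int
    setup_zerocopy(int bpf_fd, size_t buflen)
    {
        struct bpf_zbuf zb;
        u_int mode = BPF_BUFMODE_ZBUF;

        if (ioctl(bpf_fd, BIOCSETBUFMODE, &mode) == -1)
            return (-1);
        memset(&zb, 0, sizeof(zb));
        zb.bz_buflen = buflen;
        /* two anonymous buffers the kernel will wire and map via sf_buf(9) */
        zb.bz_bufa = mmap(NULL, buflen, PROT_READ | PROT_WRITE,
            MAP_ANON, -1, 0);
        zb.bz_bufb = mmap(NULL, buflen, PROT_READ | PROT_WRITE,
            MAP_ANON, -1, 0);
        if (zb.bz_bufa == MAP_FAILED || zb.bz_bufb == MAP_FAILED)
            return (-1);
        return (ioctl(bpf_fd, BIOCSETZBUF, &zb));
    }
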
Robert Watson
237fdd787b In keeping with style(9)'s recommendations on macros, use a ';'
after each SYSINIT() macro invocation.  This makes a number of
lightweight C parsers much happier with the FreeBSD kernel
source, including cflow's prcc and lxr.

MFC after:	1 month
Discussed with:	imp, rink
2008-03-16 10:58:09 +00:00
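
For illustration, the style(9)-friendly form this commit converts to, with a trailing semicolon after the macro invocation; the handler name and ordering constants below are placeholders.

    #include <sys/param.h>
    #include <sys/kernel.h>

    static void
    example_init(void *dummy __unused)
    {
        /* one-time initialization */
    }
    SYSINIT(example_init, SI_SUB_PSEUDO, SI_ORDER_ANY, example_init, NULL);
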
Robert Watson
31b32e6dc3 Add comment that bpfread() has multi-threading issues.
Fix minor white space nit.
2008-02-02 20:35:05 +00:00
Robert Watson
c786600793 Use __FBSDID() in the kernel BPF implementation.
MFC after:	3 days
2007-12-25 13:24:02 +00:00
Robert Watson
2a0a392e1c Remove trailing whitespace from lines in BPF.
MFC after:	3 days
2007-12-23 14:10:33 +00:00
Robert Watson
30d239bc4c Merge first in a series of TrustedBSD MAC Framework KPI changes
from Mac OS X Leopard--rationalize naming for entry points to
the following general forms:

  mac_<object>_<method/action>
  mac_<object>_check_<method/action>

The previous naming scheme was inconsistent and mostly
reversed from the new scheme.  Also, make object types more
consistent and remove spaces from object types that contain
multiple parts ("posix_sem" -> "posixsem") to make mechanical
parsing easier.  Introduce a new "netinet" object type for
certain IPv4/IPv6-related methods.  Also simplify, slightly,
some entry point names.

All MAC policy modules will need to be recompiled, and modules
not updated as part of this commit will need to be modified to
conform to the new KPI.

Sponsored by:	SPARTA (original patches against Mac OS X)
Obtained from:	TrustedBSD Project, Apple Computer
2007-10-24 19:04:04 +00:00
Christian S.J. Peron
50ed6e0713 Make sure that we refresh the PID on read(2) and write(2) operations.
This fixes the process portion of the bpf(4) stats if the peer forks
into the background after it's opened the descriptor.  This bug
results in the following behavior for netstat -B:

# netstat -B
  Pid  Netif  Flags      Recv      Drop     Match Sblen Hblen Command
netstat: kern.proc.pid failed: No such process
78023    em0 p--s--   2237404     43119   2237404 13986     0 ??????

MFC after:	1 week
2007-10-12 14:58:34 +00:00
Andrew Thompson
cb44b6dfe8 Check for multicast destination on bpf injected packets and update the M_*CAST
flags, the absence of these flags causes problems in other areas such as
bridging which expect them to be correct.

At the moment only Ethernet DLTs are checked.

Reviewed by:	bms, csjp, sam
Approved by:	re (bmah)
2007-09-10 00:03:06 +00:00
Robert Watson
0bf686c125 Remove the now-unused NET_{LOCK,UNLOCK,ASSERT}_GIANT() macros, which
previously conditionally acquired Giant based on debug.mpsafenet.  As that
has now been removed, they are no longer required.  Removing them
significantly simplifies error-handling in the socket layer, eliminating
quite a bit of unwinding of locking in error cases.

While here clean up the now unneeded opt_net.h, which previously was used
for the NET_WITH_GIANT kernel option.  Clean up some related gotos for
consistency.

Reviewed by:	bz, csjp
Tested by:	kris
Approved by:	re (kensmith)
2007-08-06 14:26:03 +00:00
Robert Watson
c6b2899785 Replace references to NET_CALLOUT_MPSAFE with CALLOUT_MPSAFE, and remove
definition of NET_CALLOUT_MPSAFE, which is no longer required now that
debug.mpsafenet has been removed.

The once over:	bz
Approved by:	re (kensmith)
2007-07-28 07:31:30 +00:00
Christian S.J. Peron
d83e603ac7 Silence some gcc 4 warnings. It is expected that the bpf_movein() routine
will initialize the header length and re-initialize the mbuf pointer
to reference the mbuf that is allocated after moving user supplied packet
data in.
2007-06-17 21:51:43 +00:00
Christian S.J. Peron
5632c9822a - Conditionally pick up Giant around the network interface
ioctl routines if we are running with !mpsafenet
- Change the unconditional Giant acquisition around ifpromisc
  to occur only if we are running with !mpsafenet

With these locking bits in place, we can now remove the Giant
requirement from BPF, so drop the D_NEEDGIANT device flag.
This change removes Giant acquisitions around BPF device
handlers (read, write, ioctl etc).

MFC after:	1 month
Discussed with:	rwatson
2007-06-15 02:53:51 +00:00
Jung-uk Kim
560a54e10c Add three new ioctl(2) commands for bpf(4).
- BIOCGDIRECTION and BIOCSDIRECTION get or set the setting determining
whether incoming, outgoing, or all packets on the interface should be
returned by BPF.  Set to BPF_D_IN to see only incoming packets on the
interface.  Set to BPF_D_INOUT to see packets originating locally and
remotely on the interface.  Set to BPF_D_OUT to see only outgoing
packets on the interface.  This setting is initialized to BPF_D_INOUT
by default.  BIOCGSEESENT and BIOCSSEESENT are obsoleted by these but
kept for backward compatibility.

- BIOCFEEDBACK sets packet feedback mode.  This allows injected packets
to be fed back as input to the interface when output via the interface is
successful.  When BPF_D_INOUT direction is set, injected outgoing packet
is not returned by BPF to avoid duplication.  This flag is initialized to
zero by default.

Note that libpcap has been modified to support BPF_D_OUT direction for
pcap_setdirection(3) and PCAP_D_OUT direction is functional now.

Reviewed by:	rwatson
2007-02-26 22:24:14 +00:00
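
Two minimal userland sketches of the new commands, assuming only the BIOCSDIRECTION/BIOCFEEDBACK ioctls and BPF_D_* constants described above; the helper functions are illustrative.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>

    /* See only packets transmitted by the local host on this descriptor. */
    static int
    see_outgoing_only(int bpf_fd)
    {
        u_int direction = BPF_D_OUT;

        return (ioctl(bpf_fd, BIOCSDIRECTION, &direction));
    }

    /* Ask for successfully injected packets to be fed back as input. */
    static int
    enable_feedback(int bpf_fd)
    {
        u_int on = 1;

        return (ioctl(bpf_fd, BIOCFEEDBACK, &on));
    }
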
Robert Watson
5d1f828354 Remove slightly dubious comment; add descriptive strings for several
sysctls.

MFC after:	3 days
2007-01-28 16:38:44 +00:00
Robert Watson
acd3428b7d Sweep kernel replacing suser(9) calls with priv(9) calls, assigning
specific privilege names to a broad range of privileges.  These may
require some future tweaking.

Sponsored by:           nCircle Network Security, Inc.
Obtained from:          TrustedBSD Project
Discussed on:           arch@
Reviewed (at least in part) by: mlaier, jmg, pjd, bde, ceri,
                        Alex Lyashkov <umka at sevcity dot net>,
                        Skip Ford <skip dot ford at verizon dot net>,
                        Antoine Brodin <antoine dot brodin at laposte dot net>
2006-11-06 13:42:10 +00:00
Robert Watson
aed5570872 Complete break-out of sys/sys/mac.h into sys/security/mac/mac_framework.h
begun with a repo-copy of mac.h to mac_framework.h.  sys/mac.h now
contains the userspace and user<->kernel API and definitions, with all
in-kernel interfaces moved to mac_framework.h, which is now included
across most of the kernel instead.

This change is the first step in a larger cleanup and sweep of MAC
Framework interfaces in the kernel, and will not be MFC'd.

Obtained from:	TrustedBSD Project
Sponsored by:	SPARTA
2006-10-22 11:52:19 +00:00
Robert Watson
a359443290 Since bpf_allocbufs() uses malloc() with M_WAITOK, don't check return
values for NULL or return an error state.  Assert that all three bpf
buffer pointers are NULL before starting.

MFC after:	1 week
2006-08-09 16:30:26 +00:00
Sam Leffler
246b546762 add support for 802.11 packet injection via bpf
Together with:	Andrea Bittau <a.bittau@cs.ucl.ac.uk>
Reviewed by:	arch@
MFC after:	1 month
2006-07-26 03:15:16 +00:00
David Malone
91433904b5 Rather than calling microtime() in catchpacket(), make catchpacket()
take a timeval indicating when the packet was captured. Move
microtime() to the calling functions and grab the timestamp as soon
as we know that we're going to call catchpacket at least once.

This means that we call microtime() once per matched packet, as
opposed to once per matched packet per bpf listener. It also means
that we return the same timestamp to all bpf listeners, rather than
slightly different ones.

It would be more accurate to call microtime() even earlier for all
packets, as you have to grab (1+#listener) locks before you can
determine if the packet will be logged. You could always grab a
timestamp before the locks, but microtime() can be costly, so this
didn't seem like a good idea.

(I guess most ethernet interfaces will have a bpf listener these
days because of dhclient. That means that we could be doing two bpf
locks on most packets going through the interface.)

PR:		71711
2006-07-24 15:42:04 +00:00
Christian S.J. Peron
4b19419ee7 Adjust descriptor locking to tell the kqueue subsystem that our descriptor is
already locked. The reason to do this is to avoid two lock+unlock operations
in a row. We need the lock here to serialize access to bd_pid for stats
collection purposes.

Drop the locks all together on detach, as they will be picked up by
knlist_remove.

This should fix a failed locking assertion when kqueue is being used with bpf
descriptors.

Discussed with:	jmg
2006-07-03 20:02:06 +00:00
Christian S.J. Peron
19ba8395e1 Since we are doing some bpf(4) clean up, change a couple of function prototypes
to be consistent. Also, ANSI'fy function definitions. There is no functional
change here.
2006-06-15 15:39:12 +00:00
Christian S.J. Peron
7eae78a419 If bpf(4) has not been compiled into the kernel, initialize the bpf interface
pointer to a zeroed, statically allocated bpf_if structure. This way the
LIST_EMPTY() macro will always return true. This allows us to remove the
additional unconditional memory reference for each packet in the fast path.

Discussed with:	sam
2006-06-14 02:23:28 +00:00
Christian S.J. Peron
16d878cc99 Fix the following bpf(4) race condition which can result in a panic:
(1) bpf peer attaches to interface netif0
	(2) Packet is received by netif0
	(3) ifp->if_bpf pointer is checked and handed off to bpf
	(4) bpf peer detaches from netif0 resulting in ifp->if_bpf being
	    initialized to NULL.
	(5) ifp->if_bpf is dereferenced by bpf machinery
	(6) Kaboom

This race condition likely explains the various different kernel panics
reported around sending SIGINT to tcpdump or dhclient processes. But really
this race can result in kernel panics anywhere you have frequent bpf attach
and detach operations with high packet per second load.

Summary of changes:

- Remove the bpf interface's "driverp" member
- When we attach bpf interfaces, we now set the ifp->if_bpf member to the
  bpf interface structure. Once this is done, ifp->if_bpf should never be
  NULL. [1]
- Introduce bpf_peers_present function, an inline operation which will do
  a lockless read of the bpf peer list associated with the interface.  It should
  be noted that the bpf code will pickup the bpf_interface lock before adding
  or removing bpf peers. This should serialize the access to the bpf descriptor
  list, removing the race.
- Expose the bpf_if structure in bpf.h so that the bpf_peers_present function
  can use it. This also removes the struct bpf_if; hack that was there.
- Adjust all consumers of the raw if_bpf structure to use bpf_peers_present

Now what happens is:

	(1) Packet is received by netif0
	(2) Check to see if bpf descriptor list is empty
	(3) Pickup the bpf interface lock
	(4) Hand packet off to process

From the attach/detach side:

	(1) Pickup the bpf interface lock
	(2) Add/remove from bpf descriptor list

Now that we are storing the bpf interface structure with the ifnet, there
is no need to walk the bpf interface list to locate the correct bpf interface.
We now simply look up the interface, and initialize the pointer. This has a
nice side effect of changing a bpf interface attach operation from O(N) (where
N is the number of bpf interfaces), to O(1).

[1] From now on, we can no longer check ifp->if_bpf to tell us whether or
    not we have any bpf peers that might be interested in receiving packets.

In collaboration with:	sam@
MFC after:	1 month
2006-06-02 19:59:33 +00:00
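
The driver-side pattern this change enables might look like the following sketch; bpf_peers_present() and if_bpf come from the commit, while the receive routine and names are hypothetical.

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <net/if.h>
    #include <net/if_var.h>
    #include <net/bpf.h>

    static void
    example_rx(struct ifnet *ifp, struct mbuf *m)
    {
        /* cheap lockless check; the tap is skipped when no peers exist */
        if (bpf_peers_present(ifp->if_bpf))
            bpf_mtap(ifp->if_bpf, m);
        /* ... hand the mbuf to the stack ... */
    }
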
Ruslan Ermilov
293c06a186 Fix -Wundef warnings. 2006-05-30 19:24:01 +00:00
Christian S.J. Peron
1fc9e38706 Pickup locks for the BPF interface structure. It's quite possible that
bpf(4) descriptors can be added and removed on this interface while we
are processing stats.

MFC after:	2 weeks
2006-05-07 03:21:43 +00:00
Jung-uk Kim
848c454cc1 Add BPF Just-In-Time compiler support for ng_bpf(4).
The sysctl is changed from net.bpf.jitter.enable to net.bpf_jitter.enable
and this controls both bpf(4) and ng_bpf(4) now.
2005-12-07 21:30:47 +00:00
Jung-uk Kim
ae275efcae Add experimental BPF Just-In-Time compiler for amd64 and i386.
Use the following kernel configuration option to enable:

	options BPF_JITTER

If you want to use bpf_filter() instead (e. g., debugging), do:

	sysctl net.bpf.jitter.enable=0

to turn it off.

Currently BIOCSETWF and bpf_mtap2() are unsupported, and bpf_mtap() is
partially supported because 1) no need, 2) avoid expensive m_copydata(9).

Obtained from:	WinPcap 3.1 (for i386)
2005-12-06 02:58:12 +00:00
Christian S.J. Peron
cb1d4f92ec Protect PID initializations for statistics by the bpf descriptor
locks. Also while we are here, protect the bpf descriptor during
knlist_remove{add} operations.

Discussed with:	rwatson
2005-10-04 15:06:10 +00:00
Andre Oppermann
035ba19027 Undo a tad little optimization to bpf_mtap() introduced in rev. 1.95
which broke the correct handling of the BIOCGSEESENT flag in the bpf
listener.

PR:		kern/56441
Submitted by:	<vys at renet.ru>
MFC after:	3 days
2005-09-14 16:37:05 +00:00
Christian S.J. Peron
b75a24a075 Instead of caching the PID which opened the bpf descriptor, continuously
refresh the PID which has the descriptor open. The PID is refreshed in various
operations like ioctl(2), kevent(2) or poll(2). This produces more accurate
information about current bpf consumers. While we are here remove the bd_pcomm
member of the bpf stats structure because now that we have an accurate PID we
can look up the command name via the kern.proc.pid sysctl variable. This is the trick that
NetBSD decided to use to deal with this issue.

Special care needs to be taken when MFC'ing this change, as we have made a
change to the bpf stats structure. What will end up happening is we will leave
the pcomm structure but just mark it as being unused. This way we keep the ABI
intact.

MFC after:	1 month
Discussed with:	Rui Paulo < rpaulo at NetBSD dot org >
2005-09-05 23:08:04 +00:00
Christian S.J. Peron
93e39f0b93 Introduce two new ioctl(2) commands, BIOCLOCK and BIOCSETWF. These commands
enhance the security of bpf(4) by further relinquishing the privilege of
the bpf(4) consumer (assuming the ioctl commands are being implemented).

Once BIOCLOCK is executed, the device becomes locked which prevents the
execution of ioctl(2) commands which can change the underlying parameters of the
bpf(4) device. An example might be the setting of bpf(4) filter programs or
attaching to different network interfaces.

BIOCSETWF can be used to set write filters for outgoing packets. Currently if
a bpf(4) consumer is compromised, the bpf(4) descriptor can essentially be used
as a raw socket, regardless of consumer's UID. Write filters give users the
ability to constrain which packets can be sent through the bpf(4) descriptor.

These features are currently implemented by a couple programs which came from
OpenBSD, such as the new dhclient and pflogd.

-Modify bpf_setf(9) to accept a "cmd" parameter. This will be used to specify
 whether a read or write filter is to be set.
-Add a bpf(4) filter program as a parameter to bpf_movein(9) as we will run the
 filter program on the mbuf data once we move the packet in from user-space.
-Rather than execute two uiomove operations, (one for the link header and the
 other for the packet data), execute one and manually copy the link header
 into the sockaddr structure via bcopy.
-Restructure bpf_setf to compensate for write filters, as well as read.
-Adjust bpf(4) stats structures to include a bd_locked member.

It should be noted that the FreeBSD and OpenBSD implementations differ a bit in
the sense that we unconditionally enforce the lock, where OpenBSD enforces it
only if the calling credential is not root.

Idea from:	OpenBSD
Reviewed by:	mlaier
2005-08-22 19:35:48 +00:00
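
A minimal userland sketch of the privilege-reduction sequence described above, assuming a write-filter program has already been built; the ioctl names are from the commit, the helper is illustrative.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>

    static int
    lock_descriptor(int bpf_fd, struct bpf_program *wfilter)
    {
        /* constrain what may be transmitted through this descriptor */
        if (ioctl(bpf_fd, BIOCSETWF, wfilter) == -1)
            return (-1);
        /* then lock it so the underlying parameters cannot be changed */
        return (ioctl(bpf_fd, BIOCLOCK));
    }
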
Christian S.J. Peron
4ddfb5312a Add missing braces around bpf_filter which were missed when I
merged the bpfstat code.

Pointed out by:	iedowse
Pointy hat to:	csjp
MFC after:	3 days
2005-08-18 22:30:52 +00:00
Robert Watson
6a113b3de7 Merge the dev_clone and dev_clone_cred event handlers into a single
event handler, dev_clone, which accepts a credential argument.
Implementors of the event can ignore it if they're not interested,
and most do.  This avoids having multiple event handler types and
fall-back/precedence logic in devfs.

This changes the kernel API for /dev cloning, and may affect third
party packages containing cloning kernel modules.

Requested by:	phk
MFC after:	3 days
2005-08-08 19:55:32 +00:00
Christian S.J. Peron
422a63da6e Rather than hold a mutex over calls to SYSCTL_OUT allocate a
temporary buffer then pass the array to user-space once we have
dropped the lock.

While we are here, drop an assertion which could result in a
kernel panic under certain race conditions.

Pointed out by:	rwatson
2005-07-26 17:21:56 +00:00
Christian S.J. Peron
69f7644bc9 Introduce new sysctl variable: net.bpf.stats. This sysctl variable can
be used to pass statistics regarding dropped, matched and received
packet counts from the kernel to user-space. While we are here
introduce a new counter for filtered or matched packets. We currently
keep track of packets received or dropped by the bpf device, but not
how many packets actually matched the bpf filter.

-Introduce net.bpf.stats sysctl OID
-Move sysctl variables after the function prototypes so we can
 reference bpf_stats_sysctl(9) without build errors.
-Introduce bpf descriptor counter which is used mainly for sizing
 of the xbpf_d array.
-Introduce a xbpf_d structure which will act as an external
 representation of the bpf_d structure.
-Add the following members to the bpf_d structure:

	bd_fcount	- Number of packets which matched bpf filter
	bd_pid		- PID which opened the bpf device
	bd_pcomm	- Process name which opened the device.

It should be noted that it's possible that the process which opened
the device could be long gone at the time of stats collection. An
example might be a process that opens the bpf device forks then exits
leaving the child process with the bpf fd.

Reviewed by:	mdodd
2005-07-24 17:21:17 +00:00
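
A rough userland sketch of consuming the new OID, assuming the exported array consists of struct xbpf_d entries with the members listed above; sizing and error handling are simplified.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/sysctl.h>
    #include <net/bpf.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void
    dump_bpf_stats(void)
    {
        struct xbpf_d *xbd;
        size_t i, len;

        /* first call sizes the buffer, second call fills it */
        if (sysctlbyname("net.bpf.stats", NULL, &len, NULL, 0) == -1)
            return;
        if ((xbd = malloc(len)) == NULL)
            return;
        if (sysctlbyname("net.bpf.stats", xbd, &len, NULL, 0) == -1) {
            free(xbd);
            return;
        }
        for (i = 0; i < len / sizeof(*xbd); i++)
            printf("pid %ld: recv %ju drop %ju match %ju\n",
                (long)xbd[i].bd_pid, (uintmax_t)xbd[i].bd_rcount,
                (uintmax_t)xbd[i].bd_dcount, (uintmax_t)xbd[i].bd_fcount);
        free(xbd);
    }
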
Suleiman Souhlal
571dcd15e2 Fix the recent panics/LORs/hangs created by my kqueue commit by:
- Introducing the possibility of using locks different than mutexes
for the knlist locking. In order to do this, we add three arguments to
knlist_init() to specify the functions to use to lock, unlock and
check if the lock is owned. If these arguments are NULL, we assume
mtx_lock, mtx_unlock and mtx_owned, respectively.

- Using the vnode lock for the knlist locking, when doing kqueue operations
on a vnode. This way, we don't have to lock the vnode while holding a
mutex, in filt_vfsread.

Reviewed by:	jmg
Approved by:	re (scottl), scottl (mentor override)
Pointyhat to:	ssouhlal
Will be happy:	everyone
2005-07-01 16:28:32 +00:00
David Malone
01399f34a5 Fix some long standing bugs in writing to the BPF device attached to
a DLT_NULL interface. In particular:

        1) Consistently use type u_int32_t for the header of a
           DLT_NULL device - it continues to represent the address
           family as always.
        2) In the DLT_NULL case get bpf_movein to store the u_int32_t
           in a sockaddr rather than in the mbuf, to be consistent
           with all the DLT types.
        3) Consequently fix a bug in bpf_movein/bpfwrite which
           only permitted packets up to 4 bytes less than the MTU
           to be written.
        4) Fix all DLT_NULL devices to have the code required to
           allow writing to their bpf devices.
        5) Move the code to allow writing to if_lo from if_simloop
           to looutput, because it only applies to DLT_NULL devices
           but was being applied to other devices that use if_simloop
           possibly incorrectly.

PR:		82157
Submitted by:	Matthew Luckie <mjl@luckie.org.nz>
Approved by:	re (scottl)
2005-06-26 18:11:11 +00:00
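
A minimal userland sketch of writing to a DLT_NULL bpf descriptor after this change, assuming the 4-byte address-family header convention described above; the packet handling is illustrative.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Prepend the address-family word expected on DLT_NULL devices. */
    static ssize_t
    write_dlt_null(int bpf_fd, void *pkt, size_t pktlen)
    {
        u_int32_t af = AF_INET;
        struct iovec iov[2];

        iov[0].iov_base = &af;
        iov[0].iov_len = sizeof(af);
        iov[1].iov_base = pkt;
        iov[1].iov_len = pktlen;
        return (writev(bpf_fd, iov, 2));
    }
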
Brooks Davis
fc74a9f93a Stop embedding struct ifnet at the top of driver softcs. Instead the
struct ifnet or the layer 2 common structure it was embedded in have
been replaced with a struct ifnet pointer to be filled by a call to the
new function, if_alloc(). The layer 2 common structure is also allocated
via if_alloc() based on the interface type. It is hung off the new
struct ifnet member, if_l2com.

This change removes the size of these structures from the kernel ABI and
will allow us to better manage them as interfaces come and go.

Other changes of note:
 - Struct arpcom is no longer referenced in normal interface code.
   Instead the Ethernet address is accessed via the IFP2ENADDR() macro.
   To enforce this ac_enaddr has been renamed to _ac_enaddr.
 - The second argument to ether_ifattach is now always the mac address
   from driver private storage rather than sometimes being ac_enaddr.

Reviewed by:	sobomax, sam
2005-06-10 16:49:24 +00:00
Christian S.J. Peron
0eb206049e Change the maximum bpf program instruction limitation from being hard-
coded at 512 (BPF_MAXINSNS) to being tunable. This is useful for users
who wish to use complex or large bpf programs when filtering traffic.
For now we will default it to BPF_MAXINSNS. I have tested bpf programs
with well over 21,000 instructions without any problems.

Discussed with:	phk
2005-06-06 22:19:59 +00:00
Christian S.J. Peron
a3272e3ce3 -introduce net.bpf sysctl instead of the less intuitive debug.*
debug.bpf_bufsize is now net.bpf.bufsize
    debug.bpf_maxbufsize is now net.bpf.maxbufsize

-move function prototypes for bpf_drvinit and bpf_clone up to the
 top of the file with the others
-assert bpfd lock in catchpacket() and bpf_wakeup()

MFC after:	2 weeks
2005-05-04 03:09:28 +00:00
Poul-Henning Kamp
f4f6abcb4e Explicitly hold a reference to the cdev we have just cloned. This
closes the race where the cdev was reclaimed before it ever made it
back to devfs lookup.
2005-03-31 12:19:44 +00:00
Brian Feldman
4549709fb5 You must selwakeup{,pri}() when closing a selectable object or the
td->td_sel will get trashed and crash the system.  Fix BPF's mistake
in this area.

MFC after:	1 day
2005-03-27 23:16:17 +00:00
John-Mark Gurney
7819da7944 fix a bug where bpf would try to wake up before updating the state.  This
was causing kqueue not to see the correct state and not wake up a process
that is waiting...

Submitted by:	nCircle Network Security, Inc.
2005-03-02 21:59:39 +00:00
Gleb Smirnoff
31199c8463 Use NET_CALLOUT_MPSAFE macro. 2005-03-01 12:01:17 +00:00
Robert Watson
a8e93fb7ec In bpf_setf(), protect against races between multiple user threads
attempting to change the BPF filter on a BPF descriptor at the same
time: retrieve the old filter pointer under the same locked region
as setting the new pointer.

MFC after:	3 days
2005-02-28 14:04:09 +00:00
Robert Watson
d1a67300e2 Update a comment describing bpf_iflist to indicate that the BPF interface
structures correspond to specific link layers, so the same network
interface may appear more than once.

MFC after:	3 days
2005-02-28 12:35:52 +00:00
Warner Losh
c398230b64 /* -> /*- for license, minor formatting changes 2005-01-07 01:45:51 +00:00
Pawel Jakub Dawidek
77fc70c1ef Fix mbuf leak.
Submitted by:	Johnny Eriksson <bygg@cafax.se>
MFC after:	5 days
2004-12-27 15:53:44 +00:00
Poul-Henning Kamp
e76eee5562 Include fcntl.h
Check O_NONBLOCK instead of IO_NDELAY
Include uio.h
Don't include vnode.h
Don't include filedesc.h
2004-12-22 17:37:57 +00:00
John-Mark Gurney
86c9a45388 don't try to recurse on the bpf lock; kqueue already locks the bpf lock
now...

Submitted by:	Ed Maste of Sandvine Inc.
MFC after:	1 week
2004-12-17 03:21:46 +00:00
Sam Leffler
3518d22073 Don't require a device to be marked up when issuing BIOCSETIF. 2004-12-08 05:40:02 +00:00
Brian Feldman
93daabdd83 Don't recurse the BPF descriptor lock during the BIOCSDLT operation
(and panic).  To try to finish making BPF safe, at the very least,
the BPF descriptor lock really needs to change into a reader/writer
lock that controls access to "settings," and a mutex that controls
access to the selinfo/knote/callout.  Also, use of callout_drain()
instead of callout_stop() (which is really a much more widespread
issue).
2004-10-06 04:25:37 +00:00
Robert Watson
46448b5a1b Reformulate bpf_detachd() to acquire the BIF_LOCK() as well as
BPFD_LOCK() when removing a descriptor from an interface descriptor
list.  Hold both over the operation, and do a better job at
maintaining the invariant that you can't find partially connected
descriptors on an active interface descriptor list.

This appears to close a race that resulted in the kernel performing
a NULL pointer dereference when BPF sessions are detached during
heavy network activity on SMP systems.

RELENG_5 candidate.
2004-09-09 04:11:12 +00:00
Robert Watson
4a3feeaa86 Reformulate use of linked lists in 'struct bpf_d' and 'struct bpf_if'
to use queue(3) list macros rather than hand-crafted lists.  While
here, move to doubly linked lists to eliminate iterating lists in
order to remove entries.  This change simplifies and clarifies the
list logic in the BPF descriptor code as a first step towards revising
the locking strategy.

RELENG_5 candidate.

Reviewed by:	fenner
2004-09-09 00:19:27 +00:00
Robert Watson
d17d818425 Compare/set pointers using NULL not 0. 2004-09-09 00:11:50 +00:00
John-Mark Gurney
ad3b9257c2 Add locking to the kqueue subsystem. This also makes the kqueue subsystem
a more complete subsystem, and removes the knowledge of how things are
implemented from the drivers.  Include locking around filter ops, so a
module like aio will know when not to be unloaded if there are outstanding
knotes using its filter ops.

Currently, it uses the MTX_DUPOK even though it is not always safe to
acquire duplicate locks.  Witness currently doesn't support the ability
to discover if a dup lock is ok (in some cases).

Reviewed by:	green, rwatson (both earlier versions)
2004-08-15 06:24:42 +00:00
Robert Watson
46691dd8d7 Do a lockless read of the BPF interface structure descriptor list head
before grabbing BPF locks to see if there are any entries in order to
avoid the cost of locking if there aren't any.  Avoids a mutex lock/
unlock for each packet received if there are no BPF listeners.
2004-08-05 02:37:36 +00:00
Robert Watson
572bde2aea Prefer NULL to '0' when checking a pointer value. 2004-07-24 16:58:56 +00:00
Robert Watson
28b8605232 In the BPF and ethernet bridging code, don't allow callouts to execute
without Giant if we're not running with debug.mpsafenet=1.
2004-07-05 16:28:31 +00:00
Poul-Henning Kamp
f3732fd15b Second half of the dev_t cleanup.
The big lines are:
	NODEV -> NULL
	NOUDEV -> NODEV
	udev_t -> dev_t
	udev2dev() -> findcdev()

Various minor adjustments including handling of userland access to kernel
space struct cdev etc.
2004-06-17 17:16:53 +00:00
Poul-Henning Kamp
89c9c53da0 Do the dreaded s/dev_t/struct cdev */
Bump __FreeBSD_version accordingly.
2004-06-16 09:47:26 +00:00
Robert Watson
b8f9429d55 Switch to conditionally acquiring and dropping Giant around calls into
ifp->if_output() based on debug.mpsafenet.  That way once bpfwrite()
can be called without Giant, it will acquire Giant (if desired) before
entering the network stack.
2004-06-11 03:47:21 +00:00
Robert Watson
8240bf1e04 Un-staticize 'dst' sockaddr in the stack of bpfwrite() to prevent
the need to synchronize access to the structure.  I believe this
should fit into the stack under the necessary circumstances, but
if not we can either add synchronization or use a thread-local
malloc for the duration.
2004-06-11 03:45:42 +00:00
Warner Losh
f36cfd49ad Remove advertising clause from University of California Regent's
license, per letter dated July 22, 1999 and email from Peter Wemm,
Alan Cox and Robert Watson.

Approved by: core, peter, alc, rwatson
2004-04-07 20:46:16 +00:00
Robert Watson
f747d2dd90 Grab Giant after MAC processing on outgoing packets being sent via
BPF.  Grab the BPF descriptor lock before entering MAC since the MAC
Framework references BPF descriptor fields, including the BPF
descriptor label.

Submitted by:	sam
2004-02-29 15:32:33 +00:00
Poul-Henning Kamp
dc08ffec87 Device megapatch 4/6:
Introduce d_version field in struct cdevsw, this must always be
initialized to D_VERSION.

Flip sense of D_NOGIANT flag to D_NEEDGIANT, this involves removing
four D_NOGIANT flags and adding 145 D_NEEDGIANT flags.
2004-02-21 21:10:55 +00:00
Poul-Henning Kamp
c9c7976f7f Device megapatch 1/6:
Free approx 86 major numbers with a mostly automatically generated patch.

A number of strategic drivers have been left behind by caution, and a few
because they still (ab)use their major number.
2004-02-21 19:42:58 +00:00
Dag-Erling Smørgrav
9e6108885c Random style fixes and a comment update. No functional changes. 2004-02-16 18:19:15 +00:00
Tim J. Robbins
c1dae2f08f Unbreak build of bpf-free kernels. 2003-12-29 08:23:11 +00:00
Sam Leffler
437ffe1823 o eliminate widespread on-stack mbuf use for bpf by introducing
a new bpf_mtap2 routine that does the right thing for an mbuf
  and a variable-length chunk of data that should be prepended.
o while we're sweeping the drivers, use u_int32_t uniformly when
  prepending the address family (several places were assuming
  sizeof(int) was 4)
o return M_ASSERTVALID to BPF_MTAP* now that all stack-allocated
  mbufs have been eliminated; this may better be moved to the bpf
  routines

Reviewed by:	arch@ and several others
2003-12-28 03:56:00 +00:00
Seigo Tanimura
512824f8f7 - Implement selwakeuppri() which allows raising the priority of a
thread being woken up.  The woken-up thread can run at a priority as
  high as after tsleep().

- Replace selwakeup()s with selwakeuppri()s and pass appropriate
  priorities.

- Add cv_broadcastpri() which raises the priority of the broadcast
  threads.  Used by selwakeuppri() if collision occurs.

Not objected in:	-arch, -current
2003-11-09 09:17:26 +00:00
Brooks Davis
9bf40ede4a Replace the if_name and if_unit members of struct ifnet with new members
if_xname, if_dname, and if_dunit. if_xname is the name of the interface
and if_dname/unit are the driver name and instance.

This change paves the way for interface renaming and enhanced pseudo
device creation and configuration semantics.

Approved By:	re (in principle)
Reviewed By:	njl, imp
Tested On:	i386, amd64, sparc64
Obtained From:	NetBSD (if_xname)
2003-10-31 18:32:15 +00:00
Sam Leffler
5f7a7923ea add a stub for bpfattach2 so bpf is not required with the 802.11
module or related drivers

Spotted by:	Dan Lukes <dan@obluda.cz>
2003-10-04 01:32:28 +00:00
Sam Leffler
e0111e4de5 Reduce window during which a race can occur when detaching
an interface from each descriptor that references it. This
is just a bandaid; the locking here needs to be redone.
2003-09-04 22:27:45 +00:00
Sam Leffler
c06eb4e293 Change instances of callout_init that specify MPSAFE behaviour to
use CALLOUT_MPSAFE instead of "1" for the second parameter.  This
does not change the behaviour; it just makes the intent more clear.
2003-08-19 17:51:11 +00:00
John-Mark Gurney
95aab9cc49 add support for using kqueue to watch bpf sockets.
Submitted by:	Brian Buchanan of nCircle, Inc.
Tested on:	i386 and sparc64
2003-08-05 07:12:49 +00:00
Matthew N. Dodd
d79bf33783 Assignment could be NULL, check. 2003-03-21 15:13:29 +00:00
Poul-Henning Kamp
7ac40f5f59 Gigacommit to improve device-driver source compatibility between
branches:

Initialize struct cdevsw using C99 sparse initialization and remove
all initializations to default values.

This patch is automatically generated and has been tested by compiling
LINT with all the fields in struct cdevsw in reverse order on alpha,
sparc64 and i386.

Approved by:    re(scottl)
2003-03-03 12:15:54 +00:00
Matthew N. Dodd
797f247b51 sizeof(struct llc) -> LLC_SNAPFRAMELEN
sizeof(struct ether_header) -> ETHER_HDR_LEN
 sizeof(struct fddi_header) -> FDDI_HDR_LEN
2003-03-03 05:04:57 +00:00
Dag-Erling Smørgrav
521f364b80 More low-hanging fruit: kill caddr_t in calls to wakeup(9) / [mt]sleep(9). 2003-03-02 16:54:40 +00:00
Dag-Erling Smørgrav
8994a245e0 Clean up whitespace, s/register //, refrain from strong urge to ANSIfy. 2003-03-02 15:56:49 +00:00
Dag-Erling Smørgrav
c952458814 uiomove-related caddr_t -> void * (just the low-hanging fruit) 2003-03-02 15:50:23 +00:00
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Alfred Perlstein
44956c9863 Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
2003-01-21 08:56:16 +00:00
Sam Leffler
8eab61f3de o add BIOCGDLTLIST and BIOCSDLT ioctls to get the data link type list
and set the link type for use by libpcap and tcpdump
o move mtx unlock in bpfdetach up; it doesn't need to be held so long
o change printf in bpf_detach to distinguish it from the same one in bpfsetdlt

Note there are locking issues here related to ioctl processing; they
have not been addressed here.

Submitted by:	Guy Harris <guy@alum.mit.edu>
Obtained from:	NetBSD (w/ locking modifications)
2003-01-20 19:08:46 +00:00
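
A rough userland sketch of how libpcap-style code might use the two new ioctls, assuming struct bpf_dltlist as documented in bpf(4) and that a NULL bfl_list returns just the count; allocation and error handling are simplified.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>
    #include <stdlib.h>
    #include <string.h>

    /* Switch the descriptor to the requested DLT if the interface offers it. */
    static int
    select_dlt(int bpf_fd, u_int dlt)
    {
        struct bpf_dltlist dl;
        u_int i;
        int found = 0;

        memset(&dl, 0, sizeof(dl));
        if (ioctl(bpf_fd, BIOCGDLTLIST, &dl) == -1)     /* count only */
            return (-1);
        if ((dl.bfl_list = calloc(dl.bfl_len, sizeof(u_int))) == NULL)
            return (-1);
        if (ioctl(bpf_fd, BIOCGDLTLIST, &dl) == -1) {   /* fetch the DLTs */
            free(dl.bfl_list);
            return (-1);
        }
        for (i = 0; i < dl.bfl_len; i++)
            if (dl.bfl_list[i] == dlt)
                found = 1;
        free(dl.bfl_list);
        return (found ? ioctl(bpf_fd, BIOCSDLT, &dlt) : -1);
    }
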
Poul-Henning Kamp
c5ec6754d5 Remove cdevsw_add() calls; they are deprecated. 2002-12-28 21:40:20 +00:00
Sam Leffler
e5562bee60 correct function declarations of stubs used for building w/o device bpf 2002-11-19 02:50:46 +00:00
Sam Leffler
24a229f466 o add support for multiple link types per interface (e.g. 802.11 and Ethernet)
o introduce BPF_TAP and BPF_MTAP macros to hide implementation details and
  ease code portability
o use m_getcl where appropriate

Reviewed by:	many
Approved by:	re
Obtained from:	NetBSD (multiple link type support)
2002-11-14 23:24:13 +00:00
Brooks Davis
29e1b85f97 Use if_printf(ifp, "blah") instead of
printf("%s%d: blah", ifp->if_name, ifp->if_xname).
2002-10-21 02:51:56 +00:00
Don Lewis
91e97a8266 In an SMP environment post-Giant it is no longer safe to blindly
dereference the struct sigio pointer without any locking.  Change
fgetown() to take a reference to the pointer instead of a copy of the
pointer and call SIGIO_LOCK() before copying the pointer and
dereferencing it.

Reviewed by:	rwatson
2002-10-03 02:13:00 +00:00
Poul-Henning Kamp
37c841831f Be consistent about "static" functions: if the function is marked
static in its prototype, mark it static at the definition too.

Inspired by:    FlexeLint warning #512
2002-09-28 17:15:38 +00:00
Poul-Henning Kamp
7b83124255 Don't return(foo(bla)) when foo returns void. 2002-09-28 14:03:27 +00:00
Robert Watson
0c7fb5347c Insert a missing call to MAC protection check for delivering an
mbuf to a bpf device.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
Submitted by:	phk
2002-09-21 00:59:56 +00:00
Poul-Henning Kamp
f0e2422b1b Use m_length() instead of home-rolled.
In bpf_mtap(), if the entire packet is in one mbuf, call bpf_tap()
instead since it is a tad faster.

Sponsored by:	http://www.babeltech.dk/
2002-09-18 19:48:59 +00:00
Robert Watson
ec272d8708 Introduce support for Mandatory Access Control and extensible
kernel access control.

Invoke a MAC framework entry point to authorize reception of an
incoming mbuf by the BPF descriptor, permitting MAC policies to
limit the visibility of packets delivered to particular BPF
descriptors.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, NAI Labs
2002-07-31 16:11:32 +00:00
Robert Watson
82f4445d4c Introduce support for Mandatory Access Control and extensible
kernel access control.

Instrument BPF so that MAC labels are properly maintained on BPF
descriptors.  MAC framework entry points are invoked at BPF
instantiation and allocation, permitting the MAC framework to
derive the BPF descriptor label from the credential authorizing
the device open.  Also enter the MAC framework to label mbufs
created using the BPF device.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, NAI Labs
2002-07-31 16:09:38 +00:00
Alfred Perlstein
e649887b1e Make funsetown() take a 'struct sigio **' so that the locking can
be done internally.

Ensure that no one can fsetown() to a dying process/pgrp.  We need
to check the process for P_WEXIT to see if it's exiting.  Process
groups are already safe because there is no such thing as a pgrp
zombie, therefore the proctree lock completely protects the pgrp
from having sigio structures associated with it after it runs
funsetownlst.

Add sigio lock to witness list under proctree and allproc, but over
proc and pgrp.

Seigo Tanimura helped with this.
2002-05-06 19:31:28 +00:00
Alfred Perlstein
f132072368 Redo the sigio locking.
Turn the sigio sx into a mutex.

Sigio lock is really only needed to protect interrupts from dereferencing
the sigio pointer in an object when the sigio itself is being destroyed.

In order to do this in the most unintrusive manner change pgsigio's
sigio * argument into a **, that way we can lock internally to the
function.
2002-05-01 20:44:46 +00:00
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
Luigi Rizzo
d722be5487 Replace (deprecated ?) FREE() macro with direct calls to free() 2002-04-04 06:03:17 +00:00
Alfred Perlstein
929ddbbb89 Remove __P. 2002-03-19 21:54:18 +00:00
Alfred Perlstein
d16160657d Missed this file for select SMP fixes associated with rev 1.93 of
kern/sys_generic.c
2002-03-14 04:47:08 +00:00
John Polstra
81bda851db Make bpf's read timeout feature work more correctly with
select/poll, and therefore with pthreads.  I doubt there is any way
to make this 100% semantically identical to the way it behaves in
unthreaded programs with blocking reads, but the solution here
should do the right thing for all reasonable usage patterns.

The basic idea is to schedule a callout for the read timeout when a
select/poll is done.  When the callout fires, it ends the select if
it is still in progress, or marks the state as "timed out" if the
select has already ended for some other reason.  Additional logic in
bpfread then does the right thing in the case where the timeout has
fired.

Note, I co-opted the bd_state member of the bpf_d structure.  It has
been present in the structure since the initial import of 4.4-lite,
but as far as I can tell it has never been used.

PR:		kern/22063 and bin/31649
MFC after:	3 days
2001-12-14 22:17:54 +00:00
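
A minimal userland sketch of the usage pattern this commit fixes: a read timeout set with BIOCSRTIMEOUT combined with select() on the descriptor.  Error handling is elided and the names are illustrative.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/ioctl.h>
    #include <sys/select.h>
    #include <net/bpf.h>
    #include <unistd.h>

    static ssize_t
    timed_read(int bpf_fd, void *buf, size_t buflen)
    {
        struct timeval tv = { 1, 0 };   /* 1-second read timeout */
        fd_set rfds;

        ioctl(bpf_fd, BIOCSRTIMEOUT, &tv);
        FD_ZERO(&rfds);
        FD_SET(bpf_fd, &rfds);
        /* with the fix, the select ends when the read timeout fires,
           even if no packets have been buffered yet */
        if (select(bpf_fd + 1, &rfds, NULL, NULL, NULL) == -1)
            return (-1);
        return (read(bpf_fd, buf, buflen));
    }
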
Andrew R. Reiter
0f6db47fb3 - M_ZERO already sets bif_dlist to zero; there is no need to
do it again.
2001-11-18 03:41:20 +00:00
Ruslan Ermilov
4f252c4dd5 Record the fact that revision 1.39 corresponded to CSRG revision 8.4,
and first hunk of revision 1.76 corresponded to CSRG revision 8.3.
2001-10-17 10:18:42 +00:00
John Baldwin
6a40eccec3 Malloc'd mutexes pre-filled with random garbage (including 0xdeadcode) may
trigger the check to make sure we don't initialize a mutex twice.
2001-10-10 20:43:50 +00:00
John Baldwin
ed01445d8f Use the passed in thread to selrecord() instead of curthread. 2001-09-21 22:46:54 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doosie!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Dima Dorfman
98ec4706ee Correct the comment about bpfattach() to match reality.
PR:		29967
Submitted by:	Joseph Mallett <jmallett@xMach.org>
2001-08-23 22:38:08 +00:00
Garance A Drosehn
0832fc6494 Fix bpf devices so select() recognizes that they are always writable.
PR:		9355
Submitted by:	Bruce Evans <bde@zeta.org.au>
Reviewed by:	Garrett Rooney <rooneg@electricjellyfish.net>  (see pr :-)
2001-04-04 23:27:35 +00:00
Poul-Henning Kamp
f83880518b Send the remains (such as I have located) of "block major numbers" to
the bit-bucket.
2001-03-26 12:41:29 +00:00
Robert Watson
5be30b375e o Remove unnecessary jail() check in bpfopen() -- we limit device access
in jail using /dev namespace limits and mknod() limits, not by explicit
  checks in the device open code.
2001-02-21 05:34:34 +00:00
Jonathan Lemon
e7bb21b3df Add mutexes to the entire bpf subsystem to make it MPSAFE.
Previously reviewed by: jhb, bde
2001-02-16 17:10:28 +00:00
Peter Wemm
5bb5f2c942 Supply a stub bpf_validate() (always returning false - the script is not
valid) if BPF is missing.
The netgraph_bpf node forced bpf to be present, reflect that in the
options.
Stop doing a 'count bpf' - we provide stubs.
Since a handful of drivers still refer to "bpf.h", provide a more accurate
indication that the API is present always. (eg: netinet6)
2001-01-29 13:26:14 +00:00
Bosko Milekic
e3b4e866a5 Small fix for bpf compat:
Make malloc() use M_NOWAIT instead of M_DONTWAIT and in the
bpf_compat case, define M_NOWAIT to be M_DONTWAIT.
2000-12-27 22:20:13 +00:00
Bosko Milekic
2a0c503e7a * Rename M_WAIT mbuf subsystem flag to M_TRYWAIT.
This is because calls with M_WAIT (now M_TRYWAIT) may not wait
  forever when nothing is available for allocation, and may end up
  returning NULL. Hopefully we now communicate more of the right thing
  to developers and make it very clear that it's necessary to check whether
  calls with M_(TRY)WAIT also resulted in a failed allocation.
  M_TRYWAIT basically means "try harder, block if necessary, but don't
  necessarily wait forever." The time spent blocking is tunable with
  the kern.ipc.mbuf_wait sysctl.
  M_WAIT is now deprecated but still defined for the next little while.

* Fix a typo in a comment in mbuf.h

* Fix some code that was actually passing the mbuf subsystem's M_WAIT to
  malloc(). Made it pass M_WAITOK instead. If we were ever to redefine the
  value of the M_WAIT flag, this could have became a big problem.
2000-12-21 21:44:31 +00:00
John Polstra
fba3cfdef2 Fix bug: a read() on a bpf device which was in non-blocking mode
and had no data available returned 0.  Now it returns -1 with errno
set to EWOULDBLOCK (== EAGAIN) as it should.  This fix makes the bpf
device usable in threaded programs.

Reviewed by:	bde
2000-12-17 20:50:22 +00:00
David Malone
7cc0979fd6 Convert more malloc+bzero to malloc+M_ZERO.
Submitted by:	josh@zipperup.org
Submitted by:	Robert Drehmel <robd@gmx.net>
2000-12-08 21:51:06 +00:00
Poul-Henning Kamp
959b7375ed Staticize some malloc M_ instances. 2000-12-08 20:09:00 +00:00
John Baldwin
d1d74c2886 Fix an order of operations buglet. ! has higher precedence than &. This
should fix the warnings about bpf not calling make_dev().
2000-11-03 00:51:41 +00:00
Poul-Henning Kamp
6ab7b8286d Don't make_dev() in bpfopen() unless we need to. 2000-10-09 14:19:09 +00:00
Poul-Henning Kamp
b0d17ba69e Rename lminor() to dev2unit(). This function gives a linear unit number
which hides the 'hole' in the minor bits.

Introduce unit2minor() to do the reverse operation.

Fix some some make_dev() calls which didn't use UID_* or GID_* macros.

Kill the v_hashchain alias macro, it hides the real relationship.

Introduce experimental SI_CHEAPCLONE flag set it on cloned bpfs.
2000-09-19 10:28:44 +00:00