.\" Copyright (c) 2007 Seccuris Inc.
.\" All rights reserved.
.\"
.\" This sofware was developed by Robert N. M. Watson under contract to
.\" Seccuris Inc.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" Copyright (c) 1990 The Regents of the University of California.
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that: (1) source code distributions
.\" retain the above copyright notice and this paragraph in its entirety, (2)
.\" distributions including binary code include the above copyright notice and
.\" this paragraph in its entirety in the documentation or other materials
.\" provided with the distribution, and (3) all advertising materials mentioning
.\" features or use of this software display the following acknowledgement:
.\" ``This product includes software developed by the University of California,
.\" Lawrence Berkeley Laboratory and its contributors.'' Neither the name of
.\" the University nor the names of its contributors may be used to endorse
.\" or promote products derived from this software without specific prior
.\" written permission.
.\" THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
.\" WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
.\" MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
.\"
.\" This document is derived in part from the enet man page (enet.4)
.\" distributed with 4.3BSD Unix.
.\"
.\" $FreeBSD$
.\"
.Dd February 26, 2007
.Dt BPF 4
.Os
.Sh NAME
.Nm bpf
.Nd Berkeley Packet Filter
.Sh SYNOPSIS
.Cd device bpf
.Sh DESCRIPTION
The Berkeley Packet Filter
provides a raw interface to data link layers in a protocol
independent fashion.
All packets on the network, even those destined for other hosts,
are accessible through this mechanism.
.Pp
The packet filter appears as a character special device,
.Pa /dev/bpf0 ,
.Pa /dev/bpf1 ,
etc.
After opening the device, the file descriptor must be bound to a
specific network interface with the
.Dv BIOCSETIF
ioctl.
A given interface can be shared by multiple listeners, and the filter
underlying each descriptor will see an identical packet stream.
.Pp
A separate device file is required for each minor device.
If a file is in use, the open will fail and
.Va errno
will be set to
.Er EBUSY .
.Pp
Associated with each open instance of a
.Nm
file is a user-settable packet filter.
Whenever a packet is received by an interface,
all file descriptors listening on that interface apply their filter.
Each descriptor that accepts the packet receives its own copy.
.Pp
The packet filter will support any link level protocol that has fixed length
headers.
Currently, only Ethernet,
.Tn SLIP ,
and
.Tn PPP
drivers have been modified to interact with
.Nm .
.Pp
Since packet data is in network byte order, applications should use the
.Xr byteorder 3
macros to extract multi-byte values.
.Pp
A packet can be sent out on the network by writing to a
.Nm
file descriptor.
The writes are unbuffered, meaning only one packet can be processed per write.
Currently, only writes to Ethernets and
.Tn SLIP
links are supported.
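.Pp
The following fragment is an illustrative sketch of packet injection; it
assumes a descriptor that has already been bound to an interface with
.Dv BIOCSETIF
and a complete link-layer frame assembled by the caller:
.Bd -literal
#include <err.h>
#include <unistd.h>

/*
 * Inject one complete link-layer frame on the interface bound to "fd".
 * Writes are unbuffered, so each call sends exactly one packet.
 */
static void
inject_frame(int fd, const void *frame, size_t frame_len)
{

	if (write(fd, frame, frame_len) != (ssize_t)frame_len)
		err(1, "write");
}
.Ed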
.Sh BUFFER MODES
.Nm
devices deliver packet data to the application via memory buffers provided by
the application.
The buffer mode is set using the
.Dv BIOCSETBUFMODE
ioctl, and read using the
.Dv BIOCGETBUFMODE
ioctl.
.Ss Buffered read mode
By default,
.Nm
devices operate in the
.Dv BPF_BUFMODE_BUFFER
mode, in which packet data is copied explicitly from kernel to user memory
using the
.Xr read 2
system call.
The user process will declare a fixed buffer size that will be used both for
sizing internal buffers and for all
.Xr read 2
operations on the file.
This size is queried using the
.Dv BIOCGBLEN
ioctl, and is set using the
.Dv BIOCSBLEN
ioctl.
Note that an individual packet larger than the buffer size is necessarily
truncated.
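.Pp
The following sketch illustrates this mode; the device node, interface name
argument, and requested buffer size are placeholders, and error handling is
abbreviated:
.Bd -literal
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/bpf.h>
#include <err.h>
#include <fcntl.h>
#include <string.h>

/*
 * Open a bpf device, size its buffer, and bind it to an interface.
 * Returns the descriptor; *blenp receives the negotiated buffer length,
 * which later read(2) calls must use exactly.
 */
static int
open_buffered(const char *ifname, u_int *blenp)
{
	struct ifreq ifr;
	int fd;

	/* Placeholder node; real programs try successive units on EBUSY. */
	if ((fd = open("/dev/bpf0", O_RDWR)) == -1)
		err(1, "open");
	*blenp = 65536;				/* illustrative request */
	if (ioctl(fd, BIOCSBLEN, blenp) == -1)	/* must precede BIOCSETIF */
		err(1, "BIOCSBLEN");
	memset(&ifr, 0, sizeof(ifr));
	strlcpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));
	if (ioctl(fd, BIOCSETIF, &ifr) == -1)
		err(1, "BIOCSETIF");
	return (fd);
}
.Ed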
.Ss Zero-copy buffer mode
.Nm
devices may also operate in the
.Dv BPF_BUFMODE_ZBUF
mode, in which packet data is written directly into two user memory buffers
by the kernel, avoiding both system call and copying overhead.
Buffers are of fixed (and equal) size, page-aligned, and an even multiple of
the page size.
The maximum zero-copy buffer size is returned by the
.Dv BIOCGETZMAX
ioctl.
Note that an individual packet larger than the buffer size is necessarily
truncated.
.Pp
The user process registers two memory buffers using the
.Dv BIOCSETZBUF
ioctl, which accepts a
.Vt struct bpf_zbuf
pointer as an argument:
.Bd -literal
struct bpf_zbuf {
void *bz_bufa;
void *bz_bufb;
size_t bz_buflen;
};
.Ed
.Pp
.Vt bz_bufa
is a pointer to the userspace address of the first buffer that will be
filled, and
.Vt bz_bufb
is a pointer to the second buffer.
.Nm
will then cycle between the two buffers as they fill and are acknowledged.
.Pp
Each buffer begins with a fixed-length header to hold synchronization and
data length information for the buffer:
.Bd -literal
struct bpf_zbuf_header {
volatile u_int bzh_kernel_gen; /* Kernel generation number. */
volatile u_int bzh_kernel_len; /* Length of data in the buffer. */
volatile u_int bzh_user_gen; /* User generation number. */
/* ...padding for future use... */
};
.Ed
.Pp
The header structure of each buffer, including all padding, should be zeroed
before it is configured using
.Dv BIOCSETZBUF .
Remaining space in the buffer will be used by the kernel to store packet
data, laid out in the same format as with buffered read mode.
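.Pp
The following sketch shows one way to configure zero-copy buffering; the
buffer size and the use of
.Xr mmap 2
to obtain page-aligned, zero-filled memory are illustrative, and error
handling is abbreviated:
.Bd -literal
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <net/bpf.h>
#include <err.h>

/*
 * Switch an open, as yet unattached bpf descriptor to zero-copy mode and
 * register two page-aligned buffers.  This must be done before the
 * descriptor is attached to an interface with BIOCSETIF.
 */
static void
setup_zbuf(int fd, struct bpf_zbuf *zb)
{
	u_int mode = BPF_BUFMODE_ZBUF;
	size_t zmax;

	if (ioctl(fd, BIOCSETBUFMODE, &mode) == -1)
		err(1, "BIOCSETBUFMODE");
	if (ioctl(fd, BIOCGETZMAX, &zmax) == -1)
		err(1, "BIOCGETZMAX");
	zb->bz_buflen = 32768;	/* illustrative; a multiple of the page size */
	if (zb->bz_buflen > zmax)
		errx(1, "requested buffer size exceeds BIOCGETZMAX limit");
	/*
	 * mmap(2) returns page-aligned, zero-filled memory, so the buffer
	 * headers start out owned by the kernel.
	 */
	zb->bz_bufa = mmap(NULL, zb->bz_buflen, PROT_READ | PROT_WRITE,
	    MAP_ANON, -1, 0);
	zb->bz_bufb = mmap(NULL, zb->bz_buflen, PROT_READ | PROT_WRITE,
	    MAP_ANON, -1, 0);
	if (zb->bz_bufa == MAP_FAILED || zb->bz_bufb == MAP_FAILED)
		err(1, "mmap");
	if (ioctl(fd, BIOCSETZBUF, zb) == -1)
		err(1, "BIOCSETZBUF");
}
.Ed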
.Pp
The kernel and the user process follow a simple acknowledgement protocol via
the buffer header to synchronize access to the buffer: when the header
generation numbers,
.Vt bzh_kernel_gen
and
.Vt bzh_user_gen ,
hold the same value, the kernel owns the buffer, and when they differ,
userspace owns the buffer.
.Pp
While the kernel owns the buffer, the contents are unstable and may change
asynchronously; while the user process owns the buffer, its contents are
stable and will not be changed until the buffer has been acknowledged.
.Pp
Initializing the buffer headers to all 0's before registering the buffer has
the effect of assigning initial ownership of both buffers to the kernel.
The kernel signals that a buffer has been assigned to userspace by modifying
.Vt bzh_kernel_gen ,
and userspace acknowledges the buffer and returns it to the kernel by setting
the value of
.Vt bzh_user_gen
to the value of
.Vt bzh_kernel_gen .
.Pp
In order to avoid caching and memory re-ordering effects, the user process
must use atomic operations and memory barriers when checking for and
acknowledging buffers:
.Bd -literal
#include <machine/atomic.h>
/*
* Return ownership of a buffer to the kernel for reuse.
*/
static void
buffer_acknowledge(struct bpf_zbuf_header *bzh)
{
atomic_store_rel_int(&bzh->bzh_user_gen, bzh->bzh_kernel_gen);
}
/*
* Check whether a buffer has been assigned to userspace by the kernel.
* Return true if userspace owns the buffer, and false otherwise.
*/
static int
buffer_check(struct bpf_zbuf_header *bzh)
{
return (bzh->bzh_user_gen !=
atomic_load_acq_int(&bzh->bzh_kernel_gen));
}
.Ed
.Pp
The user process may force the assignment of the next buffer, if any data
is pending, to userspace using the
.Dv BIOCROTZBUF
ioctl.
This allows the user process to retrieve data in a partially filled buffer
before the buffer is full, such as following a timeout; the process must
recheck for buffer ownership using the header generation numbers, as the
buffer will not be assigned to userspace if no data was present.
.Pp
As in the buffered read mode,
.Xr kqueue 2 ,
.Xr poll 2 ,
and
.Xr select 2
may be used to sleep awaiting the availability of a completed buffer.
They will return a readable file descriptor when ownership of the next buffer
is assigned to user space.
.Pp
In the current implementation, the kernel may assign zero, one, or both
buffers to the user process; however, an earlier implementation maintained
the invariant that at most one buffer could be assigned to the user process
at a time.
In order to ensure both progress and high performance, user processes should
acknowledge a completely processed buffer as quickly as possible, returning
it for reuse, and not block waiting on a second buffer while holding another
buffer.
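.Pp
The following loop sketches one way to consume completed buffers using the
.Fn buffer_check
and
.Fn buffer_acknowledge
routines shown above; the
.Xr select 2
timeout and the
.Dv BIOCROTZBUF
argument form are illustrative assumptions:
.Bd -literal
#include <sys/types.h>
#include <sys/time.h>
#include <sys/ioctl.h>
#include <sys/select.h>
#include <net/bpf.h>
#include <err.h>

/*
 * Wait for completed buffers, process them, and return each one to the
 * kernel promptly.  bufs[] holds pointers to the two registered buffers,
 * each of which begins with a struct bpf_zbuf_header.
 */
static void
capture_loop(int fd, struct bpf_zbuf_header *bufs[2])
{
	struct bpf_zbuf zb;
	struct timeval tv;
	fd_set rfds;
	int i;

	for (;;) {
		FD_ZERO(&rfds);
		FD_SET(fd, &rfds);
		tv.tv_sec = 1;			/* illustrative timeout */
		tv.tv_usec = 0;
		if (select(fd + 1, &rfds, NULL, NULL, &tv) == -1)
			err(1, "select");
		if (!FD_ISSET(fd, &rfds)) {
			/* Timed out: push out any partially filled buffer. */
			if (ioctl(fd, BIOCROTZBUF, &zb) == -1)
				err(1, "BIOCROTZBUF");
		}
		for (i = 0; i < 2; i++) {
			if (!buffer_check(bufs[i]))
				continue;
			/*
			 * Packet records start just past the header and
			 * span bzh_kernel_len bytes.
			 */
			buffer_acknowledge(bufs[i]);
		}
	}
}
.Ed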
.Sh IOCTLS
The
.Xr ioctl 2
command codes below are defined in
.In net/bpf.h .
All commands require
these includes:
.Bd -literal
#include <sys/types.h>
#include <sys/time.h>
#include <sys/ioctl.h>
#include <net/bpf.h>
.Ed
.Pp
Additionally,
.Dv BIOCGETIF
and
.Dv BIOCSETIF
require
.In sys/socket.h
and
.In net/if.h .
.Pp
In addition to
.Dv FIONREAD
and
.Dv SIOCGIFADDR ,
the following commands may be applied to any open
.Nm
1995-01-25 08:46:06 +00:00
file.
The (third) argument to
.Xr ioctl 2
should be a pointer to the type indicated.
.Bl -tag -width BIOCGETBUFMODE
.It Dv BIOCGBLEN
.Pq Li u_int
Returns the required buffer length for reads on
.Nm
files.
.It Dv BIOCSBLEN
.Pq Li u_int
Sets the buffer length for reads on
.Nm
files.
The buffer must be set before the file is attached to an interface
with
.Dv BIOCSETIF .
If the requested buffer size cannot be accommodated, the closest
allowable size will be set and returned in the argument.
A read call will result in
.Er EIO
if it is passed a buffer that is not this size.
.It Dv BIOCGDLT
.Pq Li u_int
Returns the type of the data link layer underlying the attached interface.
.Er EINVAL
is returned if no interface has been specified.
The device types, prefixed with
.Dq Li DLT_ ,
are defined in
.In net/bpf.h .
.It Dv BIOCPROMISC
Forces the interface into promiscuous mode.
All packets, not just those destined for the local host, are processed.
Since more than one file can be listening on a given interface,
a listener that opened its interface non-promiscuously may receive
packets promiscuously.
This problem can be remedied with an appropriate filter.
.It Dv BIOCFLUSH
Flushes the buffer of incoming packets,
and resets the statistics that are returned by BIOCGSTATS.
.It Dv BIOCGETIF
.Pq Li "struct ifreq"
Returns the name of the hardware interface that the file is listening on.
The name is returned in the ifr_name field of
the
.Li ifreq
structure.
All other fields are undefined.
.It Dv BIOCSETIF
.Pq Li "struct ifreq"
Sets the hardware interface associated with the file.
This
command must be performed before any packets can be read.
The device is indicated by name using the
.Li ifr_name
field of the
.Li ifreq
structure.
Additionally, performs the actions of
.Dv BIOCFLUSH .
.It Dv BIOCSRTIMEOUT
.It Dv BIOCGRTIMEOUT
.Pq Li "struct timeval"
Set or get the read timeout parameter.
The argument
specifies the length of time to wait before timing
out on a read request.
This parameter is initialized to zero by
.Xr open 2 ,
indicating no timeout.
.It Dv BIOCGSTATS
.Pq Li "struct bpf_stat"
Returns the following structure of packet statistics:
.Bd -literal
struct bpf_stat {
u_int bs_recv; /* number of packets received */
u_int bs_drop; /* number of packets dropped */
};
.Ed
.Pp
The fields are:
.Bl -hang -offset indent
.It Li bs_recv
the number of packets received by the descriptor since opened or reset
(including any buffered since the last read call);
and
.It Li bs_drop
the number of packets which were accepted by the filter but dropped by the
kernel because of buffer overflows
(i.e., the application's reads are not keeping up with the packet traffic).
.El
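.Pp
For example, the counters may be retrieved with a fragment such as the
following sketch, which additionally requires
.In stdio.h :
.Bd -literal
	struct bpf_stat bs;

	if (ioctl(fd, BIOCGSTATS, &bs) == 0)
		printf("received %u, dropped %u\en",
		    bs.bs_recv, bs.bs_drop);
.Ed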
.It Dv BIOCIMMEDIATE
.Pq Li u_int
Enable or disable
.Dq immediate mode ,
based on the truth value of the argument.
When immediate mode is enabled, reads return immediately upon packet
reception.
Otherwise, a read will block until either the kernel buffer
becomes full or a timeout occurs.
This is useful for programs like
.Xr rarpd 8
which must respond to messages in real time.
The default for a new file is off.
.It Dv BIOCSETF
.It Dv BIOCSETFNR
.Pq Li "struct bpf_program"
Sets the read filter program used by the kernel to discard uninteresting
packets.
An array of instructions and its length is passed in using
the following structure:
.Bd -literal
struct bpf_program {
int bf_len;
struct bpf_insn *bf_insns;
};
.Ed
.Pp
The filter program is pointed to by the
.Li bf_insns
field while its length in units of
.Sq Li struct bpf_insn
is given by the
.Li bf_len
field.
See section
.Sx "FILTER MACHINE"
for an explanation of the filter language.
The only difference between
.Dv BIOCSETF
and
.Dv BIOCSETFNR
is
.Dv BIOCSETF
performs the actions of
.Dv BIOCFLUSH
while
.Dv BIOCSETFNR
does not.
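.Pp
For example, a filter array such as those shown in the
.Sx EXAMPLES
section may be installed with the following sketch, where
.Va insns
and
.Va fd
are supplied by the caller:
.Bd -literal
	struct bpf_program prog;

	prog.bf_len = sizeof(insns) / sizeof(insns[0]);
	prog.bf_insns = insns;
	if (ioctl(fd, BIOCSETF, &prog) == -1)
		err(1, "BIOCSETF");	/* requires <err.h> */
.Ed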
.It Dv BIOCSETWF
.Pq Li "struct bpf_program"
Sets the write filter program used by the kernel to control what type of
packets can be written to the interface.
See the
.Dv BIOCSETF
command for more
information on the
.Nm
filter program.
.It Dv BIOCVERSION
.Pq Li "struct bpf_version"
Returns the major and minor version numbers of the filter language currently
recognized by the kernel.
Before installing a filter, applications must check
that the current version is compatible with the running kernel.
Version numbers are compatible if the major numbers match and the application minor
is less than or equal to the kernel minor.
The kernel version number is returned in the following structure:
.Bd -literal
struct bpf_version {
u_short bv_major;
u_short bv_minor;
};
.Ed
.Pp
The current version numbers are given by
.Dv BPF_MAJOR_VERSION
and
.Dv BPF_MINOR_VERSION
from
.In net/bpf.h .
An incompatible filter
may result in undefined behavior (most likely, an error returned by
.Fn ioctl
or haphazard packet matching).
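.Pp
The recommended check may be performed with a sketch such as the following,
which additionally requires
.In err.h :
.Bd -literal
	struct bpf_version bv;

	if (ioctl(fd, BIOCVERSION, &bv) == -1)
		err(1, "BIOCVERSION");
	if (bv.bv_major != BPF_MAJOR_VERSION ||
	    bv.bv_minor < BPF_MINOR_VERSION)
		errx(1, "kernel bpf filter out of date");
.Ed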
.It Dv BIOCSHDRCMPLT
.It Dv BIOCGHDRCMPLT
.Pq Li u_int
Set or get the status of the
.Dq header complete
flag.
Set to zero if the link level source address should be filled in automatically
by the interface output routine.
Set to one if the link level source
address will be written, as provided, to the wire.
This flag is initialized to zero by default.
.It Dv BIOCSSEESENT
.It Dv BIOCGSEESENT
.Pq Li u_int
These commands are obsolete but left for compatibility.
Use
.Dv BIOCSDIRECTION
and
.Dv BIOCGDIRECTION
instead.
Set or get the flag determining whether locally generated packets on the
interface should be returned by BPF.
Set to zero to see only incoming packets on the interface.
Set to one to see packets originating locally and remotely on the interface.
This flag is initialized to one by default.
.It Dv BIOCSDIRECTION
.It Dv BIOCGDIRECTION
.Pq Li u_int
Set or get the setting determining whether incoming, outgoing, or all packets
on the interface should be returned by BPF.
Set to
.Dv BPF_D_IN
to see only incoming packets on the interface.
Set to
.Dv BPF_D_INOUT
to see packets originating locally and remotely on the interface.
Set to
.Dv BPF_D_OUT
to see only outgoing packets on the interface.
This setting is initialized to
.Dv BPF_D_INOUT
by default.
.It Dv BIOCFEEDBACK
.Pq Li u_int
Set packet feedback mode.
This allows injected packets to be fed back as input to the interface when
output via the interface is successful.
When
.Dv BPF_D_INOUT
direction is set, injected outgoing packets are not returned by BPF to avoid
duplication.
This flag is initialized to zero by default.
.It Dv BIOCLOCK
Set the locked flag on the
.Nm
descriptor.
This prevents the execution of
ioctl commands which could change the underlying operating parameters of
the device.
.It Dv BIOCGETBUFMODE
.It Dv BIOCSETBUFMODE
.Pq Li u_int
Get or set the current
.Nm
buffering mode; possible values are
.Dv BPF_BUFMODE_BUFFER ,
buffered read mode, and
.Dv BPF_BUFMODE_ZBUF ,
zero-copy buffer mode.
.It Dv BIOCSETZBUF
.Pq Li struct bpf_zbuf
Set the current zero-copy buffer locations; buffer locations may be
set only once zero-copy buffer mode has been selected, and prior to attaching
to an interface.
Buffers must be of identical size, page-aligned, and an integer multiple of
pages in size.
The three fields
.Vt bz_bufa ,
.Vt bz_bufb ,
and
.Vt bz_buflen
must be filled out.
If buffers have already been set for this device, the ioctl will fail.
.It Dv BIOCGETZMAX
.Pq Li size_t
Get the largest individual zero-copy buffer size allowed.
As two buffers are used in zero-copy buffer mode, the limit (in practice) is
twice the returned size.
As zero-copy buffers consume kernel address space, conservative selection of
buffer size is suggested, especially when there are multiple
.Nm
descriptors in use on 32-bit systems.
.It Dv BIOCROTZBUF
Force ownership of the next buffer to be assigned to userspace, if any data
is present in the buffer.
If no data is present, the buffer will remain owned by the kernel.
This allows consumers of zero-copy buffering to implement timeouts and
retrieve partially filled buffers.
In order to handle the case where no data is present in the buffer and
therefore ownership is not assigned, the user process must check
.Vt bzh_kernel_gen
against
.Vt bzh_user_gen .
.El
.Sh BPF HEADER
The following structure is prepended to each packet returned by
.Xr read 2
or via a zero-copy buffer:
.Bd -literal
struct bpf_hdr {
struct timeval bh_tstamp; /* time stamp */
u_long bh_caplen; /* length of captured portion */
u_long bh_datalen; /* original length of packet */
u_short bh_hdrlen; /* length of bpf header (this struct
                      plus alignment padding) */
};
.Ed
.Pp
The fields, whose values are stored in host order, are:
.Pp
.Bl -tag -compact -width bh_datalen
.It Li bh_tstamp
The time at which the packet was processed by the packet filter.
.It Li bh_caplen
The length of the captured portion of the packet.
This is the minimum of
the truncation amount specified by the filter and the length of the packet.
.It Li bh_datalen
The length of the packet off the wire.
This value is independent of the truncation amount specified by the filter.
.It Li bh_hdrlen
The length of the
.Nm
header, which may not be equal to
.\" XXX - not really a function call
.Fn sizeof "struct bpf_hdr" .
.El
.Pp
The
.Li bh_hdrlen
field exists to account for
padding between the header and the link level protocol.
The purpose here is to guarantee proper alignment of the packet
data structures, which is required on alignment sensitive
architectures and improves performance on many other architectures.
The packet filter ensures that the
.Li bpf_hdr
and the network layer
header will be word aligned.
Suitable precautions
must be taken when accessing the link layer protocol fields on alignment
restricted machines.
(This is not a problem on an Ethernet, since
the type field is a short falling on an even offset,
and the addresses are probably accessed in a bytewise fashion).
.Pp
Additionally, individual packets are padded so that each starts
on a word boundary.
This requires that an application
has some knowledge of how to get from packet to packet.
The macro
.Dv BPF_WORDALIGN
is defined in
.In net/bpf.h
to facilitate
this process.
It rounds up its argument to the nearest word aligned value (where a word is
.Dv BPF_ALIGNMENT
bytes wide).
.Pp
For example, if
.Sq Li p
points to the start of a packet, this expression
will advance it to the next packet:
.Dl p = (char *)p + BPF_WORDALIGN(p->bh_hdrlen + p->bh_caplen)
.Pp
For the alignment mechanisms to work properly, the
buffer passed to
.Xr read 2
must itself be word aligned.
The
.Xr malloc 3
function
will always return an aligned buffer.
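.Pp
The following fragment sketches how the packets returned by a single
.Xr read 2
may be walked;
.Fn handle_frame
is a hypothetical, caller-supplied routine:
.Bd -literal
#include <sys/types.h>
#include <net/bpf.h>

/* Hypothetical caller-supplied routine for each captured frame. */
extern void handle_frame(const char *frame, u_long caplen);

/*
 * Walk each packet record in a word-aligned buffer filled by read(2);
 * "cc" is the byte count returned by the read.
 */
static void
walk_packets(char *buf, ssize_t cc)
{
	struct bpf_hdr *bh;
	char *p = buf;

	while (p < buf + cc) {
		bh = (struct bpf_hdr *)p;
		/* The link-layer frame begins bh_hdrlen bytes into the record. */
		handle_frame(p + bh->bh_hdrlen, bh->bh_caplen);
		p += BPF_WORDALIGN(bh->bh_hdrlen + bh->bh_caplen);
	}
}
.Ed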
.Sh FILTER MACHINE
A filter program is an array of instructions, with all branches forwardly
directed, terminated by a
.Em return
instruction.
Each instruction performs some action on the pseudo-machine state,
which consists of an accumulator, index register, scratch memory store,
and implicit program counter.
.Pp
The following structure defines the instruction format:
.Bd -literal
struct bpf_insn {
u_short code;
u_char jt;
u_char jf;
u_long k;
};
.Ed
.Pp
The
.Li k
field is used in different ways by different instructions,
and the
.Li jt
and
.Li jf
fields are used as offsets
by the branch instructions.
The opcodes are encoded in a semi-hierarchical fashion.
There are eight classes of instructions:
.Dv BPF_LD ,
.Dv BPF_LDX ,
.Dv BPF_ST ,
.Dv BPF_STX ,
.Dv BPF_ALU ,
.Dv BPF_JMP ,
.Dv BPF_RET ,
and
.Dv BPF_MISC .
Various other mode and
operator bits are or'd into the class to give the actual instructions.
The classes and modes are defined in
.In net/bpf.h .
.Pp
Below are the semantics for each defined
.Nm
instruction.
We use the convention that A is the accumulator, X is the index register,
P[] packet data, and M[] scratch memory store.
P[i:n] gives the data at byte offset
.Dq i
in the packet,
interpreted as a word (n=4),
unsigned halfword (n=2), or unsigned byte (n=1).
M[i] gives the i'th word in the scratch memory store, which is only
addressed in word units.
The memory store is indexed from 0 to
.Dv BPF_MEMWORDS
- 1.
.Li k ,
.Li jt ,
and
.Li jf
are the corresponding fields in the
instruction definition.
.Dq len
refers to the length of the packet.
.Pp
.Bl -tag -width BPF_STXx
.It Dv BPF_LD
These instructions copy a value into the accumulator.
The type of the source operand is specified by an
.Dq addressing mode
and can be a constant
.Pq Dv BPF_IMM ,
packet data at a fixed offset
.Pq Dv BPF_ABS ,
packet data at a variable offset
.Pq Dv BPF_IND ,
the packet length
.Pq Dv BPF_LEN ,
or a word in the scratch memory store
.Pq Dv BPF_MEM .
For
.Dv BPF_IND
and
.Dv BPF_ABS ,
the data size must be specified as a word
.Pq Dv BPF_W ,
halfword
.Pq Dv BPF_H ,
or byte
.Pq Dv BPF_B .
The semantics of all the recognized
.Dv BPF_LD
instructions follow.
.Pp
.Bd -literal
BPF_LD+BPF_W+BPF_ABS A <- P[k:4]
BPF_LD+BPF_H+BPF_ABS A <- P[k:2]
BPF_LD+BPF_B+BPF_ABS A <- P[k:1]
BPF_LD+BPF_W+BPF_IND A <- P[X+k:4]
BPF_LD+BPF_H+BPF_IND A <- P[X+k:2]
BPF_LD+BPF_B+BPF_IND A <- P[X+k:1]
BPF_LD+BPF_W+BPF_LEN A <- len
BPF_LD+BPF_IMM A <- k
BPF_LD+BPF_MEM A <- M[k]
.Ed
.It Dv BPF_LDX
These instructions load a value into the index register.
Note that
the addressing modes are more restrictive than those of the accumulator loads,
but they include
.Dv BPF_MSH ,
a hack for efficiently loading the IP header length.
.Pp
.Bd -literal
BPF_LDX+BPF_W+BPF_IMM X <- k
BPF_LDX+BPF_W+BPF_MEM X <- M[k]
BPF_LDX+BPF_W+BPF_LEN X <- len
BPF_LDX+BPF_B+BPF_MSH X <- 4*(P[k:1]&0xf)
.Ed
.It Dv BPF_ST
This instruction stores the accumulator into the scratch memory.
We do not need an addressing mode since there is only one possibility
for the destination.
.Pp
.Bd -literal
BPF_ST M[k] <- A
.Ed
.It Dv BPF_STX
This instruction stores the index register in the scratch memory store.
.Pp
.Bd -literal
BPF_STX M[k] <- X
.Ed
.It Dv BPF_ALU
The alu instructions perform operations between the accumulator and
index register or constant, and store the result back in the accumulator.
For binary operations, a source mode is required
.Dv ( BPF_K
or
.Dv BPF_X ) .
.Pp
.Bd -literal
BPF_ALU+BPF_ADD+BPF_K A <- A + k
BPF_ALU+BPF_SUB+BPF_K A <- A - k
BPF_ALU+BPF_MUL+BPF_K A <- A * k
BPF_ALU+BPF_DIV+BPF_K A <- A / k
BPF_ALU+BPF_AND+BPF_K A <- A & k
BPF_ALU+BPF_OR+BPF_K A <- A | k
BPF_ALU+BPF_LSH+BPF_K A <- A << k
BPF_ALU+BPF_RSH+BPF_K A <- A >> k
BPF_ALU+BPF_ADD+BPF_X A <- A + X
BPF_ALU+BPF_SUB+BPF_X A <- A - X
BPF_ALU+BPF_MUL+BPF_X A <- A * X
BPF_ALU+BPF_DIV+BPF_X A <- A / X
BPF_ALU+BPF_AND+BPF_X A <- A & X
BPF_ALU+BPF_OR+BPF_X A <- A | X
BPF_ALU+BPF_LSH+BPF_X A <- A << X
BPF_ALU+BPF_RSH+BPF_X A <- A >> X
BPF_ALU+BPF_NEG A <- -A
.Ed
.It Dv BPF_JMP
The jump instructions alter flow of control.
Conditional jumps
compare the accumulator against a constant
.Pq Dv BPF_K
or the index register
.Pq Dv BPF_X .
If the result is true (or non-zero),
the true branch is taken, otherwise the false branch is taken.
Jump offsets are encoded in 8 bits so the longest jump is 256 instructions.
However, the jump always
.Pq Dv BPF_JA
opcode uses the 32 bit
.Li k
field as the offset, allowing arbitrarily distant destinations.
All conditionals use unsigned comparison conventions.
.Pp
.Bd -literal
BPF_JMP+BPF_JA pc += k
BPF_JMP+BPF_JGT+BPF_K pc += (A > k) ? jt : jf
BPF_JMP+BPF_JGE+BPF_K pc += (A >= k) ? jt : jf
BPF_JMP+BPF_JEQ+BPF_K pc += (A == k) ? jt : jf
BPF_JMP+BPF_JSET+BPF_K pc += (A & k) ? jt : jf
BPF_JMP+BPF_JGT+BPF_X pc += (A > X) ? jt : jf
BPF_JMP+BPF_JGE+BPF_X pc += (A >= X) ? jt : jf
BPF_JMP+BPF_JEQ+BPF_X pc += (A == X) ? jt : jf
BPF_JMP+BPF_JSET+BPF_X pc += (A & X) ? jt : jf
.Ed
.It Dv BPF_RET
The return instructions terminate the filter program and specify the amount
of packet to accept (i.e., they return the truncation amount).
A return value of zero indicates that the packet should be ignored.
The return value is either a constant
.Pq Dv BPF_K
or the accumulator
.Pq Dv BPF_A .
.Pp
.Bd -literal
BPF_RET+BPF_A accept A bytes
BPF_RET+BPF_K accept k bytes
.Ed
.It Dv BPF_MISC
The miscellaneous category was created for anything that does not
fit into the above classes, and for any new instructions that might need to
be added.
Currently, these are the register transfer instructions
that copy the index register to the accumulator or vice versa.
.Pp
.Bd -literal
BPF_MISC+BPF_TAX X <- A
BPF_MISC+BPF_TXA A <- X
.Ed
.El
.Pp
The
.Nm
interface provides the following macros to facilitate
array initializers:
.Fn BPF_STMT opcode operand
and
.Fn BPF_JUMP opcode operand true_offset false_offset .
.Sh FILES
.Bl -tag -compact -width /dev/bpfXXX
.It Pa /dev/bpf Ns Sy n
the packet filter device
.El
.Sh EXAMPLES
The following filter is taken from the Reverse ARP Daemon.
It accepts only Reverse ARP requests.
.Bd -literal
struct bpf_insn insns[] = {
BPF_STMT(BPF_LD+BPF_H+BPF_ABS, 12),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, ETHERTYPE_REVARP, 0, 3),
BPF_STMT(BPF_LD+BPF_H+BPF_ABS, 20),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, REVARP_REQUEST, 0, 1),
BPF_STMT(BPF_RET+BPF_K, sizeof(struct ether_arp) +
sizeof(struct ether_header)),
BPF_STMT(BPF_RET+BPF_K, 0),
};
.Ed
.Pp
This filter accepts only IP packets between host 128.3.112.15 and
128.3.112.35.
.Bd -literal
struct bpf_insn insns[] = {
BPF_STMT(BPF_LD+BPF_H+BPF_ABS, 12),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, ETHERTYPE_IP, 0, 8),
BPF_STMT(BPF_LD+BPF_W+BPF_ABS, 26),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, 0x8003700f, 0, 2),
BPF_STMT(BPF_LD+BPF_W+BPF_ABS, 30),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, 0x80037023, 3, 4),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, 0x80037023, 0, 3),
BPF_STMT(BPF_LD+BPF_W+BPF_ABS, 30),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, 0x8003700f, 0, 1),
BPF_STMT(BPF_RET+BPF_K, (u_int)-1),
BPF_STMT(BPF_RET+BPF_K, 0),
};
.Ed
.Pp
Finally, this filter returns only TCP finger packets.
We must parse the IP header to reach the TCP header.
The
.Dv BPF_JSET
instruction
checks that the IP fragment offset is 0 so we are sure
that we have a TCP header.
.Bd -literal
struct bpf_insn insns[] = {
BPF_STMT(BPF_LD+BPF_H+BPF_ABS, 12),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, ETHERTYPE_IP, 0, 10),
BPF_STMT(BPF_LD+BPF_B+BPF_ABS, 23),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, IPPROTO_TCP, 0, 8),
BPF_STMT(BPF_LD+BPF_H+BPF_ABS, 20),
BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, 0x1fff, 6, 0),
BPF_STMT(BPF_LDX+BPF_B+BPF_MSH, 14),
BPF_STMT(BPF_LD+BPF_H+BPF_IND, 14),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, 79, 2, 0),
BPF_STMT(BPF_LD+BPF_H+BPF_IND, 16),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, 79, 0, 1),
BPF_STMT(BPF_RET+BPF_K, (u_int)-1),
BPF_STMT(BPF_RET+BPF_K, 0),
};
.Ed
.Sh SEE ALSO
.Xr tcpdump 1 ,
.Xr ioctl 2 ,
.Xr kqueue 2 ,
.Xr poll 2 ,
.Xr select 2 ,
.Xr byteorder 3 ,
.Xr ng_bpf 4 ,
.Xr bpf 9
.Rs
.%A McCanne, S.
.%A Jacobson, V.
.%T "An efficient, extensible, and portable network monitor"
.Re
.Sh HISTORY
The Enet packet filter was created in 1980 by Mike Accetta and
Rick Rashid at Carnegie-Mellon University.
Jeffrey Mogul, at
Stanford, ported the code to
.Bx
and continued its development from
1983 on.
Since then, it has evolved into the Ultrix Packet Filter at
.Tn DEC ,
a
.Tn STREAMS
.Tn NIT
module under
.Tn SunOS 4.1 ,
and
.Tn BPF .
.Sh AUTHORS
.An -nosplit
.An Steven McCanne ,
of Lawrence Berkeley Laboratory, implemented BPF in
Summer 1990.
Much of the design is due to
.An Van Jacobson .
.Pp
Support for zero-copy buffers was added by
.An Robert N. M. Watson
under contract to Seccuris Inc.
.Sh BUGS
The read buffer must be of a fixed size (returned by the
.Dv BIOCGBLEN
ioctl).
.Pp
A file that does not request promiscuous mode may receive promiscuously
received packets as a side effect of another file requesting this
mode on the same hardware interface.
This could be fixed in the kernel with additional processing overhead.
However, we favor the model where
all files must assume that the interface is promiscuous, and if
so desired, must utilize a filter to reject foreign packets.
.Pp
Data link protocols with variable length headers are not currently supported.
.Pp
The
.Dv SEESENT ,
.Dv DIRECTION ,
and
.Dv FEEDBACK
settings have been observed to work incorrectly on some interface
types, including those with hardware loopback rather than software loopback,
and point-to-point interfaces.
They appear to function correctly on a
broad range of Ethernet-style interfaces.