/*-
 * Copyright (c) 1982, 1986, 1989, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * From: @(#)if.h	8.1 (Berkeley) 6/10/93
 * $FreeBSD$
 */

#ifndef	_NET_IF_VAR_H_
#define	_NET_IF_VAR_H_

/*
 * Structures defining a network interface, providing a packet
 * transport mechanism (ala level 0 of the PUP protocols).
 *
 * Each interface accepts output datagrams of a specified maximum
 * length, and provides higher level routines with input datagrams
 * received from its medium.
 *
 * Output occurs when the routine if_output is called, with four parameters:
 *	(*ifp->if_output)(ifp, m, dst, ro)
 * Here m is the mbuf chain to be sent, dst is the destination address,
 * and ro is an optional cached route.
 * The output routine encapsulates the supplied datagram if necessary,
 * and then transmits it on its medium.
 *
 * On input, each interface unwraps the data received by it, and either
 * places it on the input queue of an internetwork datagram routine
 * and posts the associated software interrupt, or passes the datagram to a raw
 * packet input routine.
 *
 * Routines exist for locating interfaces by their addresses
 * or for locating an interface on a certain network, as well as more general
 * routing and gateway routines that maintain information used to locate
 * interfaces.  These routines live in the files if.c and route.c.
 */
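
/*
 * Example (illustrative only, not part of this header): a protocol layer
 * with a prepared mbuf chain "m", destination "dst", and optional cached
 * route "ro" hands a datagram to the interface as follows:
 *
 *	error = (*ifp->if_output)(ifp, m, dst, ro);
 */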

#ifdef __STDC__
/*
 * Forward structure declarations for function prototypes [sic].
 */
struct	mbuf;
struct	thread;
struct	rtentry;
struct	rt_addrinfo;
struct	socket;
struct	ether_header;
struct	carp_if;
struct	carp_softc;
struct	ifvlantrunk;
struct	route;
struct	vnet;
#endif

#include <sys/queue.h>		/* get TAILQ macros */

#ifdef _KERNEL
#include <sys/mbuf.h>
#include <sys/eventhandler.h>
#include <sys/buf_ring.h>
#include <net/vnet.h>
#endif /* _KERNEL */
#include <sys/lock.h>		/* XXX */
#include <sys/mutex.h>		/* XXX */
#include <sys/rwlock.h>		/* XXX */
#include <sys/sx.h>		/* XXX */
#include <sys/event.h>		/* XXX */
#include <sys/_task.h>

#define	IF_DUNIT_NONE	-1

#include <altq/if_altq.h>

TAILQ_HEAD(ifnethead, ifnet);	/* we use TAILQs so that the order of */
TAILQ_HEAD(ifaddrhead, ifaddr);	/* instantiation is preserved in the list */
TAILQ_HEAD(ifmultihead, ifmultiaddr);
TAILQ_HEAD(ifgrouphead, ifg_group);

#ifdef _KERNEL
VNET_DECLARE(struct pfil_head, link_pfil_hook);	/* packet filter hooks */
#define	V_link_pfil_hook	VNET(link_pfil_hook)
#endif /* _KERNEL */

/*
 * Structure defining a queue for a network interface.
 */
struct	ifqueue {
	struct	mbuf *ifq_head;
	struct	mbuf *ifq_tail;
	int	ifq_len;
	int	ifq_maxlen;
	int	ifq_drops;
	struct	mtx ifq_mtx;
};
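
/*
 * Example (hypothetical driver fragment): a private ifqueue needs its mutex
 * initialized and a length limit set before the IF_* macros below may be
 * used on it; "sc" and the lock name are illustrative.
 *
 *	mtx_init(&sc->sc_rxq.ifq_mtx, "foo_rxq", NULL, MTX_DEF);
 *	sc->sc_rxq.ifq_maxlen = ifqmaxlen;
 */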

/*
 * Structure defining a network interface.
 *
 * (Would like to call this struct ``if'', but C isn't PL/1.)
 */

struct ifnet {
	void	*if_softc;		/* pointer to driver state */
	void	*if_l2com;		/* pointer to protocol bits */
	struct	vnet *if_vnet;		/* pointer to network stack instance */
	TAILQ_ENTRY(ifnet) if_link;	/* all struct ifnets are chained */
	char	if_xname[IFNAMSIZ];	/* external name (name + unit) */
	const char *if_dname;		/* driver name */
	int	if_dunit;		/* unit or IF_DUNIT_NONE */
	u_int	if_refcount;		/* reference count */
	struct	ifaddrhead if_addrhead;	/* linked list of addresses per if */
	/*
	 * if_addrhead is the list of all addresses associated with
	 * an interface.
	 * Some code in the kernel assumes that the first element
	 * of the list has type AF_LINK, and contains sockaddr_dl
	 * addresses which store the link-level address and the name
	 * of the interface.
	 * However, access to the AF_LINK address through this
	 * field is deprecated.  Use if_addr or ifaddr_byindex() instead.
	 */
	int	if_pcount;		/* number of promiscuous listeners */
	struct	carp_if *if_carp;	/* carp interface structure */
	struct	bpf_if *if_bpf;		/* packet filter structure */
	u_short	if_index;		/* numeric abbreviation for this if  */
	short	if_index_reserved;	/* spare space to grow if_index */
	struct	ifvlantrunk *if_vlantrunk; /* pointer to 802.1q data */
	int	if_flags;		/* up/down, broadcast, etc. */
	int	if_capabilities;	/* interface features & capabilities */
	int	if_capenable;		/* enabled features & capabilities */
	void	*if_linkmib;		/* link-type-specific MIB data */
	size_t	if_linkmiblen;		/* length of above data */
	struct	if_data if_data;
	struct	ifmultihead if_multiaddrs; /* multicast addresses configured */
	int	if_amcount;		/* number of all-multicast requests */
/* procedure handles */
	int	(*if_output)		/* output routine (enqueue) */
		(struct ifnet *, struct mbuf *, const struct sockaddr *,
		     struct route *);
	void	(*if_input)		/* input routine (from h/w driver) */
		(struct ifnet *, struct mbuf *);
	void	(*if_start)		/* initiate output routine */
		(struct ifnet *);
	int	(*if_ioctl)		/* ioctl routine */
		(struct ifnet *, u_long, caddr_t);
	void	(*if_init)		/* Init routine */
		(void *);
	int	(*if_resolvemulti)	/* validate/resolve multicast */
		(struct ifnet *, struct sockaddr **, struct sockaddr *);
	void	(*if_qflush)		/* flush any queues */
		(struct ifnet *);
	int	(*if_transmit)		/* initiate output routine */
		(struct ifnet *, struct mbuf *);
	void	(*if_reassign)		/* reassign to vnet routine */
		(struct ifnet *, struct vnet *, char *);
	struct	vnet *if_home_vnet;	/* where this ifnet originates from */
	struct	ifaddr *if_addr;	/* pointer to link-level address */
	void	*if_llsoftc;		/* link layer softc */
	int	if_drv_flags;		/* driver-managed status flags */
	struct	ifaltq if_snd;		/* output queue (includes altq) */
	const u_int8_t *if_broadcastaddr; /* linklevel broadcast bytestring */

	void	*if_bridge;		/* bridge glue */

	struct	label *if_label;	/* interface MAC label */

	/* these are only used by IPv6 */
	void	*if_unused[2];
	void	*if_afdata[AF_MAX];
	int	if_afdata_initialized;
	struct	task if_linktask;	/* task for link change events */
	struct	rwlock_padalign if_afdata_lock;
	struct	rwlock_padalign if_addr_lock;	/* lock to protect address lists */
	LIST_ENTRY(ifnet) if_clones;	/* interfaces of a cloner */
	TAILQ_HEAD(, ifg_list) if_groups; /* linked list of groups per if */
					/* protected by if_addr_lock */
	void	*if_pf_kif;
	void	*if_lagg;		/* lagg glue */
	char	*if_description;	/* interface description */
	u_int	if_fib;			/* interface FIB */
	u_char	if_alloctype;		/* if_type at time of allocation */

	/*
	 * Spare fields are added so that we can modify sensitive data
	 * structures without changing the kernel binary interface, and must
	 * be used with care where binary compatibility is required.
	 */
	char	if_cspare[3];
	int	if_ispare[4];
	void	*if_pspare[8];		/* 1 netmap, 7 TDB */
};
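
/*
 * Example (sketch of a typical Ethernet driver attach path; the "foo_*"
 * names are hypothetical): the ifnet is allocated with if_alloc(), the
 * procedure handles above are filled in, and the interface is attached.
 *
 *	struct ifnet *ifp;
 *
 *	ifp = if_alloc(IFT_ETHER);
 *	ifp->if_softc = sc;
 *	if_initname(ifp, "foo", sc->foo_unit);
 *	ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
 *	ifp->if_ioctl = foo_ioctl;
 *	ifp->if_init = foo_init;
 *	ifp->if_transmit = foo_transmit;
 *	ifp->if_qflush = foo_qflush;
 *	ether_ifattach(ifp, sc->foo_enaddr);
 */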

typedef void if_init_f_t(void *);

/*
 * XXX These aliases are terribly dangerous because they could apply
 * to anything.
 */
#define	if_mtu		if_data.ifi_mtu
#define	if_type		if_data.ifi_type
#define	if_physical	if_data.ifi_physical
#define	if_addrlen	if_data.ifi_addrlen
#define	if_hdrlen	if_data.ifi_hdrlen
#define	if_metric	if_data.ifi_metric
#define	if_link_state	if_data.ifi_link_state
#define	if_baudrate	if_data.ifi_baudrate
#define	if_baudrate_pf	if_data.ifi_baudrate_pf
#define	if_hwassist	if_data.ifi_hwassist
#define	if_ipackets	if_data.ifi_ipackets
#define	if_ierrors	if_data.ifi_ierrors
#define	if_opackets	if_data.ifi_opackets
#define	if_oerrors	if_data.ifi_oerrors
#define	if_collisions	if_data.ifi_collisions
#define	if_ibytes	if_data.ifi_ibytes
#define	if_obytes	if_data.ifi_obytes
#define	if_imcasts	if_data.ifi_imcasts
#define	if_omcasts	if_data.ifi_omcasts
#define	if_iqdrops	if_data.ifi_iqdrops
#define	if_noproto	if_data.ifi_noproto
#define	if_lastchange	if_data.ifi_lastchange

/* for compatibility with other BSDs */
#define	if_addrlist	if_addrhead
#define	if_list		if_link
#define	if_name(ifp)	((ifp)->if_xname)

/*
 * Locks for address lists on the network interface.
 */
#define	IF_ADDR_LOCK_INIT(if)	rw_init(&(if)->if_addr_lock, "if_addr_lock")
#define	IF_ADDR_LOCK_DESTROY(if)	rw_destroy(&(if)->if_addr_lock)
#define	IF_ADDR_WLOCK(if)	rw_wlock(&(if)->if_addr_lock)
#define	IF_ADDR_WUNLOCK(if)	rw_wunlock(&(if)->if_addr_lock)
#define	IF_ADDR_RLOCK(if)	rw_rlock(&(if)->if_addr_lock)
#define	IF_ADDR_RUNLOCK(if)	rw_runlock(&(if)->if_addr_lock)
#define	IF_ADDR_LOCK_ASSERT(if)	rw_assert(&(if)->if_addr_lock, RA_LOCKED)
#define	IF_ADDR_WLOCK_ASSERT(if) rw_assert(&(if)->if_addr_lock, RA_WLOCKED)
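
/*
 * Example (sketch): walking if_addrhead under the read lock; the loop body
 * must not sleep or modify the list while the lock is held.
 *
 *	struct ifaddr *ifa;
 *
 *	IF_ADDR_RLOCK(ifp);
 *	TAILQ_FOREACH(ifa, &ifp->if_addrhead, ifa_link) {
 *		if (ifa->ifa_addr->sa_family == AF_INET)
 *			break;
 *	}
 *	IF_ADDR_RUNLOCK(ifp);
 */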

/*
 * Function variations on locking macros intended to be used by loadable
 * kernel modules in order to divorce them from the internals of address list
 * locking.
 */
void	if_addr_rlock(struct ifnet *ifp);	/* if_addrhead */
void	if_addr_runlock(struct ifnet *ifp);	/* if_addrhead */
void	if_maddr_rlock(struct ifnet *ifp);	/* if_multiaddrs */
void	if_maddr_runlock(struct ifnet *ifp);	/* if_multiaddrs */

/*
 * Output queues (ifp->if_snd) are queues of messages stored on ifqueue
 * structures (defined above).  Entries are added to and deleted from these
 * structures by these macros.
 */
#define	IF_LOCK(ifq)		mtx_lock(&(ifq)->ifq_mtx)
#define	IF_UNLOCK(ifq)		mtx_unlock(&(ifq)->ifq_mtx)
#define	IF_LOCK_ASSERT(ifq)	mtx_assert(&(ifq)->ifq_mtx, MA_OWNED)
#define	_IF_QFULL(ifq)		((ifq)->ifq_len >= (ifq)->ifq_maxlen)
#define	_IF_DROP(ifq)		((ifq)->ifq_drops++)
#define	_IF_QLEN(ifq)		((ifq)->ifq_len)

#define	_IF_ENQUEUE(ifq, m) do {				\
	(m)->m_nextpkt = NULL;					\
	if ((ifq)->ifq_tail == NULL)				\
		(ifq)->ifq_head = m;				\
	else							\
		(ifq)->ifq_tail->m_nextpkt = m;			\
	(ifq)->ifq_tail = m;					\
	(ifq)->ifq_len++;					\
} while (0)

#define	IF_ENQUEUE(ifq, m) do {					\
	IF_LOCK(ifq);						\
	_IF_ENQUEUE(ifq, m);					\
	IF_UNLOCK(ifq);						\
} while (0)

#define	_IF_PREPEND(ifq, m) do {				\
	(m)->m_nextpkt = (ifq)->ifq_head;			\
	if ((ifq)->ifq_tail == NULL)				\
		(ifq)->ifq_tail = (m);				\
	(ifq)->ifq_head = (m);					\
	(ifq)->ifq_len++;					\
} while (0)

#define	IF_PREPEND(ifq, m) do {					\
	IF_LOCK(ifq);						\
	_IF_PREPEND(ifq, m);					\
	IF_UNLOCK(ifq);						\
} while (0)

#define	_IF_DEQUEUE(ifq, m) do {				\
	(m) = (ifq)->ifq_head;					\
	if (m) {						\
		if (((ifq)->ifq_head = (m)->m_nextpkt) == NULL)	\
			(ifq)->ifq_tail = NULL;			\
		(m)->m_nextpkt = NULL;				\
		(ifq)->ifq_len--;				\
	}							\
} while (0)

#define	IF_DEQUEUE(ifq, m) do {					\
	IF_LOCK(ifq);						\
	_IF_DEQUEUE(ifq, m);					\
	IF_UNLOCK(ifq);						\
} while (0)
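
/*
 * Example (sketch): the unlocked forms let a caller combine a full-queue
 * check and the enqueue under a single lock hold; "q" is an illustrative
 * struct ifqueue.
 *
 *	IF_LOCK(&q);
 *	if (_IF_QFULL(&q)) {
 *		_IF_DROP(&q);
 *		IF_UNLOCK(&q);
 *		m_freem(m);
 *	} else {
 *		_IF_ENQUEUE(&q, m);
 *		IF_UNLOCK(&q);
 *	}
 */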

#define	_IF_DEQUEUE_ALL(ifq, m) do {				\
	(m) = (ifq)->ifq_head;					\
	(ifq)->ifq_head = (ifq)->ifq_tail = NULL;		\
	(ifq)->ifq_len = 0;					\
} while (0)

#define	IF_DEQUEUE_ALL(ifq, m) do {				\
	IF_LOCK(ifq);						\
	_IF_DEQUEUE_ALL(ifq, m);				\
	IF_UNLOCK(ifq);						\
} while (0)

#define	_IF_POLL(ifq, m)	((m) = (ifq)->ifq_head)
#define	IF_POLL(ifq, m)		_IF_POLL(ifq, m)

#define	_IF_DRAIN(ifq) do {					\
	struct mbuf *m;						\
	for (;;) {						\
		_IF_DEQUEUE(ifq, m);				\
		if (m == NULL)					\
			break;					\
		m_freem(m);					\
	}							\
} while (0)

#define	IF_DRAIN(ifq) do {					\
	IF_LOCK(ifq);						\
	_IF_DRAIN(ifq);						\
	IF_UNLOCK(ifq);						\
} while (0)

#ifdef _KERNEL
/* interface link layer address change event */
typedef void (*iflladdr_event_handler_t)(void *, struct ifnet *);
EVENTHANDLER_DECLARE(iflladdr_event, iflladdr_event_handler_t);
/* interface address change event */
typedef void (*ifaddr_event_handler_t)(void *, struct ifnet *);
EVENTHANDLER_DECLARE(ifaddr_event, ifaddr_event_handler_t);
/* new interface arrival event */
typedef void (*ifnet_arrival_event_handler_t)(void *, struct ifnet *);
EVENTHANDLER_DECLARE(ifnet_arrival_event, ifnet_arrival_event_handler_t);
/* interface departure event */
typedef void (*ifnet_departure_event_handler_t)(void *, struct ifnet *);
EVENTHANDLER_DECLARE(ifnet_departure_event, ifnet_departure_event_handler_t);
/* Interface link state change event */
typedef void (*ifnet_link_event_handler_t)(void *, struct ifnet *, int);
EVENTHANDLER_DECLARE(ifnet_link_event, ifnet_link_event_handler_t);
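
/*
 * Example (sketch, hypothetical "foo" module): consumers subscribe with
 * EVENTHANDLER_REGISTER(9) and must deregister before unload.
 *
 *	static void
 *	foo_arrival(void *arg, struct ifnet *ifp)
 *	{
 *		...
 *	}
 *
 *	tag = EVENTHANDLER_REGISTER(ifnet_arrival_event, foo_arrival,
 *	    NULL, EVENTHANDLER_PRI_ANY);
 *	...
 *	EVENTHANDLER_DEREGISTER(ifnet_arrival_event, tag);
 */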

/*
 * interface groups
 */
struct ifg_group {
	char		ifg_group[IFNAMSIZ];
	u_int		ifg_refcnt;
	void		*ifg_pf_kif;
	TAILQ_HEAD(, ifg_member) ifg_members;
	TAILQ_ENTRY(ifg_group) ifg_next;
};

struct ifg_member {
	TAILQ_ENTRY(ifg_member) ifgm_next;
	struct ifnet	*ifgm_ifp;
};

struct ifg_list {
	struct ifg_group *ifgl_group;
	TAILQ_ENTRY(ifg_list) ifgl_next;
};

/* group attach event */
typedef void (*group_attach_event_handler_t)(void *, struct ifg_group *);
EVENTHANDLER_DECLARE(group_attach_event, group_attach_event_handler_t);
/* group detach event */
typedef void (*group_detach_event_handler_t)(void *, struct ifg_group *);
EVENTHANDLER_DECLARE(group_detach_event, group_detach_event_handler_t);
/* group change event */
typedef void (*group_change_event_handler_t)(void *, const char *);
EVENTHANDLER_DECLARE(group_change_event, group_change_event_handler_t);

#define	IF_AFDATA_LOCK_INIT(ifp)	\
	rw_init(&(ifp)->if_afdata_lock, "if_afdata")

#define	IF_AFDATA_WLOCK(ifp)	rw_wlock(&(ifp)->if_afdata_lock)
#define	IF_AFDATA_RLOCK(ifp)	rw_rlock(&(ifp)->if_afdata_lock)
#define	IF_AFDATA_WUNLOCK(ifp)	rw_wunlock(&(ifp)->if_afdata_lock)
#define	IF_AFDATA_RUNLOCK(ifp)	rw_runlock(&(ifp)->if_afdata_lock)
#define	IF_AFDATA_LOCK(ifp)	IF_AFDATA_WLOCK(ifp)
#define	IF_AFDATA_UNLOCK(ifp)	IF_AFDATA_WUNLOCK(ifp)
#define	IF_AFDATA_TRYLOCK(ifp)	rw_try_wlock(&(ifp)->if_afdata_lock)
#define	IF_AFDATA_DESTROY(ifp)	rw_destroy(&(ifp)->if_afdata_lock)

#define	IF_AFDATA_LOCK_ASSERT(ifp)	rw_assert(&(ifp)->if_afdata_lock, RA_LOCKED)
#define	IF_AFDATA_RLOCK_ASSERT(ifp)	rw_assert(&(ifp)->if_afdata_lock, RA_RLOCKED)
#define	IF_AFDATA_WLOCK_ASSERT(ifp)	rw_assert(&(ifp)->if_afdata_lock, RA_WLOCKED)
#define	IF_AFDATA_UNLOCK_ASSERT(ifp)	rw_assert(&(ifp)->if_afdata_lock, RA_UNLOCKED)
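
/*
 * Example (sketch): per-address-family state hung off if_afdata is normally
 * read under the af-data lock; AF_INET6 is illustrative.
 *
 *	IF_AFDATA_RLOCK(ifp);
 *	ext = ifp->if_afdata[AF_INET6];
 *	IF_AFDATA_RUNLOCK(ifp);
 */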

int	if_handoff(struct ifqueue *ifq, struct mbuf *m, struct ifnet *ifp,
	    int adjust);
#define	IF_HANDOFF(ifq, m, ifp)			\
	if_handoff((struct ifqueue *)ifq, m, ifp, 0)
#define	IF_HANDOFF_ADJ(ifq, m, ifp, adj)	\
	if_handoff((struct ifqueue *)ifq, m, ifp, adj)

void	if_start(struct ifnet *);

#define	IFQ_ENQUEUE(ifq, m, err)					\
do {									\
	IF_LOCK(ifq);							\
	if (ALTQ_IS_ENABLED(ifq))					\
		ALTQ_ENQUEUE(ifq, m, NULL, err);			\
	else {								\
		if (_IF_QFULL(ifq)) {					\
			m_freem(m);					\
			(err) = ENOBUFS;				\
		} else {						\
			_IF_ENQUEUE(ifq, m);				\
			(err) = 0;					\
		}							\
	}								\
	if (err)							\
		(ifq)->ifq_drops++;					\
	IF_UNLOCK(ifq);							\
} while (0)

#define	IFQ_DEQUEUE_NOLOCK(ifq, m)					\
do {									\
	if (TBR_IS_ENABLED(ifq))					\
		(m) = tbr_dequeue_ptr(ifq, ALTDQ_REMOVE);		\
	else if (ALTQ_IS_ENABLED(ifq))					\
		ALTQ_DEQUEUE(ifq, m);					\
	else								\
		_IF_DEQUEUE(ifq, m);					\
} while (0)

#define	IFQ_DEQUEUE(ifq, m)						\
do {									\
	IF_LOCK(ifq);							\
	IFQ_DEQUEUE_NOLOCK(ifq, m);					\
	IF_UNLOCK(ifq);							\
} while (0)

#define	IFQ_POLL_NOLOCK(ifq, m)						\
do {									\
	if (TBR_IS_ENABLED(ifq))					\
		(m) = tbr_dequeue_ptr(ifq, ALTDQ_POLL);			\
	else if (ALTQ_IS_ENABLED(ifq))					\
		ALTQ_POLL(ifq, m);					\
	else								\
		_IF_POLL(ifq, m);					\
} while (0)

#define	IFQ_POLL(ifq, m)						\
do {									\
	IF_LOCK(ifq);							\
	IFQ_POLL_NOLOCK(ifq, m);					\
	IF_UNLOCK(ifq);							\
} while (0)

#define	IFQ_PURGE_NOLOCK(ifq)						\
do {									\
	if (ALTQ_IS_ENABLED(ifq)) {					\
		ALTQ_PURGE(ifq);					\
	} else								\
		_IF_DRAIN(ifq);						\
} while (0)

#define	IFQ_PURGE(ifq)							\
do {									\
	IF_LOCK(ifq);							\
	IFQ_PURGE_NOLOCK(ifq);						\
	IF_UNLOCK(ifq);							\
} while (0)

#define	IFQ_SET_READY(ifq)						\
	do { ((ifq)->altq_flags |= ALTQF_READY); } while (0)

#define	IFQ_LOCK(ifq)			IF_LOCK(ifq)
#define	IFQ_UNLOCK(ifq)			IF_UNLOCK(ifq)
#define	IFQ_LOCK_ASSERT(ifq)		IF_LOCK_ASSERT(ifq)
#define	IFQ_IS_EMPTY(ifq)		((ifq)->ifq_len == 0)
#define	IFQ_INC_LEN(ifq)		((ifq)->ifq_len++)
#define	IFQ_DEC_LEN(ifq)		(--(ifq)->ifq_len)
#define	IFQ_INC_DROPS(ifq)		((ifq)->ifq_drops++)
#define	IFQ_SET_MAXLEN(ifq, len)	((ifq)->ifq_maxlen = (len))

/*
 * The IFF_DRV_OACTIVE test should really occur in the device driver, not in
 * the handoff logic, as that flag is locked by the device driver.
 */
#define	IFQ_HANDOFF_ADJ(ifp, m, adj, err)				\
do {									\
	int len;							\
	short mflags;							\
									\
	len = (m)->m_pkthdr.len;					\
	mflags = (m)->m_flags;						\
	IFQ_ENQUEUE(&(ifp)->if_snd, m, err);				\
	if ((err) == 0) {						\
		(ifp)->if_obytes += len + (adj);			\
		if (mflags & M_MCAST)					\
			(ifp)->if_omcasts++;				\
		if (((ifp)->if_drv_flags & IFF_DRV_OACTIVE) == 0)	\
			if_start(ifp);					\
	}								\
} while (0)

#define	IFQ_HANDOFF(ifp, m, err)					\
	IFQ_HANDOFF_ADJ(ifp, m, 0, err)
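
/*
 * Example (sketch of the classic queue-based transmit model; "foo_encap"
 * is hypothetical): an if_start routine drains if_snd with the IFQ_DRV_*
 * macros defined below while transmit resources last.
 *
 *	static void
 *	foo_start_locked(struct ifnet *ifp)
 *	{
 *		struct mbuf *m;
 *
 *		while (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) {
 *			IFQ_DRV_DEQUEUE(&ifp->if_snd, m);
 *			if (m == NULL)
 *				break;
 *			if (foo_encap(ifp->if_softc, &m) != 0) {
 *				if (m != NULL)
 *					IFQ_DRV_PREPEND(&ifp->if_snd, m);
 *				ifp->if_drv_flags |= IFF_DRV_OACTIVE;
 *				break;
 *			}
 *			BPF_MTAP(ifp, m);
 *		}
 *	}
 */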

#define	IFQ_DRV_DEQUEUE(ifq, m)						\
do {									\
	(m) = (ifq)->ifq_drv_head;					\
	if (m) {							\
		if (((ifq)->ifq_drv_head = (m)->m_nextpkt) == NULL)	\
			(ifq)->ifq_drv_tail = NULL;			\
		(m)->m_nextpkt = NULL;					\
		(ifq)->ifq_drv_len--;					\
	} else {							\
		IFQ_LOCK(ifq);						\
		IFQ_DEQUEUE_NOLOCK(ifq, m);				\
		while ((ifq)->ifq_drv_len < (ifq)->ifq_drv_maxlen) {	\
			struct mbuf *m0;				\
			IFQ_DEQUEUE_NOLOCK(ifq, m0);			\
			if (m0 == NULL)					\
				break;					\
			m0->m_nextpkt = NULL;				\
			if ((ifq)->ifq_drv_tail == NULL)		\
				(ifq)->ifq_drv_head = m0;		\
			else						\
				(ifq)->ifq_drv_tail->m_nextpkt = m0;	\
			(ifq)->ifq_drv_tail = m0;			\
			(ifq)->ifq_drv_len++;				\
		}							\
		IFQ_UNLOCK(ifq);					\
	}								\
} while (0)

#define	IFQ_DRV_PREPEND(ifq, m)						\
do {									\
	(m)->m_nextpkt = (ifq)->ifq_drv_head;				\
	if ((ifq)->ifq_drv_tail == NULL)				\
		(ifq)->ifq_drv_tail = (m);				\
	(ifq)->ifq_drv_head = (m);					\
	(ifq)->ifq_drv_len++;						\
} while (0)

#define	IFQ_DRV_IS_EMPTY(ifq)						\
	(((ifq)->ifq_drv_len == 0) && ((ifq)->ifq_len == 0))

#define	IFQ_DRV_PURGE(ifq)						\
do {									\
	struct mbuf *m, *n = (ifq)->ifq_drv_head;			\
	while ((m = n) != NULL) {					\
		n = m->m_nextpkt;					\
		m_freem(m);						\
	}								\
	(ifq)->ifq_drv_head = (ifq)->ifq_drv_tail = NULL;		\
	(ifq)->ifq_drv_len = 0;					\
	IFQ_PURGE(ifq);							\
} while (0)

#ifdef _KERNEL
static __inline void
if_initbaudrate(struct ifnet *ifp, uintmax_t baud)
{

	ifp->if_baudrate_pf = 0;
	while (baud > (u_long)(~0UL)) {
		baud /= 10;
		ifp->if_baudrate_pf++;
	}
	ifp->if_baudrate = baud;
}
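
/*
 * Example: on a platform with a 32-bit u_long, if_initbaudrate(ifp,
 * IF_Gbps(10)) stores if_baudrate = 1000000000 with if_baudrate_pf = 1,
 * i.e. 10^9 scaled by 10^1 = 10 Gb/s.
 */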
|
|
|
|
|
2008-12-17 04:00:43 +00:00
|
|
|
static __inline int
drbr_enqueue(struct ifnet *ifp, struct buf_ring *br, struct mbuf *m)
{
	int error = 0;

#ifdef ALTQ
	if (ALTQ_IS_ENABLED(&ifp->if_snd)) {
		IFQ_ENQUEUE(&ifp->if_snd, m, error);
		return (error);
	}
#endif
	error = buf_ring_enqueue(br, m);
	if (error)
		m_freem(m);

	return (error);
}
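
/*
 * Sketch of a typical multiqueue if_transmit method built on
 * drbr_enqueue() (illustrative only; the "foo" softc, buf_ring and
 * taskqueue names are hypothetical):
 *
 *	static int
 *	foo_transmit(struct ifnet *ifp, struct mbuf *m)
 *	{
 *		struct foo_softc *sc = ifp->if_softc;
 *		int error;
 *
 *		error = drbr_enqueue(ifp, sc->foo_br, m);
 *		if (error != 0)
 *			return (error);
 *		taskqueue_enqueue(sc->foo_tq, &sc->foo_tx_task);
 *		return (0);
 *	}
 *
 * Note that drbr_enqueue() frees the mbuf itself on failure, so the
 * caller must not free it again.
 */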
static __inline void
drbr_putback(struct ifnet *ifp, struct buf_ring *br, struct mbuf *new)
{
	/*
	 * The mbuf at the head of the queue is to be replaced by this
	 * one; the caller may legitimately pass back a different mbuf
	 * than the one drbr_peek() returned (e.g. after m_defrag()).
	 */
#ifdef ALTQ
	if (ifp != NULL && ALTQ_IS_ENABLED(&ifp->if_snd)) {
		/*
		 * In the ALTQ case the peek actually dequeued the
		 * mbuf, so put it back.
		 */
		IFQ_DRV_PREPEND(&ifp->if_snd, new);
		return;
	}
#endif
	buf_ring_putback_sc(br, new);
}
static __inline struct mbuf *
drbr_peek(struct ifnet *ifp, struct buf_ring *br)
{
#ifdef ALTQ
	struct mbuf *m;

	if (ifp != NULL && ALTQ_IS_ENABLED(&ifp->if_snd)) {
		/*
		 * ALTQ has no notion of peeking: pull the mbuf off
		 * as a dequeue, since drbr_advance() does nothing
		 * for ALTQ and drbr_putback() will prepend it again.
		 */
		IFQ_DEQUEUE(&ifp->if_snd, m);
		return (m);
	}
#endif
	return (buf_ring_peek(br));
}
static __inline void
drbr_flush(struct ifnet *ifp, struct buf_ring *br)
{
	struct mbuf *m;

#ifdef ALTQ
	if (ifp != NULL && ALTQ_IS_ENABLED(&ifp->if_snd))
		IFQ_PURGE(&ifp->if_snd);
#endif
	while ((m = buf_ring_dequeue_sc(br)) != NULL)
		m_freem(m);
}
static __inline void
drbr_free(struct buf_ring *br, struct malloc_type *type)
{

	drbr_flush(NULL, br);
	buf_ring_free(br, type);
}
static __inline struct mbuf *
drbr_dequeue(struct ifnet *ifp, struct buf_ring *br)
{
#ifdef ALTQ
	struct mbuf *m;

	if (ifp != NULL && ALTQ_IS_ENABLED(&ifp->if_snd)) {
		IFQ_DEQUEUE(&ifp->if_snd, m);
		return (m);
	}
#endif
	return (buf_ring_dequeue_sc(br));
}
static __inline void
drbr_advance(struct ifnet *ifp, struct buf_ring *br)
{
#ifdef ALTQ
	/* Nothing to do here, since in the ALTQ case the peek already dequeued. */
	if (ifp != NULL && ALTQ_IS_ENABLED(&ifp->if_snd))
		return;
#endif
	buf_ring_advance_sc(br);
}
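
/*
 * drbr_peek(), drbr_advance() and drbr_putback() exist for drivers
 * that must not lose an mbuf when the hardware ring is full.  A
 * hedged sketch of the usual consumer loop (the "foo" names are
 * hypothetical):
 *
 *	while ((m = drbr_peek(ifp, br)) != NULL) {
 *		if (foo_encap(sc, &m) != 0) {
 *			if (m == NULL)
 *				drbr_advance(ifp, br);
 *			else
 *				drbr_putback(ifp, br, m);
 *			break;
 *		}
 *		drbr_advance(ifp, br);
 *	}
 *
 * foo_encap() may replace the mbuf (e.g. via m_defrag()), which is
 * why the possibly updated pointer is handed back to drbr_putback().
 */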
static __inline struct mbuf *
drbr_dequeue_cond(struct ifnet *ifp, struct buf_ring *br,
    int (*func) (struct mbuf *, void *), void *arg)
{
	struct mbuf *m;
#ifdef ALTQ
	if (ALTQ_IS_ENABLED(&ifp->if_snd)) {
		IFQ_LOCK(&ifp->if_snd);
		IFQ_POLL_NOLOCK(&ifp->if_snd, m);
		if (m != NULL && func(m, arg) == 0) {
			IFQ_UNLOCK(&ifp->if_snd);
			return (NULL);
		}
		IFQ_DEQUEUE_NOLOCK(&ifp->if_snd, m);
		IFQ_UNLOCK(&ifp->if_snd);
		return (m);
	}
#endif
	m = buf_ring_peek(br);
	if (m == NULL || func(m, arg) == 0)
		return (NULL);

	return (buf_ring_dequeue_sc(br));
}
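
/*
 * Illustrative predicate for drbr_dequeue_cond() (hypothetical
 * names): only dequeue a packet when the transmit queue still has
 * room for a maximally fragmented frame.
 *
 *	static int
 *	foo_tx_ready(struct mbuf *m, void *arg)
 *	{
 *		struct foo_txq *txq = arg;
 *
 *		return (txq->free_desc >= FOO_MAX_SEGS ? 1 : 0);
 *	}
 *
 *	m = drbr_dequeue_cond(ifp, br, foo_tx_ready, txq);
 *
 * The predicate returns 0 to leave the mbuf on the queue.
 */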
static __inline int
drbr_empty(struct ifnet *ifp, struct buf_ring *br)
{
#ifdef ALTQ
	if (ALTQ_IS_ENABLED(&ifp->if_snd))
		return (IFQ_IS_EMPTY(&ifp->if_snd));
#endif
	return (buf_ring_empty(br));
}
static __inline int
drbr_needs_enqueue(struct ifnet *ifp, struct buf_ring *br)
{
#ifdef ALTQ
	if (ALTQ_IS_ENABLED(&ifp->if_snd))
		return (1);
#endif
	return (!buf_ring_empty(br));
}
static __inline int
drbr_inuse(struct ifnet *ifp, struct buf_ring *br)
{
#ifdef ALTQ
	if (ALTQ_IS_ENABLED(&ifp->if_snd))
		return (ifp->if_snd.ifq_len);
#endif
	return (buf_ring_count(br));
}
#endif
/*
 * 72 was chosen below because it is the size of a TCP/IP
 * header (40) + the minimum mss (32).
 */
#define IF_MINMTU	72
#define IF_MAXMTU	65535
#define TOEDEV(ifp) ((ifp)->if_llsoftc)
#endif /* _KERNEL */
/*
 * The ifaddr structure contains information about one address
 * of an interface.  They are maintained by the different address families,
 * are allocated and attached when an address is set, and are linked
 * together so all addresses for an interface can be located.
 *
 * NOTE: a 'struct ifaddr' is always at the beginning of a larger
 * chunk of malloc'ed memory, where we store the three addresses
 * (ifa_addr, ifa_dstaddr and ifa_netmask) referenced here.
 */
struct ifaddr {
	struct sockaddr *ifa_addr;	/* address of interface */
	struct sockaddr *ifa_dstaddr;	/* other end of p-to-p link */
#define ifa_broadaddr ifa_dstaddr	/* broadcast address of interface */
	struct sockaddr *ifa_netmask;	/* used to determine subnet */
	struct if_data if_data;		/* not all members are meaningful */
	struct ifnet *ifa_ifp;		/* back-pointer to interface */
	struct carp_softc *ifa_carp;	/* pointer to CARP data */
	TAILQ_ENTRY(ifaddr) ifa_link;	/* queue macro glue */
	void (*ifa_rtrequest)		/* check or clean routes (+ or -)'d */
		(int, struct rtentry *, struct rt_addrinfo *);
	u_short ifa_flags;		/* mostly rt_flags for cloning */
	u_int ifa_refcnt;		/* references to this structure */
	int ifa_metric;			/* cost of going out this interface */
	int (*ifa_claim_addr)		/* check if an addr goes to this if */
		(struct ifaddr *, struct sockaddr *);
	struct mtx ifa_mtx;
};

#define IFA_ROUTE	RTF_UP		/* route installed */
#define IFA_RTSELF	RTF_HOST	/* loopback route to self installed */
/* for compatibility with other BSDs */
#define ifa_list ifa_link
#ifdef _KERNEL
#define IFA_LOCK(ifa)	mtx_lock(&(ifa)->ifa_mtx)
#define IFA_UNLOCK(ifa)	mtx_unlock(&(ifa)->ifa_mtx)
void ifa_free(struct ifaddr *ifa);
void ifa_init(struct ifaddr *ifa);
void ifa_ref(struct ifaddr *ifa);
#endif
/*
 * Multicast address structure.  This is analogous to the ifaddr
 * structure except that it keeps track of multicast addresses.
 */
struct ifmultiaddr {
	TAILQ_ENTRY(ifmultiaddr) ifma_link; /* queue macro glue */
	struct sockaddr *ifma_addr;	/* address this membership is for */
	struct sockaddr *ifma_lladdr;	/* link-layer translation, if any */
	struct ifnet *ifma_ifp;		/* back-pointer to interface */
	u_int ifma_refcount;		/* reference count */
	void *ifma_protospec;		/* protocol-specific state, if any */
	struct ifmultiaddr *ifma_llifma; /* pointer to ifma for ifma_lladdr */
};
#ifdef _KERNEL
extern struct rwlock_padalign ifnet_rwlock;
extern struct sx ifnet_sxlock;
#define IFNET_LOCK_INIT() do { \
	rw_init_flags(&ifnet_rwlock, "ifnet_rw", RW_RECURSE);	\
	sx_init_flags(&ifnet_sxlock, "ifnet_sx", SX_RECURSE);	\
} while (0)

#define IFNET_WLOCK() do {					\
	sx_xlock(&ifnet_sxlock);				\
	rw_wlock(&ifnet_rwlock);				\
} while (0)

#define IFNET_WUNLOCK() do {					\
	rw_wunlock(&ifnet_rwlock);				\
	sx_xunlock(&ifnet_sxlock);				\
} while (0)
/*
 * To assert the ifnet lock, you must know not only whether it's for read or
 * write, but also whether it was acquired with sleep support or not.
 */
#define IFNET_RLOCK_ASSERT() sx_assert(&ifnet_sxlock, SA_SLOCKED)
#define IFNET_RLOCK_NOSLEEP_ASSERT() rw_assert(&ifnet_rwlock, RA_RLOCKED)
#define IFNET_WLOCK_ASSERT() do { \
	sx_assert(&ifnet_sxlock, SA_XLOCKED);			\
	rw_assert(&ifnet_rwlock, RA_WLOCKED);			\
} while (0)
#define IFNET_RLOCK() sx_slock(&ifnet_sxlock)
#define IFNET_RLOCK_NOSLEEP() rw_rlock(&ifnet_rwlock)
#define IFNET_RUNLOCK() sx_sunlock(&ifnet_sxlock)
#define IFNET_RUNLOCK_NOSLEEP() rw_runlock(&ifnet_rwlock)
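
/*
 * Hedged sketch of walking the global interface list and keeping one
 * ifnet stable after the lock is dropped ("wanted" is a hypothetical
 * index):
 *
 *	IFNET_RLOCK_NOSLEEP();
 *	TAILQ_FOREACH(ifp, &V_ifnet, if_link) {
 *		if (ifp->if_index == wanted) {
 *			if_ref(ifp);
 *			break;
 *		}
 *	}
 *	IFNET_RUNLOCK_NOSLEEP();
 *	if (ifp != NULL) {
 *		... use ifp ...
 *		if_rele(ifp);
 *	}
 */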
/*
 * Look up an ifnet given its index; the _ref variant also acquires a
 * reference that must be freed using if_rele().  It is almost always a bug
 * to call ifnet_byindex() instead of ifnet_byindex_ref().
 */
struct ifnet *ifnet_byindex(u_short idx);
struct ifnet *ifnet_byindex_locked(u_short idx);
struct ifnet *ifnet_byindex_ref(u_short idx);
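
/*
 * For example (illustrative), code resolving a userland-supplied
 * index should prefer the reference-taking lookup:
 *
 *	ifp = ifnet_byindex_ref(idx);
 *	if (ifp == NULL)
 *		return (ENXIO);
 *	... use ifp; it cannot be freed underneath us ...
 *	if_rele(ifp);
 */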
/*
 * Given the index, ifaddr_byindex() returns the one and only
 * link-level ifaddr for the interface.  You are not supposed to use
 * it to traverse the list of addresses associated to the interface.
 */
struct ifaddr *ifaddr_byindex(u_short idx);
VNET_DECLARE(struct ifnethead, ifnet);
VNET_DECLARE(struct ifgrouphead, ifg_head);
VNET_DECLARE(int, if_index);
VNET_DECLARE(struct ifnet *, loif);	/* first loopback interface */
VNET_DECLARE(int, useloopback);
#define V_ifnet VNET(ifnet)
#define V_ifg_head VNET(ifg_head)
#define V_if_index VNET(if_index)
#define V_loif VNET(loif)
#define V_useloopback VNET(useloopback)
extern int ifqmaxlen;

int if_addgroup(struct ifnet *, const char *);
int if_delgroup(struct ifnet *, const char *);
int if_addmulti(struct ifnet *, struct sockaddr *, struct ifmultiaddr **);
int if_allmulti(struct ifnet *, int);
struct ifnet *if_alloc(u_char);
void if_attach(struct ifnet *);
void if_dead(struct ifnet *);
int if_delmulti(struct ifnet *, struct sockaddr *);
void if_delmulti_ifma(struct ifmultiaddr *);
void if_detach(struct ifnet *);
void if_vmove(struct ifnet *, struct vnet *);
void if_purgeaddrs(struct ifnet *);
void if_delallmulti(struct ifnet *);
void if_down(struct ifnet *);
struct ifmultiaddr *
	if_findmulti(struct ifnet *, struct sockaddr *);
void if_free(struct ifnet *);
void if_initname(struct ifnet *, const char *, int);
void if_link_state_change(struct ifnet *, int);
int if_printf(struct ifnet *, const char *, ...) __printflike(2, 3);
void if_qflush(struct ifnet *);
void if_ref(struct ifnet *);
void if_rele(struct ifnet *);
int if_setlladdr(struct ifnet *, const u_char *, int);
void if_up(struct ifnet *);
int ifioctl(struct socket *, u_long, caddr_t, struct thread *);
int ifpromisc(struct ifnet *, int);
struct ifnet *ifunit(const char *);
struct ifnet *ifunit_ref(const char *);

void ifq_init(struct ifaltq *, struct ifnet *ifp);
void ifq_delete(struct ifaltq *);

int ifa_add_loopback_route(struct ifaddr *, struct sockaddr *);
int ifa_del_loopback_route(struct ifaddr *, struct sockaddr *);

struct ifaddr *ifa_ifwithaddr(struct sockaddr *);
int ifa_ifwithaddr_check(struct sockaddr *);
struct ifaddr *ifa_ifwithbroadaddr(struct sockaddr *);
struct ifaddr *ifa_ifwithdstaddr(struct sockaddr *);
struct ifaddr *ifa_ifwithnet(struct sockaddr *, int);
struct ifaddr *ifa_ifwithroute(int, struct sockaddr *, struct sockaddr *);
struct ifaddr *ifa_ifwithroute_fib(int, struct sockaddr *, struct sockaddr *, u_int);
struct ifaddr *ifaof_ifpforaddr(struct sockaddr *, struct ifnet *);
int ifa_preferred(struct ifaddr *, struct ifaddr *);

int if_simloop(struct ifnet *ifp, struct mbuf *m, int af, int hlen);

typedef void *if_com_alloc_t(u_char type, struct ifnet *ifp);
typedef void if_com_free_t(void *com, u_char type);
void if_register_com_alloc(u_char type, if_com_alloc_t *a, if_com_free_t *f);
void if_deregister_com_alloc(u_char type);

#define IF_LLADDR(ifp) \
LLADDR((struct sockaddr_dl *)((ifp)->if_addr->ifa_addr))
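
/*
 * For instance (illustrative), an Ethernet driver filling in the
 * source MAC address of a locally generated frame might use:
 *
 *	bcopy(IF_LLADDR(ifp), eh->ether_shost, ETHER_ADDR_LEN);
 */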
#ifdef DEVICE_POLLING
enum poll_cmd { POLL_ONLY, POLL_AND_CHECK_STATUS };
typedef int poll_handler_t(struct ifnet *ifp, enum poll_cmd cmd, int count);
int ether_poll_register(poll_handler_t *h, struct ifnet *ifp);
int ether_poll_deregister(struct ifnet *ifp);
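
/*
 * Hedged sketch of how a driver toggles polling in its SIOCSIFCAP
 * ioctl handler (the "foo" names are hypothetical).  The driver
 * announces IFCAP_POLLING in if_capabilities at attach time, then:
 *
 *	if (mask & IFCAP_POLLING) {
 *		if (ifr->ifr_reqcap & IFCAP_POLLING) {
 *			error = ether_poll_register(foo_poll, ifp);
 *			if (error == 0) {
 *				FOO_LOCK(sc);
 *				foo_disable_intr(sc);
 *				ifp->if_capenable |= IFCAP_POLLING;
 *				FOO_UNLOCK(sc);
 *			}
 *		} else {
 *			error = ether_poll_deregister(ifp);
 *			FOO_LOCK(sc);
 *			foo_enable_intr(sc);
 *			ifp->if_capenable &= ~IFCAP_POLLING;
 *			FOO_UNLOCK(sc);
 *		}
 *	}
 */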
#endif /* DEVICE_POLLING */
#endif /* _KERNEL */
#endif /* !_NET_IF_VAR_H_ */