Bring over my initial work from the net80211 TX locking branch.

This patchset implements a new TX lock, covering both the per-VAP (and
thus per-node) TX locking and the serialisation through to the underlying
physical device.

This implements the hard requirement that frames are scheduled to the
underlying physical device in the same order that they are processed at
the VAP layer.  This includes the addition of extra encapsulation state
(such as sequence numbers and CCMP IVs); any ordering mismatch here will
result in dropped packets at the receiver.

There are multiple transmit contexts: the upper protocol layers as well as
the "raw" interface used by the management and BPF transmit paths.
All of these need to be correctly serialised or bad behaviour will result
under load.
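
As a rough sketch of the invariant (condensed from the new
ieee80211_start_pkt() in the diff below, with error paths trimmed): the
encapsulation state is assigned and the frame is handed to the parent
device under a single hold of the new TX lock, so concurrent senders
cannot interleave between the two steps:

	IEEE80211_TX_LOCK(ic);
	/* Assign seqno / CCMP IV / etc as part of encapsulation */
	m = ieee80211_encap(vap, ni, m);
	if (m == NULL) {
		IEEE80211_TX_UNLOCK(ic);
		ieee80211_free_node(ni);
		return (ENOBUFS);
	}
	/* Dispatch to the parent device in the same order */
	error = ieee80211_parent_transmit(ic, m);
	IEEE80211_TX_UNLOCK(ic);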

The specifics:

* Add a new TX IC lock - it will eventually just be used for serialisation
  to the underlying physical device but for now it's used for both the
  VAP encapsulation/serialisation and the physical device dispatch.

  This lock is specifically non-recursive.

* Methodize the parent transmit, vap transmit and ic_raw_xmit function
  pointers; use lock assertions in the parent/vap transmit routines.

* Add a lock assertion in ieee80211_encap() - the TX lock must be held
  here to guarantee sensible behaviour.

* Refactor out the packet sending code from ieee80211_start() - now
  ieee80211_start() is just a loop over the ifnet queue and it dispatches
  each VAP packet send through ieee80211_start_pkt().

  Yes, I will likely rename ieee80211_start_pkt() to something that
  better reflects its status as a VAP packet transmit path.  More on
  that later.

* Add locking around the management and BAR TX sending - to ensure that
  encapsulation and TX are done hand-in-hand.  (The shared locking
  pattern is sketched just after this list.)

* Add locking in the mesh code - again, to ensure that encapsulation
  and mesh transmit are done hand-in-hand.

* Add locking around the power save queue and ageq handling, when
  dispatching to the parent interface.

* Add locking around the WDS handoff.

* Add a note in the mesh dispatch code that the TX path needs to be
  re-thought-out - right now it's doing a direct parent device transmit
  rather than going via the vap layer.  It may "work", but it's likely
  incorrect (as it bypasses any possible per-node power save and
  aggregation handling.)
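
For reference, the management / BAR / mesh action sends mentioned above
now share the same shape - roughly the following, condensed from
hwmp_send_action() in the diff below: the frame setup/encapsulation and
the ieee80211_raw_output() call happen under one TX lock hold.

	IEEE80211_TX_LOCK(ic);
	ieee80211_send_setup(ni, m,
	    IEEE80211_FC0_TYPE_MGT | IEEE80211_FC0_SUBTYPE_ACTION,
	    IEEE80211_NONQOS_TID, vap->iv_myaddr, da, vap->iv_myaddr);
	/* ... fill in the frame body and the bpf_params ... */
	ret = ieee80211_raw_output(vap, ni, m, &params);
	IEEE80211_TX_UNLOCK(ic);
	return (ret);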

Why not a per-VAP or per-node lock?

Because in order to ensure per-VAP ordering, we'd have to hold the
VAP lock across parent->if_transmit().  There are a few problems
with this:

* There's some state being set up during each driver transmit - specifically,
  the encryption encap / CCMP IV setup.  That should eventually be dragged
  back into the encapsulation phase, but for now it lives in the driver TX
  path.  This should be locked.

* Two drivers (ath, iwn) re-use the node->ni_txseqs array in order to
  allocate sequence numbers when doing transmit aggregation.  This should
  also be locked.

* Drivers may have multiple frames queued already - so when one calls
  if_transmit(), it may end up dispatching multiple frames for different
  VAPs/nodes, each needing a different lock when handling that particular
  end destination.

So to be "correct" locking-wise, we'd end up needing to grab a VAP or
node lock inside the driver TX path when setting up crypto / AMPDU sequence
numbers, and we may already _have_ a TX lock held - mostly for the same
destination vap/node, but sometimes it'll be for others.  That could lead
to LORs (lock order reversals) and thus deadlocks.

So for now, I'm sticking with an IC TX lock.  It papers over the above,
and it has the added benefit that I can assert it's held when doing a
parent device transmit.  I'll look at splitting the locks out a bit more
later on.
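
Concretely, the assertions live in the new methodized transmit wrappers
(this is how they appear in the diff below, comments trimmed), so a path
that reaches the parent device without the TX lock held - or re-enters
the VAP transmit path with it held - trips the lock assertion:

	int
	ieee80211_parent_transmit(struct ieee80211com *ic, struct mbuf *m)
	{
		struct ifnet *parent = ic->ic_ifp;

		/* Ordering is only guaranteed if the caller holds the TX lock */
		IEEE80211_TX_LOCK_ASSERT(ic);
		return (parent->if_transmit(parent, m));
	}

	int
	ieee80211_vap_transmit(struct ieee80211vap *vap, struct mbuf *m)
	{
		struct ifnet *ifp = vap->iv_ifp;

		/* The VAP TX path takes the lock itself; don't hold it here */
		IEEE80211_TX_UNLOCK_ASSERT(vap->iv_ic);
		return (ifp->if_transmit(ifp, m));
	}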

General outstanding net80211 TX path issues / TODO:

* Look into separating out the VAP serialisation and the IC handoff.
  It's going to be tricky as parent->if_transmit() doesn't give me the
  opportunity to split queuing from driver dispatch.  See above.

* Work with monthadar to fix up the mesh transmit path so it doesn't go via
  the parent interface when retransmitting frames.

* Push the encryption handling back into the driver, if it's at all
  architecturally sane to do so.  I know it's possible - it's what mac80211
  in Linux does.

* Make ieee80211_raw_xmit() queue a frame into the VAP or parent queue
  rather than taking a short-cut directly into the driver.  There are QoS
  issues here - you do want your management frames to be encapsulated and
  pushed onto the stack sooner than the (large, bursty) backlog of queued
  data frames.  But there has to be a saner way to do this.

* Fragments are still broken - drivers need to be upgraded to an if_transmit()
  implementation and then fragmentation handling needs to be properly fixed.

Tested:

* STA - AR5416, AR9280, Intel 5300 abgn wifi
* Hostap - AR5416, AR9160, AR9280
* Mesh - some testing by monthadar@, more to come.
Commit 5cda6006e4 (parent bd9fba0cfe) by Adrian Chadd, 2013-03-08 20:23:55 +00:00.
13 changed files with 458 additions and 250 deletions.


@ -278,6 +278,7 @@ ieee80211_ifattach(struct ieee80211com *ic,
KASSERT(ifp->if_type == IFT_IEEE80211, ("if_type %d", ifp->if_type));
IEEE80211_LOCK_INIT(ic, ifp->if_xname);
IEEE80211_TX_LOCK_INIT(ic, ifp->if_xname);
TAILQ_INIT(&ic->ic_vaps);
/* Create a taskqueue for all state changes */
@ -385,6 +386,7 @@ ieee80211_ifdetach(struct ieee80211com *ic)
ifmedia_removeall(&ic->ic_media);
taskqueue_free(ic->ic_tq);
IEEE80211_TX_LOCK_DESTROY(ic);
IEEE80211_LOCK_DESTROY(ic);
}


@ -504,6 +504,44 @@ ieee80211_process_callback(struct ieee80211_node *ni,
}
}
/*
* Transmit a frame to the parent interface.
*
* TODO: if the transmission fails, make sure the parent node is freed
* (the callers will first need modifying.)
*/
int
ieee80211_parent_transmit(struct ieee80211com *ic,
struct mbuf *m)
{
struct ifnet *parent = ic->ic_ifp;
/*
* Assert the IC TX lock is held - this enforces the
* processing -> queuing order is maintained
*/
IEEE80211_TX_LOCK_ASSERT(ic);
return (parent->if_transmit(parent, m));
}
/*
* Transmit a frame to the VAP interface.
*/
int
ieee80211_vap_transmit(struct ieee80211vap *vap, struct mbuf *m)
{
struct ifnet *ifp = vap->iv_ifp;
/*
* When transmitting via the VAP, we shouldn't hold
* any IC TX lock as the VAP TX path will acquire it.
*/
IEEE80211_TX_UNLOCK_ASSERT(vap->iv_ic);
return (ifp->if_transmit(ifp, m));
}
#include <sys/libkern.h>
void


@ -56,6 +56,30 @@ typedef struct {
#define IEEE80211_UNLOCK_ASSERT(_ic) \
mtx_assert(IEEE80211_LOCK_OBJ(_ic), MA_NOTOWNED)
/*
* Transmit lock.
*
* This is a (mostly) temporary lock designed to serialise all of the
* transmission operations throughout the stack.
*/
typedef struct {
char name[16]; /* e.g. "ath0_tx_lock" */
struct mtx mtx;
} ieee80211_tx_lock_t;
#define IEEE80211_TX_LOCK_INIT(_ic, _name) do { \
ieee80211_tx_lock_t *cl = &(_ic)->ic_txlock; \
snprintf(cl->name, sizeof(cl->name), "%s_tx_lock", _name); \
mtx_init(&cl->mtx, cl->name, NULL, MTX_DEF); \
} while (0)
#define IEEE80211_TX_LOCK_OBJ(_ic) (&(_ic)->ic_txlock.mtx)
#define IEEE80211_TX_LOCK_DESTROY(_ic) mtx_destroy(IEEE80211_TX_LOCK_OBJ(_ic))
#define IEEE80211_TX_LOCK(_ic) mtx_lock(IEEE80211_TX_LOCK_OBJ(_ic))
#define IEEE80211_TX_UNLOCK(_ic) mtx_unlock(IEEE80211_TX_LOCK_OBJ(_ic))
#define IEEE80211_TX_LOCK_ASSERT(_ic) \
mtx_assert(IEEE80211_TX_LOCK_OBJ(_ic), MA_OWNED)
#define IEEE80211_TX_UNLOCK_ASSERT(_ic) \
mtx_assert(IEEE80211_TX_LOCK_OBJ(_ic), MA_NOTOWNED)
/*
* Node locking definitions.
*/
@ -272,9 +296,11 @@ int ieee80211_add_callback(struct mbuf *m,
void (*func)(struct ieee80211_node *, void *, int), void *arg);
void ieee80211_process_callback(struct ieee80211_node *, struct mbuf *, int);
void get_random_bytes(void *, size_t);
struct ieee80211com;
int ieee80211_parent_transmit(struct ieee80211com *, struct mbuf *);
int ieee80211_vap_transmit(struct ieee80211vap *, struct mbuf *);
void get_random_bytes(void *, size_t);
void ieee80211_sysctl_attach(struct ieee80211com *);
void ieee80211_sysctl_detach(struct ieee80211com *);


@ -412,7 +412,7 @@ hostap_deliver_data(struct ieee80211vap *vap,
if (mcopy != NULL) {
int len, err;
len = mcopy->m_pkthdr.len;
err = ifp->if_transmit(ifp, mcopy);
err = ieee80211_vap_transmit(vap, mcopy);
if (err) {
/* NB: IFQ_HANDOFF reclaims mcopy */
} else {
@ -2255,8 +2255,8 @@ void
ieee80211_recv_pspoll(struct ieee80211_node *ni, struct mbuf *m0)
{
struct ieee80211vap *vap = ni->ni_vap;
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_frame_min *wh;
struct ifnet *ifp;
struct mbuf *m;
uint16_t aid;
int qlen;
@ -2320,23 +2320,15 @@ ieee80211_recv_pspoll(struct ieee80211_node *ni, struct mbuf *m0)
}
m->m_flags |= M_PWR_SAV; /* bypass PS handling */
if (m->m_flags & M_ENCAP)
ifp = vap->iv_ic->ic_ifp;
else
ifp = vap->iv_ifp;
/*
* Free any node ref which this mbuf may have.
*
* Much like psq_mfree(), we assume that M_ENCAP nodes have
* node references.
* Do the right thing; if it's an encap'ed frame then
* call ieee80211_parent_transmit() (and free the ref) else
* call ieee80211_vap_transmit().
*/
if (ifp->if_transmit(ifp, m) != 0) {
/*
* XXX m is invalid (freed) at this point, determine M_ENCAP
* an alternate way.
*/
if (ifp == vap->iv_ic->ic_ifp)
if (m->m_flags & M_ENCAP) {
if (ieee80211_parent_transmit(ic, m) != 0)
ieee80211_free_node(ni);
} else {
(void) ieee80211_vap_transmit(vap, m);
}
}


@ -2392,7 +2392,9 @@ ieee80211_send_bar(struct ieee80211_node *ni,
* ic_raw_xmit will free the node reference
* regardless of queue/TX success or failure.
*/
ret = ic->ic_raw_xmit(ni, m, NULL);
IEEE80211_TX_LOCK(ic);
ret = ieee80211_raw_output(vap, ni, m, NULL);
IEEE80211_TX_UNLOCK(ic);
if (ret != 0) {
IEEE80211_NOTE(vap, IEEE80211_MSG_DEBUG | IEEE80211_MSG_11N,
ni, "send BAR: failed: (ret = %d)\n",


@ -592,6 +592,7 @@ hwmp_send_action(struct ieee80211vap *vap,
struct ieee80211_bpf_params params;
struct mbuf *m;
uint8_t *frm;
int ret;
if (IEEE80211_IS_MULTICAST(da)) {
ni = ieee80211_ref_node(vap->iv_bss);
@ -654,6 +655,9 @@ hwmp_send_action(struct ieee80211vap *vap,
vap->iv_stats.is_tx_nobuf++;
return ENOMEM;
}
IEEE80211_TX_LOCK(ic);
ieee80211_send_setup(ni, m,
IEEE80211_FC0_TYPE_MGT | IEEE80211_FC0_SUBTYPE_ACTION,
IEEE80211_NONQOS_TID, vap->iv_myaddr, da, vap->iv_myaddr);
@ -669,7 +673,9 @@ hwmp_send_action(struct ieee80211vap *vap,
else
params.ibp_try0 = ni->ni_txparms->maxretry;
params.ibp_power = ni->ni_txpower;
return ic->ic_raw_xmit(ni, m, &params);
ret = ieee80211_raw_output(vap, ni, m, &params);
IEEE80211_TX_UNLOCK(ic);
return (ret);
}
#define ADDSHORT(frm, v) do { \
@ -1271,12 +1277,9 @@ hwmp_recv_prep(struct ieee80211vap *vap, struct ieee80211_node *ni,
struct ieee80211_mesh_route *rtext = NULL;
struct ieee80211_hwmp_route *hr;
struct ieee80211com *ic = vap->iv_ic;
struct ifnet *ifp = vap->iv_ifp;
struct mbuf *m, *next;
uint32_t metric = 0;
const uint8_t *addr;
int is_encap;
struct ieee80211_node *ni_encap;
IEEE80211_NOTE(vap, IEEE80211_MSG_HWMP, ni,
"received PREP, orig %6D, targ %6D", prep->prep_origaddr, ":",
@ -1450,22 +1453,21 @@ hwmp_recv_prep(struct ieee80211vap *vap, struct ieee80211_node *ni,
m = ieee80211_ageq_remove(&ic->ic_stageq,
(struct ieee80211_node *)(uintptr_t)
ieee80211_mac_hash(ic, addr)); /* either dest or ext_dest */
/*
* All frames in the stageq here should be non-M_ENCAP; or things
* will get very unhappy.
*/
for (; m != NULL; m = next) {
is_encap = !! (m->m_flags & M_ENCAP);
ni_encap = (struct ieee80211_node *) m->m_pkthdr.rcvif;
next = m->m_nextpkt;
m->m_nextpkt = NULL;
IEEE80211_NOTE(vap, IEEE80211_MSG_HWMP, ni,
"flush queued frame %p len %d", m, m->m_pkthdr.len);
/*
* If the mbuf has M_ENCAP set, ensure we free it.
* Note that after if_transmit() is called, m is invalid.
*/
if (ifp->if_transmit(ifp, m) != 0) {
if (is_encap)
ieee80211_free_node(ni_encap);
}
(void) ieee80211_vap_transmit(vap, m);
}
#undef IS_PROXY
#undef PROXIED_BY_US


@ -1041,11 +1041,12 @@ mesh_transmit_to_gate(struct ieee80211vap *vap, struct mbuf *m,
{
struct ifnet *ifp = vap->iv_ifp;
struct ieee80211com *ic = vap->iv_ic;
struct ifnet *parent = ic->ic_ifp;
struct ieee80211_node *ni;
struct ether_header *eh;
int error;
IEEE80211_TX_UNLOCK_ASSERT(ic);
eh = mtod(m, struct ether_header *);
ni = ieee80211_mesh_find_txnode(vap, rt_gate->rt_dest);
if (ni == NULL) {
@ -1132,6 +1133,8 @@ mesh_transmit_to_gate(struct ieee80211vap *vap, struct mbuf *m,
}
}
#endif /* IEEE80211_SUPPORT_SUPERG */
IEEE80211_TX_LOCK(ic);
if (__predict_true((vap->iv_caps & IEEE80211_C_8023ENCAP) == 0)) {
/*
* Encapsulate the packet in prep for transmission.
@ -1143,9 +1146,9 @@ mesh_transmit_to_gate(struct ieee80211vap *vap, struct mbuf *m,
return;
}
}
error = parent->if_transmit(parent, m);
error = ieee80211_parent_transmit(ic, m);
IEEE80211_TX_UNLOCK(ic);
if (error != 0) {
m_freem(m);
ieee80211_free_node(ni);
} else {
ifp->if_opackets++;
@ -1171,6 +1174,8 @@ ieee80211_mesh_forward_to_gates(struct ieee80211vap *vap,
struct ieee80211_mesh_gate_route *gr = NULL, *gr_next;
struct mbuf *m, *mcopy, *next;
IEEE80211_TX_UNLOCK_ASSERT(ic);
KASSERT( rt_dest->rt_flags == IEEE80211_MESHRT_FLAGS_DISCOVER,
("Route is not marked with IEEE80211_MESHRT_FLAGS_DISCOVER"));
@ -1240,7 +1245,6 @@ mesh_forward(struct ieee80211vap *vap, struct mbuf *m,
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_mesh_state *ms = vap->iv_mesh;
struct ifnet *ifp = vap->iv_ifp;
struct ifnet *parent = ic->ic_ifp;
const struct ieee80211_frame *wh =
mtod(m, const struct ieee80211_frame *);
struct mbuf *mcopy;
@ -1249,6 +1253,9 @@ mesh_forward(struct ieee80211vap *vap, struct mbuf *m,
struct ieee80211_node *ni;
int err;
/* This is called from the RX path - don't hold this lock */
IEEE80211_TX_UNLOCK_ASSERT(ic);
/*
* mesh ttl of 1 means we are the last one receiving it,
* according to amendment we decrement and then check if
@ -1320,7 +1327,20 @@ mesh_forward(struct ieee80211vap *vap, struct mbuf *m,
/* XXX do we know m_nextpkt is NULL? */
mcopy->m_pkthdr.rcvif = (void *) ni;
err = parent->if_transmit(parent, mcopy);
/*
* XXX this bypasses all of the VAP TX handling; it passes frames
* directly to the parent interface.
*
* Because of this, there's no TX lock being held as there's no
* encaps state being used.
*
* Doing a direct parent transmit may not be the correct thing
* to do here; we'll have to re-think this soon.
*/
IEEE80211_TX_LOCK(ic);
err = ieee80211_parent_transmit(ic, mcopy);
IEEE80211_TX_UNLOCK(ic);
if (err != 0) {
/* NB: IFQ_HANDOFF reclaims mbuf */
ieee80211_free_node(ni);
@ -1457,6 +1477,10 @@ mesh_recv_indiv_data_to_fwrd(struct ieee80211vap *vap, struct mbuf *m,
struct ieee80211_qosframe_addr4 *qwh;
struct ieee80211_mesh_state *ms = vap->iv_mesh;
struct ieee80211_mesh_route *rt_meshda, *rt_meshsa;
struct ieee80211com *ic = vap->iv_ic;
/* This is called from the RX path - don't hold this lock */
IEEE80211_TX_UNLOCK_ASSERT(ic);
qwh = (struct ieee80211_qosframe_addr4 *)wh;
@ -1512,8 +1536,12 @@ mesh_recv_indiv_data_to_me(struct ieee80211vap *vap, struct mbuf *m,
const struct ieee80211_meshcntl_ae10 *mc10;
struct ieee80211_mesh_state *ms = vap->iv_mesh;
struct ieee80211_mesh_route *rt;
struct ieee80211com *ic = vap->iv_ic;
int ae;
/* This is called from the RX path - don't hold this lock */
IEEE80211_TX_UNLOCK_ASSERT(ic);
qwh = (struct ieee80211_qosframe_addr4 *)wh;
mc10 = (const struct ieee80211_meshcntl_ae10 *)mc;
@ -1575,6 +1603,10 @@ mesh_recv_group_data(struct ieee80211vap *vap, struct mbuf *m,
{
#define MC01(mc) ((const struct ieee80211_meshcntl_ae01 *)mc)
struct ieee80211_mesh_state *ms = vap->iv_mesh;
struct ieee80211com *ic = vap->iv_ic;
/* This is called from the RX path - don't hold this lock */
IEEE80211_TX_UNLOCK_ASSERT(ic);
mesh_forward(vap, m, mc);
@ -1621,6 +1653,9 @@ mesh_input(struct ieee80211_node *ni, struct mbuf *m, int rssi, int nf)
need_tap = 1; /* mbuf need to be tapped. */
type = -1; /* undefined */
/* This is called from the RX path - don't hold this lock */
IEEE80211_TX_UNLOCK_ASSERT(ic);
if (m->m_pkthdr.len < sizeof(struct ieee80211_frame_min)) {
IEEE80211_DISCARD_MAC(vap, IEEE80211_MSG_ANY,
ni->ni_macaddr, NULL,
@ -2743,6 +2778,7 @@ mesh_send_action(struct ieee80211_node *ni,
struct ieee80211com *ic = ni->ni_ic;
struct ieee80211_bpf_params params;
struct ieee80211_frame *wh;
int ret;
KASSERT(ni != NULL, ("null node"));
@ -2761,6 +2797,7 @@ mesh_send_action(struct ieee80211_node *ni,
return ENOMEM;
}
IEEE80211_TX_LOCK(ic);
wh = mtod(m, struct ieee80211_frame *);
ieee80211_send_setup(ni, m,
IEEE80211_FC0_TYPE_MGT | IEEE80211_FC0_SUBTYPE_ACTION,
@ -2778,7 +2815,9 @@ mesh_send_action(struct ieee80211_node *ni,
IEEE80211_NODE_STAT(ni, tx_mgmt);
return ic->ic_raw_xmit(ni, m, &params);
ret = ieee80211_raw_output(vap, ni, m, &params);
IEEE80211_TX_UNLOCK(ic);
return (ret);
}
#define ADDSHORT(frm, v) do { \


@ -109,6 +109,255 @@ doprint(struct ieee80211vap *vap, int subtype)
}
#endif
/*
* Send the given mbuf through the given vap.
*
* This consumes the mbuf regardless of whether the transmit
* was successful or not.
*
* This does none of the initial checks that ieee80211_start()
* does (eg CAC timeout, interface wakeup) - the caller must
* do this first.
*/
static int
ieee80211_start_pkt(struct ieee80211vap *vap, struct mbuf *m)
{
#define IS_DWDS(vap) \
(vap->iv_opmode == IEEE80211_M_WDS && \
(vap->iv_flags_ext & IEEE80211_FEXT_WDSLEGACY) == 0)
struct ieee80211com *ic = vap->iv_ic;
struct ifnet *ifp = vap->iv_ifp;
struct ieee80211_node *ni;
struct ether_header *eh;
int error;
/*
* Cancel any background scan.
*/
if (ic->ic_flags & IEEE80211_F_SCAN)
ieee80211_cancel_anyscan(vap);
/*
* Find the node for the destination so we can do
* things like power save and fast frames aggregation.
*
* NB: past this point various code assumes the first
* mbuf has the 802.3 header present (and contiguous).
*/
ni = NULL;
if (m->m_len < sizeof(struct ether_header) &&
(m = m_pullup(m, sizeof(struct ether_header))) == NULL) {
IEEE80211_DPRINTF(vap, IEEE80211_MSG_OUTPUT,
"discard frame, %s\n", "m_pullup failed");
vap->iv_stats.is_tx_nobuf++; /* XXX */
ifp->if_oerrors++;
return (ENOBUFS);
}
eh = mtod(m, struct ether_header *);
if (ETHER_IS_MULTICAST(eh->ether_dhost)) {
if (IS_DWDS(vap)) {
/*
* Only unicast frames from the above go out
* DWDS vaps; multicast frames are handled by
* dispatching the frame as it comes through
* the AP vap (see below).
*/
IEEE80211_DISCARD_MAC(vap, IEEE80211_MSG_WDS,
eh->ether_dhost, "mcast", "%s", "on DWDS");
vap->iv_stats.is_dwds_mcast++;
m_freem(m);
/* XXX better status? */
return (ENOBUFS);
}
if (vap->iv_opmode == IEEE80211_M_HOSTAP) {
/*
* Spam DWDS vap's w/ multicast traffic.
*/
/* XXX only if dwds in use? */
ieee80211_dwds_mcast(vap, m);
}
}
#ifdef IEEE80211_SUPPORT_MESH
if (vap->iv_opmode != IEEE80211_M_MBSS) {
#endif
ni = ieee80211_find_txnode(vap, eh->ether_dhost);
if (ni == NULL) {
/* NB: ieee80211_find_txnode does stat+msg */
ifp->if_oerrors++;
m_freem(m);
/* XXX better status? */
return (ENOBUFS);
}
if (ni->ni_associd == 0 &&
(ni->ni_flags & IEEE80211_NODE_ASSOCID)) {
IEEE80211_DISCARD_MAC(vap, IEEE80211_MSG_OUTPUT,
eh->ether_dhost, NULL,
"sta not associated (type 0x%04x)",
htons(eh->ether_type));
vap->iv_stats.is_tx_notassoc++;
ifp->if_oerrors++;
m_freem(m);
ieee80211_free_node(ni);
/* XXX better status? */
return (ENOBUFS);
}
#ifdef IEEE80211_SUPPORT_MESH
} else {
if (!IEEE80211_ADDR_EQ(eh->ether_shost, vap->iv_myaddr)) {
/*
* Proxy station only if configured.
*/
if (!ieee80211_mesh_isproxyena(vap)) {
IEEE80211_DISCARD_MAC(vap,
IEEE80211_MSG_OUTPUT |
IEEE80211_MSG_MESH,
eh->ether_dhost, NULL,
"%s", "proxy not enabled");
vap->iv_stats.is_mesh_notproxy++;
ifp->if_oerrors++;
m_freem(m);
/* XXX better status? */
return (ENOBUFS);
}
IEEE80211_DPRINTF(vap, IEEE80211_MSG_OUTPUT,
"forward frame from DS SA(%6D), DA(%6D)\n",
eh->ether_shost, ":",
eh->ether_dhost, ":");
ieee80211_mesh_proxy_check(vap, eh->ether_shost);
}
ni = ieee80211_mesh_discover(vap, eh->ether_dhost, m);
if (ni == NULL) {
/*
* NB: ieee80211_mesh_discover holds/disposes
* frame (e.g. queueing on path discovery).
*/
ifp->if_oerrors++;
/* XXX better status? */
return (ENOBUFS);
}
}
#endif
if ((ni->ni_flags & IEEE80211_NODE_PWR_MGT) &&
(m->m_flags & M_PWR_SAV) == 0) {
/*
* Station in power save mode; pass the frame
* to the 802.11 layer and continue. We'll get
* the frame back when the time is right.
* XXX lose WDS vap linkage?
*/
(void) ieee80211_pwrsave(ni, m);
ieee80211_free_node(ni);
/* XXX better status? */
return (ENOBUFS);
}
/* calculate priority so drivers can find the tx queue */
if (ieee80211_classify(ni, m)) {
IEEE80211_DISCARD_MAC(vap, IEEE80211_MSG_OUTPUT,
eh->ether_dhost, NULL,
"%s", "classification failure");
vap->iv_stats.is_tx_classify++;
ifp->if_oerrors++;
m_freem(m);
ieee80211_free_node(ni);
/* XXX better status? */
return (ENOBUFS);
}
/*
* Stash the node pointer. Note that we do this after
* any call to ieee80211_dwds_mcast because that code
* uses any existing value for rcvif to identify the
* interface it (might have been) received on.
*/
m->m_pkthdr.rcvif = (void *)ni;
BPF_MTAP(ifp, m); /* 802.3 tx */
/*
* Check if A-MPDU tx aggregation is setup or if we
* should try to enable it. The sta must be associated
* with HT and A-MPDU enabled for use. When the policy
* routine decides we should enable A-MPDU we issue an
* ADDBA request and wait for a reply. The frame being
* encapsulated will go out w/o using A-MPDU, or possibly
* it might be collected by the driver and held/retransmit.
* The default ic_ampdu_enable routine handles staggering
* ADDBA requests in case the receiver NAK's us or we are
* otherwise unable to establish a BA stream.
*/
if ((ni->ni_flags & IEEE80211_NODE_AMPDU_TX) &&
(vap->iv_flags_ht & IEEE80211_FHT_AMPDU_TX) &&
(m->m_flags & M_EAPOL) == 0) {
int tid = WME_AC_TO_TID(M_WME_GETAC(m));
struct ieee80211_tx_ampdu *tap = &ni->ni_tx_ampdu[tid];
ieee80211_txampdu_count_packet(tap);
if (IEEE80211_AMPDU_RUNNING(tap)) {
/*
* Operational, mark frame for aggregation.
*
* XXX do tx aggregation here
*/
m->m_flags |= M_AMPDU_MPDU;
} else if (!IEEE80211_AMPDU_REQUESTED(tap) &&
ic->ic_ampdu_enable(ni, tap)) {
/*
* Not negotiated yet, request service.
*/
ieee80211_ampdu_request(ni, tap);
/* XXX hold frame for reply? */
}
}
#ifdef IEEE80211_SUPPORT_SUPERG
else if (IEEE80211_ATH_CAP(vap, ni, IEEE80211_NODE_FF)) {
m = ieee80211_ff_check(ni, m);
if (m == NULL) {
/* NB: any ni ref held on stageq */
/* XXX better status? */
return (ENOBUFS);
}
}
#endif /* IEEE80211_SUPPORT_SUPERG */
/*
* Grab the TX lock - serialise the TX process from this
* point (where TX state is being checked/modified)
* through to driver queue.
*/
IEEE80211_TX_LOCK(ic);
if (__predict_true((vap->iv_caps & IEEE80211_C_8023ENCAP) == 0)) {
/*
* Encapsulate the packet in prep for transmission.
*/
m = ieee80211_encap(vap, ni, m);
if (m == NULL) {
/* NB: stat+msg handled in ieee80211_encap */
IEEE80211_TX_UNLOCK(ic);
ieee80211_free_node(ni);
/* XXX better status? */
return (ENOBUFS);
}
}
error = ieee80211_parent_transmit(ic, m);
/*
* Unlock at this point - no need to hold it across
* ieee80211_free_node() (ie, the comlock)
*/
IEEE80211_TX_UNLOCK(ic);
if (error != 0) {
/* NB: IFQ_HANDOFF reclaims mbuf */
ieee80211_free_node(ni);
} else {
ifp->if_opackets++;
}
ic->ic_lastdata = ticks;
return (0);
#undef IS_DWDS
}
/*
* Start method for vap's. All packets from the stack come
* through here. We handle common processing of the packets
@ -117,16 +366,10 @@ doprint(struct ieee80211vap *vap, int subtype)
void
ieee80211_start(struct ifnet *ifp)
{
#define IS_DWDS(vap) \
(vap->iv_opmode == IEEE80211_M_WDS && \
(vap->iv_flags_ext & IEEE80211_FEXT_WDSLEGACY) == 0)
struct ieee80211vap *vap = ifp->if_softc;
struct ieee80211com *ic = vap->iv_ic;
struct ifnet *parent = ic->ic_ifp;
struct ieee80211_node *ni;
struct mbuf *m;
struct ether_header *eh;
int error;
/* NB: parent must be up and running */
if (!IFNET_IS_UP_RUNNING(parent)) {
@ -165,6 +408,7 @@ ieee80211_start(struct ifnet *ifp)
}
IEEE80211_UNLOCK(ic);
}
for (;;) {
IFQ_DEQUEUE(&ifp->if_snd, m);
if (m == NULL)
@ -180,203 +424,23 @@ ieee80211_start(struct ifnet *ifp)
*/
m->m_flags &= ~(M_80211_TX - M_PWR_SAV - M_MORE_DATA);
/*
* Cancel any background scan.
* Bump to the packet transmission path.
*/
if (ic->ic_flags & IEEE80211_F_SCAN)
ieee80211_cancel_anyscan(vap);
/*
* Find the node for the destination so we can do
* things like power save and fast frames aggregation.
*
* NB: past this point various code assumes the first
* mbuf has the 802.3 header present (and contiguous).
*/
ni = NULL;
if (m->m_len < sizeof(struct ether_header) &&
(m = m_pullup(m, sizeof(struct ether_header))) == NULL) {
IEEE80211_DPRINTF(vap, IEEE80211_MSG_OUTPUT,
"discard frame, %s\n", "m_pullup failed");
vap->iv_stats.is_tx_nobuf++; /* XXX */
ifp->if_oerrors++;
continue;
}
eh = mtod(m, struct ether_header *);
if (ETHER_IS_MULTICAST(eh->ether_dhost)) {
if (IS_DWDS(vap)) {
/*
* Only unicast frames from the above go out
* DWDS vaps; multicast frames are handled by
* dispatching the frame as it comes through
* the AP vap (see below).
*/
IEEE80211_DISCARD_MAC(vap, IEEE80211_MSG_WDS,
eh->ether_dhost, "mcast", "%s", "on DWDS");
vap->iv_stats.is_dwds_mcast++;
m_freem(m);
continue;
}
if (vap->iv_opmode == IEEE80211_M_HOSTAP) {
/*
* Spam DWDS vap's w/ multicast traffic.
*/
/* XXX only if dwds in use? */
ieee80211_dwds_mcast(vap, m);
}
}
#ifdef IEEE80211_SUPPORT_MESH
if (vap->iv_opmode != IEEE80211_M_MBSS) {
#endif
ni = ieee80211_find_txnode(vap, eh->ether_dhost);
if (ni == NULL) {
/* NB: ieee80211_find_txnode does stat+msg */
ifp->if_oerrors++;
m_freem(m);
continue;
}
if (ni->ni_associd == 0 &&
(ni->ni_flags & IEEE80211_NODE_ASSOCID)) {
IEEE80211_DISCARD_MAC(vap, IEEE80211_MSG_OUTPUT,
eh->ether_dhost, NULL,
"sta not associated (type 0x%04x)",
htons(eh->ether_type));
vap->iv_stats.is_tx_notassoc++;
ifp->if_oerrors++;
m_freem(m);
ieee80211_free_node(ni);
continue;
}
#ifdef IEEE80211_SUPPORT_MESH
} else {
if (!IEEE80211_ADDR_EQ(eh->ether_shost, vap->iv_myaddr)) {
/*
* Proxy station only if configured.
*/
if (!ieee80211_mesh_isproxyena(vap)) {
IEEE80211_DISCARD_MAC(vap,
IEEE80211_MSG_OUTPUT |
IEEE80211_MSG_MESH,
eh->ether_dhost, NULL,
"%s", "proxy not enabled");
vap->iv_stats.is_mesh_notproxy++;
ifp->if_oerrors++;
m_freem(m);
continue;
}
IEEE80211_DPRINTF(vap, IEEE80211_MSG_OUTPUT,
"forward frame from DS SA(%6D), DA(%6D)\n",
eh->ether_shost, ":",
eh->ether_dhost, ":");
ieee80211_mesh_proxy_check(vap, eh->ether_shost);
}
ni = ieee80211_mesh_discover(vap, eh->ether_dhost, m);
if (ni == NULL) {
/*
* NB: ieee80211_mesh_discover holds/disposes
* frame (e.g. queueing on path discovery).
*/
ifp->if_oerrors++;
continue;
}
}
#endif
if ((ni->ni_flags & IEEE80211_NODE_PWR_MGT) &&
(m->m_flags & M_PWR_SAV) == 0) {
/*
* Station in power save mode; pass the frame
* to the 802.11 layer and continue. We'll get
* the frame back when the time is right.
* XXX lose WDS vap linkage?
*/
(void) ieee80211_pwrsave(ni, m);
ieee80211_free_node(ni);
continue;
}
/* calculate priority so drivers can find the tx queue */
if (ieee80211_classify(ni, m)) {
IEEE80211_DISCARD_MAC(vap, IEEE80211_MSG_OUTPUT,
eh->ether_dhost, NULL,
"%s", "classification failure");
vap->iv_stats.is_tx_classify++;
ifp->if_oerrors++;
m_freem(m);
ieee80211_free_node(ni);
continue;
}
/*
* Stash the node pointer. Note that we do this after
* any call to ieee80211_dwds_mcast because that code
* uses any existing value for rcvif to identify the
* interface it (might have been) received on.
*/
m->m_pkthdr.rcvif = (void *)ni;
BPF_MTAP(ifp, m); /* 802.3 tx */
/*
* Check if A-MPDU tx aggregation is setup or if we
* should try to enable it. The sta must be associated
* with HT and A-MPDU enabled for use. When the policy
* routine decides we should enable A-MPDU we issue an
* ADDBA request and wait for a reply. The frame being
* encapsulated will go out w/o using A-MPDU, or possibly
* it might be collected by the driver and held/retransmit.
* The default ic_ampdu_enable routine handles staggering
* ADDBA requests in case the receiver NAK's us or we are
* otherwise unable to establish a BA stream.
*/
if ((ni->ni_flags & IEEE80211_NODE_AMPDU_TX) &&
(vap->iv_flags_ht & IEEE80211_FHT_AMPDU_TX) &&
(m->m_flags & M_EAPOL) == 0) {
int tid = WME_AC_TO_TID(M_WME_GETAC(m));
struct ieee80211_tx_ampdu *tap = &ni->ni_tx_ampdu[tid];
ieee80211_txampdu_count_packet(tap);
if (IEEE80211_AMPDU_RUNNING(tap)) {
/*
* Operational, mark frame for aggregation.
*
* XXX do tx aggregation here
*/
m->m_flags |= M_AMPDU_MPDU;
} else if (!IEEE80211_AMPDU_REQUESTED(tap) &&
ic->ic_ampdu_enable(ni, tap)) {
/*
* Not negotiated yet, request service.
*/
ieee80211_ampdu_request(ni, tap);
/* XXX hold frame for reply? */
}
}
#ifdef IEEE80211_SUPPORT_SUPERG
else if (IEEE80211_ATH_CAP(vap, ni, IEEE80211_NODE_FF)) {
m = ieee80211_ff_check(ni, m);
if (m == NULL) {
/* NB: any ni ref held on stageq */
continue;
}
}
#endif /* IEEE80211_SUPPORT_SUPERG */
if (__predict_true((vap->iv_caps & IEEE80211_C_8023ENCAP) == 0)) {
/*
* Encapsulate the packet in prep for transmission.
*/
m = ieee80211_encap(vap, ni, m);
if (m == NULL) {
/* NB: stat+msg handled in ieee80211_encap */
ieee80211_free_node(ni);
continue;
}
}
error = parent->if_transmit(parent, m);
if (error != 0) {
/* NB: IFQ_HANDOFF reclaims mbuf */
ieee80211_free_node(ni);
} else {
ifp->if_opackets++;
}
ic->ic_lastdata = ticks;
(void) ieee80211_start_pkt(vap, m);
/* mbuf is consumed here */
}
#undef IS_DWDS
}
/*
* 802.11 raw output routine.
*/
int
ieee80211_raw_output(struct ieee80211vap *vap, struct ieee80211_node *ni,
struct mbuf *m, const struct ieee80211_bpf_params *params)
{
struct ieee80211com *ic = vap->iv_ic;
return (ic->ic_raw_xmit(ni, m, params));
}
/*
@ -392,7 +456,9 @@ ieee80211_output(struct ifnet *ifp, struct mbuf *m,
struct ieee80211_node *ni = NULL;
struct ieee80211vap *vap;
struct ieee80211_frame *wh;
struct ieee80211com *ic = NULL;
int error;
int ret;
IFQ_LOCK(&ifp->if_snd);
if (ifp->if_drv_flags & IFF_DRV_OACTIVE) {
@ -409,6 +475,7 @@ ieee80211_output(struct ifnet *ifp, struct mbuf *m,
}
IFQ_UNLOCK(&ifp->if_snd);
vap = ifp->if_softc;
ic = vap->iv_ic;
/*
* Hand to the 802.3 code if not tagged as
* a raw 802.11 frame.
@ -489,15 +556,19 @@ ieee80211_output(struct ifnet *ifp, struct mbuf *m,
/* NB: ieee80211_encap does not include 802.11 header */
IEEE80211_NODE_STAT_ADD(ni, tx_bytes, m->m_pkthdr.len);
IEEE80211_TX_LOCK(ic);
/*
* NB: DLT_IEEE802_11_RADIO identifies the parameters are
* present by setting the sa_len field of the sockaddr (yes,
* this is a hack).
* NB: we assume sa_data is suitably aligned to cast.
*/
return vap->iv_ic->ic_raw_xmit(ni, m,
ret = ieee80211_raw_output(vap, ni, m,
(const struct ieee80211_bpf_params *)(dst->sa_len ?
dst->sa_data : NULL));
IEEE80211_TX_UNLOCK(ic);
return (ret);
bad:
if (m != NULL)
m_freem(m);
@ -526,8 +597,11 @@ ieee80211_send_setup(
struct ieee80211vap *vap = ni->ni_vap;
struct ieee80211_tx_ampdu *tap;
struct ieee80211_frame *wh = mtod(m, struct ieee80211_frame *);
struct ieee80211com *ic = ni->ni_ic;
ieee80211_seq seqno;
IEEE80211_TX_LOCK_ASSERT(ic);
wh->i_fc[0] = IEEE80211_FC0_VERSION_0 | type;
if ((type & IEEE80211_FC0_TYPE_MASK) == IEEE80211_FC0_TYPE_DATA) {
switch (vap->iv_opmode) {
@ -621,6 +695,7 @@ ieee80211_mgmt_output(struct ieee80211_node *ni, struct mbuf *m, int type,
struct ieee80211vap *vap = ni->ni_vap;
struct ieee80211com *ic = ni->ni_ic;
struct ieee80211_frame *wh;
int ret;
KASSERT(ni != NULL, ("null node"));
@ -642,6 +717,8 @@ ieee80211_mgmt_output(struct ieee80211_node *ni, struct mbuf *m, int type,
return ENOMEM;
}
IEEE80211_TX_LOCK(ic);
wh = mtod(m, struct ieee80211_frame *);
ieee80211_send_setup(ni, m,
IEEE80211_FC0_TYPE_MGT | type, IEEE80211_NONQOS_TID,
@ -670,7 +747,9 @@ ieee80211_mgmt_output(struct ieee80211_node *ni, struct mbuf *m, int type,
#endif
IEEE80211_NODE_STAT(ni, tx_mgmt);
return ic->ic_raw_xmit(ni, m, params);
ret = ieee80211_raw_output(vap, ni, m, params);
IEEE80211_TX_UNLOCK(ic);
return (ret);
}
/*
@ -694,6 +773,7 @@ ieee80211_send_nulldata(struct ieee80211_node *ni)
struct ieee80211_frame *wh;
int hdrlen;
uint8_t *frm;
int ret;
if (vap->iv_state == IEEE80211_S_CAC) {
IEEE80211_NOTE(vap, IEEE80211_MSG_OUTPUT | IEEE80211_MSG_DOTH,
@ -729,6 +809,8 @@ ieee80211_send_nulldata(struct ieee80211_node *ni)
return ENOMEM;
}
IEEE80211_TX_LOCK(ic);
wh = mtod(m, struct ieee80211_frame *); /* NB: a little lie */
if (ni->ni_flags & IEEE80211_NODE_QOS) {
const int tid = WME_AC_TO_TID(WME_AC_BE);
@ -771,7 +853,9 @@ ieee80211_send_nulldata(struct ieee80211_node *ni)
ieee80211_chan2ieee(ic, ic->ic_curchan),
wh->i_fc[1] & IEEE80211_FC1_PWR_MGT ? "ena" : "dis");
return ic->ic_raw_xmit(ni, m, NULL);
ret = ieee80211_raw_output(vap, ni, m, NULL);
IEEE80211_TX_UNLOCK(ic);
return (ret);
}
/*
@ -1034,6 +1118,8 @@ ieee80211_encap(struct ieee80211vap *vap, struct ieee80211_node *ni,
ieee80211_seq seqno;
int meshhdrsize, meshae;
uint8_t *qos;
IEEE80211_TX_LOCK_ASSERT(ic);
/*
* Copy existing Ethernet header to a safe place. The
@ -1806,6 +1892,7 @@ ieee80211_send_probereq(struct ieee80211_node *ni,
const struct ieee80211_rateset *rs;
struct mbuf *m;
uint8_t *frm;
int ret;
if (vap->iv_state == IEEE80211_S_CAC) {
IEEE80211_NOTE(vap, IEEE80211_MSG_OUTPUT, ni,
@ -1878,6 +1965,7 @@ ieee80211_send_probereq(struct ieee80211_node *ni,
return ENOMEM;
}
IEEE80211_TX_LOCK(ic);
wh = mtod(m, struct ieee80211_frame *);
ieee80211_send_setup(ni, m,
IEEE80211_FC0_TYPE_MGT | IEEE80211_FC0_SUBTYPE_PROBE_REQ,
@ -1905,7 +1993,9 @@ ieee80211_send_probereq(struct ieee80211_node *ni,
} else
params.ibp_try0 = tp->maxretry;
params.ibp_power = ni->ni_txpower;
return ic->ic_raw_xmit(ni, m, &params);
ret = ieee80211_raw_output(vap, ni, m, &params);
IEEE80211_TX_UNLOCK(ic);
return (ret);
}
/*
@ -2474,6 +2564,7 @@ ieee80211_send_proberesp(struct ieee80211vap *vap,
struct ieee80211com *ic = vap->iv_ic;
struct ieee80211_frame *wh;
struct mbuf *m;
int ret;
if (vap->iv_state == IEEE80211_S_CAC) {
IEEE80211_NOTE(vap, IEEE80211_MSG_OUTPUT, bss,
@ -2502,6 +2593,7 @@ ieee80211_send_proberesp(struct ieee80211vap *vap,
M_PREPEND(m, sizeof(struct ieee80211_frame), M_NOWAIT);
KASSERT(m != NULL, ("no room for header"));
IEEE80211_TX_LOCK(ic);
wh = mtod(m, struct ieee80211_frame *);
ieee80211_send_setup(bss, m,
IEEE80211_FC0_TYPE_MGT | IEEE80211_FC0_SUBTYPE_PROBE_RESP,
@ -2517,7 +2609,9 @@ ieee80211_send_proberesp(struct ieee80211vap *vap,
legacy ? " <legacy>" : "");
IEEE80211_NODE_STAT(bss, tx_mgmt);
return ic->ic_raw_xmit(bss, m, NULL);
ret = ieee80211_raw_output(vap, bss, m, NULL);
IEEE80211_TX_UNLOCK(ic);
return (ret);
}
/*


@ -413,6 +413,7 @@ static void
pwrsave_flushq(struct ieee80211_node *ni)
{
struct ieee80211_psq *psq = &ni->ni_psq;
struct ieee80211com *ic = ni->ni_ic;
struct ieee80211vap *vap = ni->ni_vap;
struct ieee80211_psq_head *qhead;
struct ifnet *parent, *ifp;
@ -463,7 +464,7 @@ pwrsave_flushq(struct ieee80211_node *ni)
* For encaped frames, we need to free the node
* reference upon failure.
*/
if (parent->if_transmit(parent, m) != 0)
if (ieee80211_parent_transmit(ic, m) != 0)
ieee80211_free_node(ni);
}
}
@ -475,7 +476,7 @@ pwrsave_flushq(struct ieee80211_node *ni)
ifp_q = m->m_nextpkt;
KASSERT((!(m->m_flags & M_ENCAP)),
("%s: vapq with M_ENCAP frame!\n", __func__));
(void) ifp->if_transmit(ifp, m);
(void) ieee80211_vap_transmit(vap, m);
}
}
}


@ -98,10 +98,12 @@ int ieee80211_raw_xmit(struct ieee80211_node *, struct mbuf *,
const struct ieee80211_bpf_params *);
int ieee80211_output(struct ifnet *, struct mbuf *,
struct sockaddr *, struct route *ro);
int ieee80211_raw_output(struct ieee80211vap *, struct ieee80211_node *,
struct mbuf *, const struct ieee80211_bpf_params *);
void ieee80211_send_setup(struct ieee80211_node *, struct mbuf *, int, int,
const uint8_t [IEEE80211_ADDR_LEN], const uint8_t [IEEE80211_ADDR_LEN],
const uint8_t [IEEE80211_ADDR_LEN]);
void ieee80211_start(struct ifnet *);
void ieee80211_start(struct ifnet *ifp);
int ieee80211_send_nulldata(struct ieee80211_node *);
int ieee80211_classify(struct ieee80211_node *, struct mbuf *m);
struct mbuf *ieee80211_mbuf_adjust(struct ieee80211vap *, int,


@ -501,15 +501,17 @@ static void
ff_transmit(struct ieee80211_node *ni, struct mbuf *m)
{
struct ieee80211vap *vap = ni->ni_vap;
struct ieee80211com *ic = ni->ni_ic;
int error;
IEEE80211_TX_LOCK_ASSERT(vap->iv_ic);
/* encap and xmit */
m = ieee80211_encap(vap, ni, m);
if (m != NULL) {
struct ifnet *ifp = vap->iv_ifp;
struct ifnet *parent = ni->ni_ic->ic_ifp;
error = parent->if_transmit(parent, m);
error = ieee80211_parent_transmit(ic, m);
if (error != 0) {
/* NB: IFQ_HANDOFF reclaims mbuf */
ieee80211_free_node(ni);
@ -532,6 +534,8 @@ ff_flush(struct mbuf *head, struct mbuf *last)
struct ieee80211_node *ni;
struct ieee80211vap *vap;
IEEE80211_TX_LOCK_ASSERT(vap->iv_ic);
for (m = head; m != last; m = next) {
next = m->m_nextpkt;
m->m_nextpkt = NULL;
@ -590,7 +594,9 @@ ieee80211_ff_age(struct ieee80211com *ic, struct ieee80211_stageq *sq,
M_AGE_SUB(m, quanta);
IEEE80211_UNLOCK(ic);
IEEE80211_TX_LOCK(ic);
ff_flush(head, m);
IEEE80211_TX_UNLOCK(ic);
}
static void
@ -679,6 +685,8 @@ ieee80211_ff_check(struct ieee80211_node *ni, struct mbuf *m)
struct mbuf *mstaged;
uint32_t txtime, limit;
IEEE80211_TX_UNLOCK_ASSERT(ic);
/*
* Check if the supplied frame can be aggregated.
*
@ -734,10 +742,12 @@ ieee80211_ff_check(struct ieee80211_node *ni, struct mbuf *m)
IEEE80211_UNLOCK(ic);
if (mstaged != NULL) {
IEEE80211_TX_LOCK(ic);
IEEE80211_NOTE(vap, IEEE80211_MSG_SUPERG, ni,
"%s: flush staged frame", __func__);
/* encap and xmit */
ff_transmit(ni, mstaged);
IEEE80211_TX_UNLOCK(ic);
}
return m; /* NB: original frame */
}


@ -118,6 +118,7 @@ struct ieee80211_frame;
struct ieee80211com {
struct ifnet *ic_ifp; /* associated device */
ieee80211_com_lock_t ic_comlock; /* state update lock */
ieee80211_tx_lock_t ic_txlock; /* ic/vap TX lock */
TAILQ_HEAD(, ieee80211vap) ic_vaps; /* list of vap instances */
int ic_headroom; /* driver tx headroom needs */
enum ieee80211_phytype ic_phytype; /* XXX wrong for multi-mode */


@ -232,7 +232,6 @@ void
ieee80211_dwds_mcast(struct ieee80211vap *vap0, struct mbuf *m)
{
struct ieee80211com *ic = vap0->iv_ic;
struct ifnet *parent = ic->ic_ifp;
const struct ether_header *eh = mtod(m, const struct ether_header *);
struct ieee80211_node *ni;
struct ieee80211vap *vap;
@ -296,7 +295,7 @@ ieee80211_dwds_mcast(struct ieee80211vap *vap0, struct mbuf *m)
mcopy->m_flags |= M_MCAST;
mcopy->m_pkthdr.rcvif = (void *) ni;
err = parent->if_transmit(parent, mcopy);
err = ieee80211_parent_transmit(ic, mcopy);
if (err) {
/* NB: IFQ_HANDOFF reclaims mbuf */
ifp->if_oerrors++;