2009-03-14 16:06:06 +00:00

Rework MAC Framework synchronization in a number of ways in order to
improve performance:
- Eliminate the custom reference count and condition variable used to
  monitor threads entering the framework, as this both had significant
  overhead and behaved badly in the face of contention.
- Replace the reference count with two locks: an rwlock and an sx lock,
  which will be read-acquired by threads entering the framework
  depending on whether a given policy entry point is permitted to sleep
  or not.
- Replace the previous mutex locking of the reference count for
  exclusive access with write-acquiring both the policy list sx and rw
  locks, which occurs only when policies are attached or detached.
- Do a lockless read of the dynamic policy list head before acquiring
  any locks in order to reduce overhead when no dynamic policies are
  loaded; this is a race we can afford to lose.
- For every policy entry point invocation, decide whether sleeping is
  permitted, and if not, use a _NOSLEEP() variant of the composition
  macros, which will use the rwlock instead of the sx lock.  In some
  cases, we decide which to use based on allocation flags passed to the
  MAC Framework entry point.

As with the move to rwlocks/rmlocks in pfil, this may trigger witness
warnings, but these should (generally) be false positives, as all
acquisition of the locks is for read with two very narrow exceptions
for policy load/unload, and those code blocks should never acquire
other locks.

Sponsored by:	Google, Inc.
Obtained from:	TrustedBSD Project
Discussed with:	csjp (idea, not specific patch)

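The locking scheme described above can be summarized with a short sketch.  The
identifiers below (mac_policy_rw, mac_policy_sx, mac_policy_list, and the
*_SKETCH macro) are illustrative assumptions rather than the framework's
actual names; only the pattern is taken from the log message: shared
acquisition of the sx lock for sleepable entry points, shared acquisition of
the rwlock for no-sleep entry points, exclusive acquisition of both locks only
on policy attach/detach, and an unlocked emptiness check of the dynamic policy
list as the fast path.

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/queue.h>
#include <sys/rwlock.h>
#include <sys/sx.h>

struct mac_policy_conf;			/* Opaque here; defined by the framework. */

/* Illustrative lock and list names; initialized at framework startup. */
static struct rwlock	mac_policy_rw;	/* Read-held by no-sleep entry points. */
static struct sx	mac_policy_sx;	/* Read-held by sleepable entry points. */
static LIST_HEAD(, mac_policy_conf) mac_policy_list;

/* Sleepable entry points take the sx lock shared... */
#define	MAC_POLICY_SLOCK_SLEEPABLE()	sx_slock(&mac_policy_sx)
#define	MAC_POLICY_SUNLOCK_SLEEPABLE()	sx_sunlock(&mac_policy_sx)

/* ...no-sleep entry points take the rwlock shared instead. */
#define	MAC_POLICY_SLOCK_NOSLEEP()	rw_rlock(&mac_policy_rw)
#define	MAC_POLICY_SUNLOCK_NOSLEEP()	rw_runlock(&mac_policy_rw)

/* Policy attach/detach is the only writer and takes both locks exclusively. */
#define	MAC_POLICY_XLOCK() do {						\
	sx_xlock(&mac_policy_sx);					\
	rw_wlock(&mac_policy_rw);					\
} while (0)

/*
 * Composition fast path: check the dynamic policy list head with no lock
 * held.  If no dynamic policies are loaded, skip locking entirely; a policy
 * loaded between the check and the lock is simply not consulted this time,
 * which is the race described above as affordable.
 */
#define	MAC_POLICY_PERFORM_NOSLEEP_SKETCH(op) do {			\
	if (!LIST_EMPTY(&mac_policy_list)) {				\
		MAC_POLICY_SLOCK_NOSLEEP();				\
		/* ...invoke each registered policy's op hook... */	\
		MAC_POLICY_SUNLOCK_NOSLEEP();				\
	}								\
} while (0)

In the file below, the same idea surfaces as the MAC_POLICY_CHECK() /
MAC_POLICY_CHECK_NOSLEEP() and MAC_POLICY_PERFORM_NOSLEEP() invocations.
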
/*-
 * Copyright (c) 2007-2009 Robert N. M. Watson
 * All rights reserved.
 *
 * This software was developed by Robert Watson for the TrustedBSD Project.
 *
 * This software was developed at the University of Cambridge Computer
 * Laboratory with support from a grant from Google, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_mac.h"

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/sbuf.h>
#include <sys/systm.h>
#include <sys/mount.h>
#include <sys/file.h>
#include <sys/namei.h>
#include <sys/protosw.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <sys/sysctl.h>

#include <net/if.h>
#include <net/if_var.h>

#include <netinet/in.h>
#include <netinet/ip6.h>
#include <netinet6/ip6_var.h>

#include <security/mac/mac_framework.h>
#include <security/mac/mac_internal.h>
#include <security/mac/mac_policy.h>

/*
 * Allocate and initialize a label for an IPv6 fragment reassembly queue.
 * The allocation flag determines whether policy label initialization may
 * sleep; if not, the _NOSLEEP() check variant is used.
 */
static struct label *
mac_ip6q_label_alloc(int flag)
{
	struct label *label;
	int error;

	label = mac_labelzone_alloc(flag);
	if (label == NULL)
		return (NULL);

	if (flag & M_WAITOK)
		MAC_POLICY_CHECK(ip6q_init_label, label, flag);
	else
		MAC_POLICY_CHECK_NOSLEEP(ip6q_init_label, label, flag);
	if (error) {
		MAC_POLICY_PERFORM_NOSLEEP(ip6q_destroy_label, label);
		mac_labelzone_free(label);
		return (NULL);
	}
	return (label);
}

int
mac_ip6q_init(struct ip6q *q6, int flag)
{

	if (mac_labeled & MPC_OBJECT_IP6Q) {
		q6->ip6q_label = mac_ip6q_label_alloc(flag);
		if (q6->ip6q_label == NULL)
			return (ENOMEM);
	} else
		q6->ip6q_label = NULL;
	return (0);
}

static void
mac_ip6q_label_free(struct label *label)
{

	MAC_POLICY_PERFORM_NOSLEEP(ip6q_destroy_label, label);
	mac_labelzone_free(label);
}

void
mac_ip6q_destroy(struct ip6q *q6)
{

	if (q6->ip6q_label != NULL) {
		mac_ip6q_label_free(q6->ip6q_label);
		q6->ip6q_label = NULL;
	}
}

void
mac_ip6q_reassemble(struct ip6q *q6, struct mbuf *m)
{
	struct label *label;

	/* No policies registered: nothing to do. */
	if (mac_policy_count == 0)
		return;

	label = mac_mbuf_to_label(m);

	MAC_POLICY_PERFORM_NOSLEEP(ip6q_reassemble, q6, q6->ip6q_label, m,
	    label);
}

void
mac_ip6q_create(struct mbuf *m, struct ip6q *q6)
{
	struct label *label;

	if (mac_policy_count == 0)
		return;

	label = mac_mbuf_to_label(m);

	MAC_POLICY_PERFORM_NOSLEEP(ip6q_create, m, label, q6,
	    q6->ip6q_label);
}

int
mac_ip6q_match(struct mbuf *m, struct ip6q *q6)
{
	struct label *label;
	int result;

	if (mac_policy_count == 0)
		return (1);

	label = mac_mbuf_to_label(m);

	/* Default to a match; every policy must agree for it to stand. */
	result = 1;
	MAC_POLICY_BOOLEAN_NOSLEEP(ip6q_match, &&, m, label, q6,
	    q6->ip6q_label);

	return (result);
}

void
mac_ip6q_update(struct mbuf *m, struct ip6q *q6)
{
	struct label *label;

	if (mac_policy_count == 0)
		return;

	label = mac_mbuf_to_label(m);

	MAC_POLICY_PERFORM_NOSLEEP(ip6q_update, m, label, q6,
	    q6->ip6q_label);
}

void
mac_netinet6_nd6_send(struct ifnet *ifp, struct mbuf *m)
{
	struct label *mlabel;

	if (mac_policy_count == 0)
		return;

	mlabel = mac_mbuf_to_label(m);

	MAC_POLICY_PERFORM_NOSLEEP(netinet6_nd6_send, ifp, ifp->if_label, m,
	    mlabel);
}
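
For orientation, here is a hedged sketch of how a consumer of the ip6q entry
points above, such as the IPv6 fragment reassembly path, might call them for
one arriving fragment.  The helper name, its error values, and the
first_fragment flag are hypothetical; a real caller would additionally invoke
mac_ip6q_reassemble() and mac_ip6q_destroy() once the complete packet has been
rebuilt.

/*
 * Hypothetical caller-side helper, for illustration only: label handling
 * for one arriving fragment.  Error values and structure are assumptions,
 * not taken from any particular caller.
 */
static int
example_ip6q_mac_hooks(struct ip6q *q6, struct mbuf *m, int first_fragment)
{

	if (first_fragment) {
		/* New queue: allocate its label without sleeping. */
		if (mac_ip6q_init(q6, M_NOWAIT) != 0)
			return (ENOMEM);
		/* Seed the queue label from the first fragment's mbuf. */
		mac_ip6q_create(m, q6);
	} else {
		/* Policies may veto mixing this fragment into the queue. */
		if (mac_ip6q_match(m, q6) == 0)
			return (EACCES);
		/* Fold the fragment's label into the queue label. */
		mac_ip6q_update(m, q6);
	}
	return (0);
}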