freebsd-dev/sys/security/mac/mac_internal.h

/*-
* Copyright (c) 1999-2002, 2006, 2009 Robert N. M. Watson
* Copyright (c) 2001 Ilmar S. Habibulin
* Copyright (c) 2001-2004 Networks Associates Technology, Inc.
* Copyright (c) 2006 nCircle Network Security, Inc.
* Copyright (c) 2006 SPARTA, Inc.
* Copyright (c) 2009 Apple, Inc.
* All rights reserved.
*
* This software was developed by Robert Watson and Ilmar Habibulin for the
* TrustedBSD Project.
*
* This software was developed for the FreeBSD Project in part by Network
* Associates Laboratories, the Security Research Division of Network
* Associates, Inc. under DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"),
* as part of the DARPA CHATS research program.
*
* This software was developed by Robert N. M. Watson for the TrustedBSD
* Project under contract to nCircle Network Security, Inc.
*
* This software was enhanced by SPARTA ISSO under SPAWAR contract
* N66001-04-C-6019 ("SEFOS").
*
* This software was developed at the University of Cambridge Computer
* Laboratory with support from a grant from Google, Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD$
*/
#ifndef _SECURITY_MAC_MAC_INTERNAL_H_
#define _SECURITY_MAC_MAC_INTERNAL_H_
#ifndef _KERNEL
#error "no user-serviceable parts inside"
#endif
#include <sys/lock.h>
#include <sys/rmlock.h>
/*
* MAC Framework sysctl namespace.
*/
#ifdef SYSCTL_DECL
SYSCTL_DECL(_security_mac);
#endif /* SYSCTL_DECL */
/*
* MAC Framework SDT DTrace probe namespace, macros for declaring entry
* point probes, macros for invoking them.
*/
#ifdef SDT_PROVIDER_DECLARE
SDT_PROVIDER_DECLARE(mac); /* MAC Framework-level events. */
SDT_PROVIDER_DECLARE(mac_framework); /* Entry points to MAC. */
#define MAC_CHECK_PROBE_DEFINE4(name, arg0, arg1, arg2, arg3) \
SDT_PROBE_DEFINE5(mac_framework, kernel, name, mac_check_err, \
"int", arg0, arg1, arg2, arg3); \
SDT_PROBE_DEFINE5(mac_framework, kernel, name, mac_check_ok, \
"int", arg0, arg1, arg2, arg3);
#define MAC_CHECK_PROBE_DEFINE3(name, arg0, arg1, arg2) \
SDT_PROBE_DEFINE4(mac_framework, kernel, name, mac_check_err, \
"int", arg0, arg1, arg2); \
SDT_PROBE_DEFINE4(mac_framework, kernel, name, mac_check_ok, \
"int", arg0, arg1, arg2);
#define MAC_CHECK_PROBE_DEFINE2(name, arg0, arg1) \
SDT_PROBE_DEFINE3(mac_framework, kernel, name, mac_check_err, \
"int", arg0, arg1); \
SDT_PROBE_DEFINE3(mac_framework, kernel, name, mac_check_ok, \
"int", arg0, arg1);
#define MAC_CHECK_PROBE_DEFINE1(name, arg0) \
SDT_PROBE_DEFINE2(mac_framework, kernel, name, mac_check_err, \
"int", arg0); \
SDT_PROBE_DEFINE2(mac_framework, kernel, name, mac_check_ok, \
"int", arg0);
#define MAC_CHECK_PROBE4(name, error, arg0, arg1, arg2, arg3) do { \
if (error) { \
SDT_PROBE(mac_framework, kernel, name, mac_check_err, \
error, arg0, arg1, arg2, arg3); \
} else { \
SDT_PROBE(mac_framework, kernel, name, mac_check_ok, \
0, arg0, arg1, arg2, arg3); \
} \
} while (0)
#define MAC_CHECK_PROBE3(name, error, arg0, arg1, arg2) \
MAC_CHECK_PROBE4(name, error, arg0, arg1, arg2, 0)
#define MAC_CHECK_PROBE2(name, error, arg0, arg1) \
MAC_CHECK_PROBE3(name, error, arg0, arg1, 0)
#define MAC_CHECK_PROBE1(name, error, arg0) \
MAC_CHECK_PROBE2(name, error, arg0, 0)
#endif
#define MAC_GRANT_PROBE_DEFINE2(name, arg0, arg1) \
SDT_PROBE_DEFINE3(mac_framework, kernel, name, mac_grant_err, \
"int", arg0, arg1); \
SDT_PROBE_DEFINE3(mac_framework, kernel, name, mac_grant_ok, \
"INT", arg0, arg1);
#define MAC_GRANT_PROBE2(name, error, arg0, arg1) do { \
if (error) { \
SDT_PROBE(mac_framework, kernel, name, mac_grant_err, \
error, arg0, arg1, 0, 0); \
} else { \
SDT_PROBE(mac_framework, kernel, name, mac_grant_ok, \
error, arg0, arg1, 0, 0); \
} \
} while (0)
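/*
 * Illustrative use of the probe macros (a sketch; "cred_check_example" and
 * its arguments are hypothetical): an entry-point implementation pairs a
 * file-scope probe definition with a probe invocation issued after the
 * policies' verdicts have been composed into 'error':
 *
 *	MAC_CHECK_PROBE_DEFINE2(cred_check_example, "struct ucred *",
 *	    "struct label *");
 *
 *	MAC_CHECK_PROBE2(cred_check_example, error, cred, newlabel);
 *
 * MAC_GRANT_PROBE_DEFINE2() and MAC_GRANT_PROBE2() are used the same way by
 * grant-style entry points.
 */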
/*
* MAC Framework global types and typedefs.
*/
LIST_HEAD(mac_policy_list_head, mac_policy_conf);
#ifdef MALLOC_DECLARE
MALLOC_DECLARE(M_MACTEMP);
#endif
/*
* MAC labels -- in-kernel storage format.
*
* In general, struct label pointers are embedded in kernel data structures
* representing objects that may be labeled (and protected). Struct label is
* opaque to both kernel services that invoke the MAC Framework and MAC
* policy modules. In particular, we do not wish to encode the layout of the
* label structure into any ABIs. Historically, the slot array contained
* unions of {long, void *} but now contains uintptr_t.
*/
#define MAC_MAX_SLOTS 4
#define MAC_FLAG_INITIALIZED 0x0000001 /* Is initialized for use. */
struct label {
int l_flags;
intptr_t l_perpolicy[MAC_MAX_SLOTS];
};
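/*
 * Illustrative per-policy slot use (a sketch; 'example_slot' is a
 * hypothetical slot number obtained from mac_allocate_slot(), and the
 * mac_label_get()/mac_label_set() accessors are assumed to come from
 * mac_policy.h): a policy records its per-object state as an intptr_t in
 * its assigned slot:
 *
 *	mac_label_set(label, example_slot, (intptr_t)data);
 *	data = (void *)mac_label_get(label, example_slot);
 */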
/*
* Flags for mac_labeled, a bitmask of object types needed across the union of
* all policies currently registered with the MAC Framework, used to key
* whether or not labels are allocated and constructors for the type are
* invoked.
*/
#define MPC_OBJECT_CRED 0x0000000000000001
#define MPC_OBJECT_PROC 0x0000000000000002
#define MPC_OBJECT_VNODE 0x0000000000000004
#define MPC_OBJECT_INPCB 0x0000000000000008
#define MPC_OBJECT_SOCKET 0x0000000000000010
#define MPC_OBJECT_DEVFS 0x0000000000000020
#define MPC_OBJECT_MBUF 0x0000000000000040
#define MPC_OBJECT_IPQ 0x0000000000000080
#define MPC_OBJECT_IFNET 0x0000000000000100
#define MPC_OBJECT_BPFDESC 0x0000000000000200
#define MPC_OBJECT_PIPE 0x0000000000000400
#define MPC_OBJECT_MOUNT 0x0000000000000800
#define MPC_OBJECT_POSIXSEM 0x0000000000001000
#define MPC_OBJECT_POSIXSHM 0x0000000000002000
#define MPC_OBJECT_SYSVMSG 0x0000000000004000
#define MPC_OBJECT_SYSVMSQ 0x0000000000008000
#define MPC_OBJECT_SYSVSEM 0x0000000000010000
#define MPC_OBJECT_SYSVSHM 0x0000000000020000
#define MPC_OBJECT_SYNCACHE 0x0000000000040000
#define MPC_OBJECT_IP6Q 0x0000000000080000
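/*
 * Illustrative test of mac_labeled (a sketch, not framework code verbatim):
 * object-initialization paths allocate label storage only if at least one
 * registered policy has declared an interest in that object type:
 *
 *	if (mac_labeled & MPC_OBJECT_VNODE)
 *		vp->v_label = mac_vnode_label_alloc();
 *	else
 *		vp->v_label = NULL;
 */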
/*
* MAC Framework global variables.
*/
extern struct mac_policy_list_head mac_policy_list;
extern struct mac_policy_list_head mac_static_policy_list;
extern u_int mac_policy_count;
extern uint64_t mac_labeled;
extern struct mtx mac_ifnet_mtx;
/*
* MAC Framework infrastructure functions.
*/
int mac_error_select(int error1, int error2);
void mac_policy_slock_nosleep(struct rm_priotracker *tracker);
void mac_policy_slock_sleep(void);
void mac_policy_sunlock_nosleep(struct rm_priotracker *tracker);
void mac_policy_sunlock_sleep(void);
struct label *mac_labelzone_alloc(int flags);
void mac_labelzone_free(struct label *label);
void mac_labelzone_init(void);
void mac_init_label(struct label *label);
void mac_destroy_label(struct label *label);
int mac_check_structmac_consistent(struct mac *mac);
int mac_allocate_slot(void);
#define MAC_IFNET_LOCK(ifp) mtx_lock(&mac_ifnet_mtx)
#define MAC_IFNET_UNLOCK(ifp) mtx_unlock(&mac_ifnet_mtx)
/*
* MAC Framework per-object type functions. It's not yet clear how the
* namespaces, etc, should work for these, so for now, sort by object type.
*/
struct label *mac_cred_label_alloc(void);
void mac_cred_label_free(struct label *label);
struct label *mac_pipe_label_alloc(void);
void mac_pipe_label_free(struct label *label);
struct label *mac_socket_label_alloc(int flag);
void mac_socket_label_free(struct label *label);
struct label *mac_vnode_label_alloc(void);
void mac_vnode_label_free(struct label *label);
int mac_cred_check_relabel(struct ucred *cred, struct label *newlabel);
int mac_cred_externalize_label(struct label *label, char *elements,
char *outbuf, size_t outbuflen);
int mac_cred_internalize_label(struct label *label, char *string);
void mac_cred_relabel(struct ucred *cred, struct label *newlabel);
struct label *mac_mbuf_to_label(struct mbuf *m);
void mac_pipe_copy_label(struct label *src, struct label *dest);
int mac_pipe_externalize_label(struct label *label, char *elements,
char *outbuf, size_t outbuflen);
int mac_pipe_internalize_label(struct label *label, char *string);
int mac_socket_label_set(struct ucred *cred, struct socket *so,
struct label *label);
void mac_socket_copy_label(struct label *src, struct label *dest);
int mac_socket_externalize_label(struct label *label, char *elements,
char *outbuf, size_t outbuflen);
int mac_socket_internalize_label(struct label *label, char *string);
int mac_vnode_externalize_label(struct label *label, char *elements,
char *outbuf, size_t outbuflen);
int mac_vnode_internalize_label(struct label *label, char *string);
void mac_vnode_check_mmap_downgrade(struct ucred *cred, struct vnode *vp,
int *prot);
int vn_setlabel(struct vnode *vp, struct label *intlabel,
struct ucred *cred);
/*
* MAC Framework composition macros invoke all registered MAC policies for a
* specific entry point. They come in two forms: one which permits policies
* to sleep/block, and another that does not.
*
* MAC_POLICY_CHECK performs the designated check by walking the policy
* module list and checking with each as to how it feels about the request.
* Note that it returns its value via 'error' in the scope of the caller.
*/
#define MAC_POLICY_CHECK(check, args...) do { \
struct mac_policy_conf *mpc; \
\
error = 0; \
LIST_FOREACH(mpc, &mac_static_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## check != NULL) \
error = mac_error_select( \
mpc->mpc_ops->mpo_ ## check (args), \
error); \
} \
if (!LIST_EMPTY(&mac_policy_list)) { \
mac_policy_slock_sleep(); \
LIST_FOREACH(mpc, &mac_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## check != NULL) \
error = mac_error_select( \
mpc->mpc_ops->mpo_ ## check (args), \
error); \
} \
mac_policy_sunlock_sleep(); \
} \
} while (0)
#define MAC_POLICY_CHECK_NOSLEEP(check, args...) do { \
struct mac_policy_conf *mpc; \
\
error = 0; \
LIST_FOREACH(mpc, &mac_static_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## check != NULL) \
error = mac_error_select( \
mpc->mpc_ops->mpo_ ## check (args), \
error); \
} \
if (!LIST_EMPTY(&mac_policy_list)) { \
struct rm_priotracker tracker; \
\
mac_policy_slock_nosleep(&tracker); \
LIST_FOREACH(mpc, &mac_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## check != NULL) \
error = mac_error_select( \
mpc->mpc_ops->mpo_ ## check (args), \
error); \
} \
mac_policy_sunlock_nosleep(&tracker); \
} \
} while (0)
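/*
 * Illustrative entry-point skeleton (a sketch; "example_check" and its
 * argument list are hypothetical): 'error' is declared in the caller's
 * scope, written by the composition macro, and then reported through the
 * check probes:
 *
 *	int
 *	mac_example_check(struct ucred *cred, struct vnode *vp)
 *	{
 *		int error;
 *
 *		MAC_POLICY_CHECK_NOSLEEP(example_check, cred, vp,
 *		    vp->v_label);
 *		MAC_CHECK_PROBE2(example_check, error, cred, vp);
 *		return (error);
 *	}
 *
 * Sleepable contexts use MAC_POLICY_CHECK() instead, which differs only in
 * taking the sleepable form of the policy-list lock via
 * mac_policy_slock_sleep().
 */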
/*
* MAC_POLICY_GRANT performs the designated check by walking the policy
* module list and checking with each as to how it feels about the request.
* Unlike MAC_POLICY_CHECK, it grants if any policies return '0', and
* otherwise returns EPERM. Note that it returns its value via 'error' in
* the scope of the caller.
*/
#define MAC_POLICY_GRANT_NOSLEEP(check, args...) do { \
struct mac_policy_conf *mpc; \
\
error = EPERM; \
LIST_FOREACH(mpc, &mac_static_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## check != NULL) { \
if (mpc->mpc_ops->mpo_ ## check(args) == 0) \
error = 0; \
} \
} \
if (!LIST_EMPTY(&mac_policy_list)) { \
struct rm_priotracker tracker; \
\
mac_policy_slock_nosleep(&tracker); \
LIST_FOREACH(mpc, &mac_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## check != NULL) { \
if (mpc->mpc_ops->mpo_ ## check (args) \
== 0) \
error = 0; \
} \
} \
mac_policy_sunlock_nosleep(&tracker); \
} \
} while (0)
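
/*
 * Illustrative usage sketch, not part of the original header: how a MAC
 * Framework entry point might invoke the check-style composition macro
 * defined above from a context where sleeping is not permitted. The entry
 * point name and the 'priv_grant' operation below are assumptions for
 * illustration, as is the macro name MAC_POLICY_GRANT_NOSLEEP; 'error' is
 * declared in the caller's scope and is left at 0 when any policy returns 0
 * for the check. It is preset to a failure value here so the function fails
 * closed if no policy grants the privilege.
 *
 *	int
 *	example_priv_grant(struct ucred *cred, int priv)
 *	{
 *		int error;
 *
 *		error = EPERM;
 *		MAC_POLICY_GRANT_NOSLEEP(priv_grant, cred, priv);
 *		return (error);
 *	}
 */
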
/*
 * MAC_POLICY_BOOLEAN performs the designated boolean composition by walking
 * the module list, invoking each instance of the operation, and combining
 * the results using the passed C operator. Note that it returns its value
 * via 'result' in the scope of the caller, which the caller must initialize
 * to the identity value of the chosen operator (e.g., 0 for ||, 1 for &&)
 * for the composition to be meaningful. A usage sketch follows the _NOSLEEP
 * variant below.
 */
#define MAC_POLICY_BOOLEAN(operation, composition, args...) do { \
struct mac_policy_conf *mpc; \
\
LIST_FOREACH(mpc, &mac_static_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## operation != NULL) \
result = result composition \
mpc->mpc_ops->mpo_ ## operation (args); \
} \
if (!LIST_EMPTY(&mac_policy_list)) { \
mac_policy_slock_sleep(); \
LIST_FOREACH(mpc, &mac_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## operation != NULL) \
result = result composition \
mpc->mpc_ops->mpo_ ## operation \
(args); \
} \
mac_policy_sunlock_sleep(); \
} \
} while (0)

#define MAC_POLICY_BOOLEAN_NOSLEEP(operation, composition, args...) do {\
struct mac_policy_conf *mpc; \
\
LIST_FOREACH(mpc, &mac_static_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## operation != NULL) \
result = result composition \
mpc->mpc_ops->mpo_ ## operation (args); \
} \
if (!LIST_EMPTY(&mac_policy_list)) { \
struct rm_priotracker tracker; \
\
mac_policy_slock_nosleep(&tracker); \
LIST_FOREACH(mpc, &mac_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## operation != NULL) \
result = result composition \
mpc->mpc_ops->mpo_ ## operation \
(args); \
} \
mac_policy_sunlock_nosleep(&tracker); \
} \
} while (0)
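
/*
 * Illustrative usage sketch, not part of the original header: composing a
 * boolean entry point across the loaded policies with '||', so the result
 * is true if any policy answers true. The entry point, operation name, and
 * object type below are hypothetical; 'result' is initialized to 0, the
 * identity value for '||' (a '&&' composition would start from 1 instead).
 *
 *	int
 *	example_object_visible(struct ucred *cred, struct example_obj *obj)
 *	{
 *		int result;
 *
 *		result = 0;
 *		MAC_POLICY_BOOLEAN(example_object_visible, ||, cred, obj);
 *		return (result);
 *	}
 */
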
/*
 * MAC_POLICY_EXTERNALIZE queries each policy to see if it can generate an
 * externalized version of a label element by name. Policies declare whether
 * they have matched a particular element name, parsed from the element list
 * by MAC_POLICY_EXTERNALIZE, and an error is returned if an element is
 * matched by no policy, unless that element name was prefixed with '?' to
 * mark it optional. A usage sketch follows the macro.
 */
#define MAC_POLICY_EXTERNALIZE(type, label, elementlist, outbuf, \
outbuflen) do { \
int claimed, first, ignorenotfound, savedlen; \
char *element_name, *element_temp; \
struct sbuf sb; \
\
error = 0; \
first = 1; \
sbuf_new(&sb, outbuf, outbuflen, SBUF_FIXEDLEN); \
element_temp = elementlist; \
while ((element_name = strsep(&element_temp, ",")) != NULL) { \
if (element_name[0] == '?') { \
element_name++; \
ignorenotfound = 1; \
} else \
ignorenotfound = 0; \
savedlen = sbuf_len(&sb); \
if (first) \
error = sbuf_printf(&sb, "%s/", element_name); \
else \
error = sbuf_printf(&sb, ",%s/", element_name); \
if (error == -1) { \
error = EINVAL; /* XXX: E2BIG? */ \
break; \
} \
claimed = 0; \
MAC_POLICY_CHECK(type ## _externalize_label, label, \
element_name, &sb, &claimed); \
if (error) \
break; \
if (claimed == 0 && ignorenotfound) { \
/* Revert last label name. */ \
sbuf_setpos(&sb, savedlen); \
} else if (claimed != 1) { \
error = EINVAL; /* XXX: ENOLABEL? */ \
break; \
} else { \
first = 0; \
} \
} \
sbuf_finish(&sb); \
} while (0)
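
/*
 * Illustrative usage sketch, not part of the original header: externalizing
 * label elements into a caller-supplied buffer. With an element list of
 * "biba,?mls", each claiming policy appends "name/value" text and the
 * pieces are joined with commas (e.g. "biba/high,mls/low"); the '?' prefix
 * makes the "mls" element optional if no loaded policy claims it. The
 * wrapper function is hypothetical, and the 'cred' label type is assumed
 * for illustration. Note that the element list is consumed destructively by
 * strsep(), so it must be a writable string.
 *
 *	int
 *	example_externalize_cred_label(struct label *label, char *elements,
 *	    char *outbuf, size_t outbuflen)
 *	{
 *		int error;
 *
 *		MAC_POLICY_EXTERNALIZE(cred, label, elements, outbuf,
 *		    outbuflen);
 *		return (error);
 *	}
 */
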
/*
 * MAC_POLICY_INTERNALIZE presents each parsed element name and its data to
 * the policies to see if any is willing to claim the element and
 * internalize the label data. If an element is not claimed by exactly one
 * policy, an error is returned.
 */
#define MAC_POLICY_INTERNALIZE(type, label, instring) do { \
char *element, *element_name, *element_data; \
int claimed; \
\
error = 0; \
element = instring; \
while ((element_name = strsep(&element, ",")) != NULL) { \
element_data = element_name; \
element_name = strsep(&element_data, "/"); \
if (element_data == NULL) { \
error = EINVAL; \
break; \
} \
claimed = 0; \
MAC_POLICY_CHECK(type ## _internalize_label, label, \
element_name, element_data, &claimed); \
if (error) \
break; \
if (claimed != 1) { \
/* XXXMAC: Another error here? */ \
error = EINVAL; \
break; \
} \
} \
} while (0)
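
/*
 * Illustrative usage sketch, not part of the original header: internalizing
 * a label from its text form. An input such as "biba/high,mls/low" is split
 * into "name/data" elements, and each element must be claimed by exactly
 * one policy. The wrapper function is hypothetical and the 'cred' label
 * type is assumed; as with externalization, the string is modified in place
 * by strsep(), so callers pass a writable copy.
 *
 *	int
 *	example_internalize_cred_label(struct label *label, char *string)
 *	{
 *		int error;
 *
 *		MAC_POLICY_INTERNALIZE(cred, label, string);
 *		return (error);
 *	}
 */
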
/*
* MAC_POLICY_PERFORM performs the designated operation by walking the policy
* module list and invoking that operation for each policy.
*/
#define MAC_POLICY_PERFORM(operation, args...) do { \
struct mac_policy_conf *mpc; \
\
LIST_FOREACH(mpc, &mac_static_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## operation != NULL) \
mpc->mpc_ops->mpo_ ## operation (args); \
} \
if (!LIST_EMPTY(&mac_policy_list)) { \
mac_policy_slock_sleep(); \
LIST_FOREACH(mpc, &mac_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## operation != NULL) \
mpc->mpc_ops->mpo_ ## operation (args); \
} \
mac_policy_sunlock_sleep(); \
} \
} while (0)

#define MAC_POLICY_PERFORM_NOSLEEP(operation, args...) do { \
struct mac_policy_conf *mpc; \
\
LIST_FOREACH(mpc, &mac_static_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## operation != NULL) \
mpc->mpc_ops->mpo_ ## operation (args); \
} \
if (!LIST_EMPTY(&mac_policy_list)) { \
struct rm_priotracker tracker; \
\
mac_policy_slock_nosleep(&tracker); \
LIST_FOREACH(mpc, &mac_policy_list, mpc_list) { \
if (mpc->mpc_ops->mpo_ ## operation != NULL) \
mpc->mpc_ops->mpo_ ## operation (args); \
} \
mac_policy_sunlock_nosleep(&tracker); \
} \
} while (0)
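
/*
 * Illustrative usage sketch, not part of the original header: using the
 * PERFORM composition for an entry point that returns no value, such as
 * label initialization or destruction. The operation name below is
 * hypothetical; the _NOSLEEP variant is shown because such hooks are often
 * reached from contexts where sleeping is not permitted, while the
 * sleepable MAC_POLICY_PERFORM form would be used otherwise.
 *
 *	void
 *	example_label_destroy(struct label *label)
 *	{
 *
 *		MAC_POLICY_PERFORM_NOSLEEP(example_label_destroy, label);
 *	}
 */
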
#endif /* !_SECURITY_MAC_MAC_INTERNAL_H_ */