Commit Graph

181 Commits

Robert Watson
bcf11e8d00 Move "options MAC" from opt_mac.h to opt_global.h, as it's now in GENERIC
and used in a large number of files, but also because an increasing number
of incorrect uses of MAC calls were sneaking in due to copy-and-paste of
MAC-aware code without the associated opt_mac.h include.

Discussed with:	pjd
2009-06-05 14:55:22 +00:00
Robert Watson
e499bd28dd Remove unneeded include.
MFC after:	3 days
2009-06-02 15:59:46 +00:00
Alexander Motin
129c5c814d Teach m_copyback() to use the trailing space of the last mbuf in the chain. 2009-01-18 20:19:55 +00:00
Andrew Thompson
9128ec21f3 Remove the rounding of the align parameter. Alignment is up to the caller
to pass in, and rounding it here breaks tap(4) on strict-alignment machines, since
m_uiotombuf is called with ETHER_ALIGN.

Found by:	Jared Go
Reviewed by:	emax
MFC after:	3 days
2008-09-05 04:05:31 +00:00
Julian Elischer
2182c0cfbf Attempt to make the print types more friendly to other architectures.
Prodded by: Max Laier
Help from: BMS, jhb
2008-04-30 20:00:30 +00:00
Julian Elischer
6eeac1d921 Add an option (compiled out by default)
to profile outgoing packets for a number of mbuf chain
related parameters, e.g. number of mbufs and wasted space.
Will probably be refined with further work later.

Reviewed by: various
2008-04-29 21:23:21 +00:00
Ruslan Ermilov
ea26d58729 Replaced the misleading uses of the historical artefact M_TRYWAIT with M_WAIT.
Removed dead code that assumed that M_TRYWAIT can return NULL; this has not been true
since the advent of MBUMA.

Reviewed by:	arch

There are ongoing disputes as to whether we want to switch to directly using
UMA flags M_WAITOK/M_NOWAIT for mbuf(9) allocation.
2008-03-25 09:39:02 +00:00
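A minimal sketch of the resulting idiom (illustrative, not from the commit), assuming the classic two-argument m_get(9):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    static struct mbuf *
    get_data_mbuf(void)
    {
        struct mbuf *m;

        m = m_get(M_WAIT, MT_DATA);  /* sleeps; never NULL since MBUMA */
        return (m);                  /* so no NULL check is needed */
    }
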
Poul-Henning Kamp
cf827063a9 Give MEXTADD() another argument to make both void pointers to the
free function controllable, instead of passing the KVA of the buffer
storage as the first argument.

Fix all conventional users of the API to pass the KVA of the buffer
as the first argument, to make this a no-op commit.

Likely break the only non-conventional user of the API, after informing
the relevant committer.

Update the mbuf(9) manual page, which was already out of sync on
this point.

Bump __FreeBSD_version to 800016, as there is no other way to tell how
many arguments a CPP macro takes.

This paves the way for giving sendfile(9) a way to wait for the
passed storage to have been accessed before returning.

This does not affect the memory layout or size of mbufs.

Parental oversight by:	sam and rwatson.

No MFC is anticipated.
2008-02-01 19:36:27 +00:00
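A hedged sketch of the extended macro in use; the free routine, buffer, and EXT type below are illustrative, assuming the post-commit shape MEXTADD(m, buf, size, free, arg1, arg2, flags, type):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    static void
    my_ext_free(void *arg1, void *arg2)
    {
        /* Both pointers are now caller-controlled; arg1 is no longer
         * forced to be the buffer KVA by the macro itself. */
    }

    static void
    attach_buffer(struct mbuf *m, char *buf, int size)
    {
        /* Conventional users keep passing the buffer KVA as arg1,
         * which makes this commit a no-op for them. */
        MEXTADD(m, buf, size, my_ext_free, buf, NULL, 0, EXT_NET_DRV);
    }
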
Sam Leffler
eeb76a1889 promote ath_defrag to m_collapse (and retire private+unused
m_collapse from cxgb)

Reviewed by:	pyun, jhb, kmacy
MFC after:	2 weeks
2008-01-17 21:25:09 +00:00
Kip Macy
457869b973 This patch adds an M_NOFREE flag which allows one to mark an mbuf as
not being independently freeable. This allows one to embed an mbuf in
the cluster itself. This confers the benefits of the packet zone on
all cluster sizes. Embedded mbufs currently suffer from the same
limitation that packet zone mbufs do in that one cannot disconnect
them and pass them around independently of the cluster. It would
likely be possible to eliminate this limitation in the future by
adding a second reference for the mbuf itself.

Approved by: re(gnn)
2007-10-06 21:42:39 +00:00
Robert Watson
d19e16a72c Generally migrate to ANSI function headers, and remove 'register' use. 2007-05-16 20:41:08 +00:00
Kip Macy
21ee3e7aff Remove a now-invalid check from m_sanity.
Panic on m_sanity check failure with INVARIANTS.
2007-04-14 20:19:16 +00:00
Andre Oppermann
7c32173ba8 Unbreak writes of 0 bytes. Zero byte writes happen when only ancillary
control data but no payload data is passed.

Change m_uiotombuf() to return at least one empty mbuf if the requested
length was zero.  Add comment to sosend_dgram and sosend_generic().

Diagnoses by:		jhb
Regression test by:	rwatson
Pointy hat to:	andre
2007-01-22 14:50:28 +00:00
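A sketch of the guarantee this adds (modern wait-flag spelling; the five-argument m_uiotombuf from the 2006-11-02 rework below):

    /* A zero-length request still yields one (empty) mbuf. */
    m = m_uiotombuf(uio, M_NOWAIT, 0, 0, M_PKTHDR);
    if (m == NULL)
        return (ENOBUFS);  /* genuine failure, not the zero length */
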
Randall Stewart
5288989fac The prepend function did not handle non-pkthdr mbufs correctly.
It always called MH_ALIGN for small lengths being
prepended (less than MHLEN). This meant that if you did
a prepend on a non-M_PKTHDR mbuf, the system would panic on
the KASSERT in MH_ALIGN. Instead we are now aware of
this and do an MH_ALIGN or M_ALIGN as appropriate.

Reviewed by:	andre
Approved by:	gnn
2006-12-21 19:58:04 +00:00
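Condensed from the description above (not the literal diff), the corrected selection looks like:

    static void
    align_prepended(struct mbuf *m, int len)
    {
        if (m->m_flags & M_PKTHDR)
            MH_ALIGN(m, len);  /* pkthdr mbuf: the KASSERT is satisfied */
        else
            M_ALIGN(m, len);   /* plain mbuf: no pkthdr assertion */
    }
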
Andre Oppermann
5e20f43d31 Rename m_getm() to m_getm2() and rewrite it to allocate up to page sized
mbuf clusters.  Add a flags parameter to accept M_PKTHDR and M_EOR mbuf
chain flags.  Provide compatibility macro for m_getm() calling m_getm2()
with M_PKTHDR set.

Rewrite m_uiotombuf() to use m_getm2() for mbuf allocation and do the
uiomove() in a tight loop over the mbuf chain.  Add a flags parameter to
accept mbuf flags to be passed to m_getm2().  Adjust all callers for the
extra parameter.

Sponsored by:	TCP/IP Optimization Fundraise 2005
MFC after:	3 months
2006-11-02 17:37:22 +00:00
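A hedged usage sketch of both reworked functions (modern wait-flag spelling; signatures per this commit):

    /* Extend (or create, if m is NULL) a chain with room for len more
     * bytes, using page-sized clusters; M_PKTHDR applies when creating. */
    m = m_getm2(m, len, M_NOWAIT, MT_DATA, M_PKTHDR);

    /* Copy a uio into a fresh pkthdr chain in one tight loop. */
    m = m_uiotombuf(uio, M_NOWAIT, uio->uio_resid, 0, M_PKTHDR);
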
Robert Watson
aed5570872 Complete break-out of sys/sys/mac.h into sys/security/mac/mac_framework.h
begun with a repo-copy of mac.h to mac_framework.h.  sys/mac.h now
contains the userspace and user<->kernel API and definitions, with all
in-kernel interfaces moved to mac_framework.h, which is now included
across most of the kernel instead.

This change is the first step in a larger cleanup and sweep of MAC
Framework interfaces in the kernel, and will not be MFC'd.

Obtained from:	TrustedBSD Project
Sponsored by:	SPARTA
2006-10-22 11:52:19 +00:00
Randall Stewart
adf5d1c6d0 atomic_fetchadd_int() is used by mb_free_ext(), but it
returns the value the counter held before the "add" (in
this case we are adding -1), after which we compare it
to '0' to see if we free the mbuf; we should
be comparing it to '1'. Note that this only matters
under contention, since the first part of the
comparison checks whether the count is '1'. So
this bug would only crop up if two CPUs are trying
to drop the same mbuf refcount at the same time. This
will happen in SCTP, but I doubt it can happen in TCP or
UDP.
PR:		N/A
Submitted by:	rrs
Reviewed by:	gnn,sam
Approved by:	gnn,sam
2006-09-21 09:55:43 +00:00
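The fix in miniature: atomic_fetchadd_int() returns the counter's value before the add, so after adding -1 the last holder sees the old value 1, not 0. A condensed sketch of the corrected test:

    volatile u_int *refcnt = m->m_ext.ref_cnt;

    /* Uncontended fast path first; under contention, the old value
     * returned by the fetchadd decides who frees. */
    if (*refcnt == 1 || atomic_fetchadd_int(refcnt, -1) == 1) {
        /* We held the last reference: free the external storage. */
    }
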
Robert Watson
b37ffd3189 Move some functions and definitions from uipc_socket2.c to uipc_socket.c:
- Move sonewconn(), which creates new sockets for incoming connections on
  listen sockets, so that all socket allocate code is together in
  uipc_socket.c.

- Move 'maxsockets' and associated sysctls to uipc_socket.c with the
  socket allocation code.

- Move kern.ipc sysctl node to uipc_socket.c, add a SYSCTL_DECL() for it
  to sysctl.h and remove lots of scattered implementations in various
  IPC modules.

- Sort sodealloc() after soalloc() in uipc_socket.c for dependency order
  reasons.  Statisticize soalloc() and sodealloc() as they are now
  required only in uipc_socket.c, and are internal to the socket
  implementation.

After this change, socket allocation and deallocation is entirely
centralized in one file, and uipc_socket2.c consists entirely of socket
buffer manipulation and default protocol switch functions.

MFC after:	1 month
2006-06-10 14:34:07 +00:00
Sam Leffler
47e2996e8b promote fast ipsec's m_clone routine for public use; it is renamed
m_unshare and the caller can now control how mbufs are allocated

Reviewed by:	andre, luigi, mlaier
MFC after:	1 week
2006-03-15 21:11:11 +00:00
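A hedged usage sketch, in the style of its IPsec origin (error handling assumed):

    /* Replace any shared clusters with writable copies before
     * modifying packet data in place. */
    m = m_unshare(m, M_NOWAIT);
    if (m == NULL)
        return (ENOBUFS);
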
John-Mark Gurney
45e0d0aa30 Spell pdata correctly; we now dump only maxlen bytes of each mbuf in the
chain, instead of the entire mbuf.  This should probably be reworked
so that it prints at most maxlen bytes for the entire chain.
2006-03-14 00:22:10 +00:00
John Baldwin
67c0796ca3 For consistency's sake, use >= MINCLSIZE rather than > MINCLSIZE to determine
whether or not to allocate a full mbuf cluster rather than just a plain
mbuf when adding on additional mbufs in m_getm().  In practice, there wasn't
any resulting memory trashing, since m_getm() doesn't ever allocate an mbuf with
a packet header, and MINCLSIZE is the available payload in an mbuf with a
header rather than the available payload in a plain mbuf.

Discussed with: 	andre (lightly)
2006-03-07 21:31:20 +00:00
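The consistent test, in miniature:

    /* At MINCLSIZE and beyond, a cluster mbuf beats chaining plain mbufs. */
    if (len >= MINCLSIZE)
        m = m_getcl(how, type, 0);
    else
        m = m_get(how, type);
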
Andre Oppermann
80444f8803 The sysctls kern.ipc.[max_linkhdr|max_protohdr|max_hdr|max_datalen]
can't be changed from userland.  Make them read-only and provide
descriptions.

kern.ipc.max_datalen must never be less than one byte.  Enforce this
with a panic in net_init_domain().

Sponsored by:	TCP/IP Optimization Fundraise 2005
MFC after:	3 days
2006-02-18 17:16:18 +00:00
Andre Oppermann
ec63cb90a3 Replace the 4k fixed-size jumbo mbuf clusters with PAGE_SIZE sized
jumbo mbuf clusters.  To make the variable size clear, they are named
MJUMPAGESIZE.

Having jumbo clusters with the native PAGE_SIZE is more useful than
a fixed 4k size, according to the device driver writers using this API.
The 9k and 16k jumbo mbuf clusters remain unchanged.

Requested by:	glebius, gallatin
Sponsored by:	TCP/IP Optimization Fundraise 2005
MFC after:	3 days
2006-02-17 14:14:15 +00:00
Ed Maste
63e6f39011 When using m_dup(9) to copy more than MHLEN bytes of data, don't create an
mbuf chain that starts with a cluster containing just MHLEN bytes.  This
happened because m_dup called m_get or m_getcl depending on the amount of
data to copy, but then always set the size available in the first mbuf to
MHLEN.

Submitted by:	Matt Koivisto <mkoivisto at sandvine dot com>
Approved by:	jmg
Silence from:	rwatson (mentor)
2005-12-14 23:34:26 +00:00
Andre Oppermann
d5269a636b Add an API for jumbo mbuf cluster allocation and also provide
4k clusters in addition to 9k and 16k ones.

 struct mbuf *m_getjcl(int how, short type, int flags, int size)
 void *m_cljget(struct mbuf *m, int how, int size)

m_getjcl() returns an mbuf with a cluster of the specified size attached
like m_getcl() does for 2k clusters.

m_cljget() is different from m_clget() as it can allocate clusters
without attaching them to an mbuf.  In that case the return value
is the pointer to the cluster of the requested size.  If an mbuf was
specified, it gets the cluster attached to it and the return value
can be safely ignored.

For size both take MCLBYTES, MJUM4BYTES, MJUM9BYTES, MJUM16BYTES.

Reviewed by:	glebius
Tested by:	glebius
Sponsored by:	TCP/IP Optimization Fundraise 2005
2005-12-08 13:13:06 +00:00
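A usage sketch built on the prototypes above (modern wait-flag spelling; MJUM4BYTES was later renamed MJUMPAGESIZE, per the 2006-02-17 entry above):

    struct mbuf *m;
    void *buf;

    /* Like m_getcl(), but with a 9k jumbo cluster attached. */
    m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUM9BYTES);

    /* A bare cluster; a driver attaches it to an mbuf once filled. */
    buf = m_cljget(NULL, M_NOWAIT, MJUM9BYTES);
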
Andre Oppermann
cd5bb63b3d Free only those mbuf+clusters back to the packet zone that were allocated
from there.  All others get broken up and free'd individually to the mbuf
and cluster zones.

The packet zone is a secondary zone to the mbuf zone.  There is currently
a limitation in UMA which prevents decreasing the packet zone stock when
the mbuf and cluster zone are drained and all their members are part of
packets.  When this is fixed this change may be reverted.
2005-11-05 19:43:55 +00:00
Andre Oppermann
a5f7708723 Fix a logic error introduced with mandatory mbuf cluster refcounting and
freeing of mbufs+clusters back to the packet zone.
2005-11-04 17:20:53 +00:00
Andre Oppermann
56a4e45ab3 Mandatory mbuf cluster reference counting and groundwork for UMA
based jumbo 9k and jumbo 16k cluster support.

All mbufs with external storage attached are mandatorily reference
counted.  For clusters and jumbo clusters UMA provides the refcnt
storage directly.  It does not have to be separately allocated.  Any
other type of external storage gets its own refcnt allocated from
an UMA mbuf refcnt zone instead of normal kernel malloc.

The refcount API MEXT_ADD_REF() and MEXT_REM_REF() is no longer
publicly accessible.  The proper m_* functions have to be used.

mb_ctor_clust() and mb_dtor_clust() both handle normal 2K as well
as 9k and 16k clusters.

Clusters and jumbo clusters may be obtained without attaching them
immediately to an mbuf.  This is for high-performance cluster
allocation in network drivers where mbufs are attached after the
cluster has been filled.

Tested by:	rwatson
Sponsored by:	TCP/IP Optimizations Fundraise 2005
2005-11-02 16:20:36 +00:00
Andre Oppermann
fdcc028d11 Changes and cleanups to m_sanity():
o for() instead of while() looping  over mbuf chain
o paren's around all flag checks
o more verbose function and purpose description
o some more style changes

Based on feedback from:	sam
2005-08-30 21:31:42 +00:00
Andre Oppermann
e0068c3a69 Unbreak m_demote() and put back the 'all' flag. Without it we cannot
correctly test for m_nextpkt in an mbuf chain.
2005-08-30 21:14:30 +00:00
Andre Oppermann
fbe816384a o Remove the 'all' flag from m_demote(). Users can simply call it with
m_demote(m->m_next) if they wish to start at the second mbuf in chain.
o Test m_type with == instead of &.
o Check m_nextpkt against NULL instead of implicit 0.

Based on feedback from:	sam
2005-08-30 20:07:49 +00:00
Andre Oppermann
4da8443133 Add m_copymdata(struct mbuf *m, struct mbuf *n, int off, int len,
int prep, int how).

Copies the data portion of mbuf (chain) n starting from offset off
for length len to mbuf (chain) m.  Depending on prep the copied
data will be appended or prepended.  The function ensures that the
mbuf (chain) m will be fully writeable by making real (not refcnt)
copies of mbuf clusters.  For the prepending the function returns
a pointer to the new start of mbuf chain m and leaves as much
leading space as possible in the new first mbuf.

Reviewed by:	glebius
2005-08-29 20:15:33 +00:00
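A hedged sketch of both directions, following the signature above and assuming a non-zero prep selects prepending:

    /* Append len bytes of n's data, starting at off, to chain m. */
    m = m_copymdata(m, n, off, len, 0, M_NOWAIT);

    /* Prepend instead; the return value is the new head of chain m. */
    m = m_copymdata(m, n, off, len, 1, M_NOWAIT);
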
Andre Oppermann
a048affba5 Add m_sanity(struct mbuf *m, int sanitize) to do some heavy sanity
checking on mbuf's and mbuf chains.  Set sanitize to 1 to garble
illegal things and have them blow up later when used/accessed.

m_sanity()'s main purpose is for KASSERT()s and debugging of non-
kosher mbuf manipulation (of which we have a number).

Reviewed by:	glebius
2005-08-29 19:58:56 +00:00
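Its intended assertion use, in miniature (assuming a non-zero return means the chain is sane):

    /* Non-destructive check; sanitize=0 leaves the chain untouched. */
    KASSERT(m_sanity(m, 0), ("%s: insane mbuf chain %p", __func__, m));
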
Andre Oppermann
ed111688e9 Add m_demote(struct mbuf *m, int all) to clean up mbuf (chain) from
any tags and packet headers.  If "all" is set then the first mbuf
in the chain will be cleaned too.

This function is used before an mbuf, that arrived as a packet with
m->m_flags & M_PKTHDR, is appended to an mbuf chain using m->m_next
(not m->m_nextpkt).

Reviewed by:	glebius
2005-08-29 19:45:39 +00:00
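A sketch of the stated use case:

    /* m arrived with M_PKTHDR set but will be linked via m_next, so
     * strip its tags and header; all=1 cleans the first mbuf too. */
    m_demote(m, 1);
    tail->m_next = m;
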
Sam Leffler
ab8ab90c5b add m_align, a function to align any type of mbuf (i.e. it
is a superset of M_ALIGN and MH_ALIGN)

Reviewed by:	several
2005-07-30 01:32:16 +00:00
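A small usage sketch; unlike the macros, this works whether m is plain, has a pkthdr, or carries a cluster:

    /* Position m_data so that len bytes end flush with the storage,
     * whatever kind of mbuf this is. */
    m_align(m, len);
    m->m_len = len;
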
Maksim Yevmenkin
75ae257016 Change m_uiotombuf so it will accept offset at which data should be copied
to the mbuf. Offset cannot exceed MHLEN bytes. This is currently used to
fix Ethernet header alignment problem on alpha and sparc64. Also change all
users of m_uiotombuf to pass proper offset.

Reviewed by:	jmg, sam
Tested by:	Sten Spans "sten AT blinkenlights DOT nl"
MFC after:	1 week
2005-05-04 18:55:03 +00:00
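A sketch of the alignment fix, assuming the four-argument signature of this era (the flags argument arrived with the 2006-11-02 rework above) and the era's M_DONTWAIT spelling:

    /* Leave ETHER_ALIGN (2) bytes of leading space so the IP header
     * following the 14-byte Ethernet header lands 32-bit aligned. */
    m = m_uiotombuf(uio, M_DONTWAIT, len, ETHER_ALIGN);
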
John-Mark Gurney
7ac139a904 Add the m_copyup function.  This can be used to help make our IP stack less
alignment-restrictive, and to help performance on some Ethernet cards which
currently copy the entire packet a couple of bytes to get the packet aligned
properly.

Wordsmithing by:	dwhite
Obtained from:	NetBSD (code only)
I'll clean it up later:	rwatson
2005-03-17 19:34:57 +00:00
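A hedged usage sketch (dstoff name per the NetBSD original; failure semantics assumed to match m_pullup(9), i.e. NULL return with the chain freed):

    /* Make the first sizeof(struct ip) bytes contiguous at an aligned
     * offset, without the driver shifting the whole packet. */
    m = m_copyup(m, sizeof(struct ip), max_linkhdr);
    if (m == NULL)
        return;
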
Sam Leffler
a4e714295a allow the destination of m_move_pkthdr to have external
storage (e.g. a cluster)

Glanced at by:	rwatson, silby
2005-03-08 17:52:01 +00:00
Alan Cox
2b2c7a6b40 The m_ext reference counts are potentially shared and modified
asynchronously by different threads.  Thus, declare as volatile the
reference count that is accessed through m_ext's pointer, ref_cnt.
Revert the previous change, revision 1.144, that casts as volatile a
single dereference of ref_cnt.

Reviewed by: bmilekic, dwhite
Problem reported by: kris
MFC after: 3 days
2005-03-06 20:09:00 +00:00
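The distinction in miniature: the reverted cast (the entry below) covers only that one access site, while a volatile-qualified declaration makes every access reload from memory:

    volatile u_int *ref_cnt = m->m_ext.ref_cnt;  /* post-change type */

    while (*ref_cnt > 1)   /* re-read each iteration, and every */
        continue;          /* other access site is covered too  */
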
Doug White
a1d0c3f203 Insert volatile cast to discourage gcc from optimizing the read outside
of the while loop.

Suggested by:	alc
MFC after:	1 day
2005-03-03 02:41:37 +00:00
Sam Leffler
59d8b31002 Change m_adj to reclaim unused mbufs instead of zeroing m_len
when trimming space off the back of a chain; this is an indirect
solution to a potential null ptr deref.

Noticed by:	Coverity Prevent analysis tool (null ptr deref)
Reviewed by:	dg, rwatson
2005-02-24 00:40:33 +00:00
Sam Leffler
9d8993bbc5 remove dead code
Noticed by:	Coverity Prevent analysis tool
Reviewed by:	silby
2005-02-23 19:34:44 +00:00
Bosko Milekic
3d2a3ff25e Optimize the way reference counting is performed with Mbufs. We
do not need to perform an extra memory fetch in the Packet (Mbuf+Cluster)
constructor to initialize the reference counter anymore.  The reference
counts are located in a separate memory region (in the slab header,
because this zone is UMA_ZONE_REFCNT), so the memory fetch resulted very
often in a cache miss.  Additionally, and perhaps more significantly,
optimize the free mbuf+cluster (packet) case, which is very common, to
no longer require an atomic operation on free (to verify the reference
counter) if the reference on the cluster has never been increased (also
very common).  On average, this saves one atomic operation per mbuf free.

Original patch submitted by: Gerrit Nagelhout <gnagelhout@sandvine.com>
2005-02-10 22:23:02 +00:00
Poul-Henning Kamp
c711aea6ca Make a bunch of malloc types static.
Found by:	src/tools/tools/kernxref
2005-02-10 12:02:37 +00:00
Warner Losh
9454b2d864 /* -> /*- for copyright notices, minor format tweaks as necessary 2005-01-06 23:35:40 +00:00
Sam Leffler
a37c415e66 fix m_append for case where additional mbufs are required 2004-12-15 19:04:07 +00:00
Sam Leffler
4873d1754f add m_append utility function to be used in forthcoming changes 2004-12-08 05:42:02 +00:00
John-Mark Gurney
7b12509082 Improve the mbuf m_print function.  Only pull the length from the pkthdr if there
is one, detect mbuf loops and stop, add an extra arg so you can print only
the first x bytes of the data per mbuf (print all if the arg is -1), and print
flags using %b (bitmask).

No code in the tree appears to use m_print, and it's just a matter of adding
-1 as an additional arg to m_print to restore the original behavior.

MFC after:	4 days
2004-09-28 18:40:18 +00:00
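Usage after this change, per the note above:

    m_print(m, -1);   /* original behavior: dump all data in each mbuf */
    m_print(m, 64);   /* dump at most 64 bytes of data per mbuf */
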
Bosko Milekic
01e9ccbd9c Back out just a portion of Alfred's last commit. Remove the MBUF_CHECK
(WITNESS) for code paths that always call uma_zalloc_arg() shortly
after where the check was, because uma_zalloc_arg() already does
a similar check.

No objections from Alfred.  Thanks Alfred.
2004-07-21 21:03:01 +00:00
Alfred Perlstein
063d811465 Make sure we don't call mbuf allocation functions with mutexes held.
Discussed with: rwatson
2004-07-21 07:12:24 +00:00