Commit Graph

55 Commits

Author SHA1 Message Date
Bosko Milekic
b618bba486 Use constants less throughout the code and instead use the objsize
variable.  This makes changing the size of an mbuf or cluster for
testing/debugging/whatever purposes easier.

Submitted by: sam
2003-08-16 19:48:52 +00:00
David Malone
e41cbeba6d Now that we can call kmem_malloc without Giant it should be safe
to do mbuf allocation without Giant, so remove the GIANT_REQUIRED
from mb_alloc in the M_TRYWAIT case.
2003-07-27 14:19:23 +00:00
Poul-Henning Kamp
55d1d7034f Don't attempt to inline large functions mb_alloc() and mb_free(),
it more than doubles the text size of this file.

GCC has wisely ignored us on this previously.
2003-07-22 10:24:41 +00:00
David E. O'Brien
677b542ea2 Use __FBSDID(). 2003-06-11 00:56:59 +00:00
Bosko Milekic
479728fd77 Fix a potential bucket leak where when freeing to an empty bucket
we failed to put the bucket back into the general cache/container.

Also, fix a bad assumption.  There was a KASSERT() that aimed to
guarantee that whenever the pcpu container's mc_starved was > 0,
that whatever bucket we were freeing to was an empty bucket,
assuming it belonged to the pcpu container cache. However, there
is at least one case where this is not true anymore; consider:
1) All containers empty, next thread to try to alloc will touch
   a pcpu container, notice it's empty, and increment the pcpu
   container's mc_starved.
2) Some other thread frees an mbuf belonging to a bucket in
   the general cache/container.  Then it frees another mbuf
   belonging to the same bucket (still in gen container).
3) Some third thread tries to allocate an mbuf from the pcpu
   container and, since that container is empty, grabs one mbuf now
   available in the general cache; it takes that mbuf from the
   non-empty bucket to which the thread in (2) freed, and moves
   the bucket to the pcpu container.
4) A final thread tries to free an mbuf belonging to the
   NON-EMPTY bucket mentioned in (2) and (3) and, since
   the pcpu container's mc_starved is > 0, but the bucket
   is obviously non-empty, it trips on the KASSERT.
This meant that one could potentially get a panic in some
cases when out of mbufs and clusters.  The problem could
be mitigated by commenting out some cv_signal() calls,
but I'm assuming that was pure coincidence and this is
the correct fix.
2003-06-03 19:19:13 +00:00
David Malone
0f7e5f778a Add an INVARIANTS-only check to make sure Giant is held if mbuf
allocation is attempted with M_TRYWAIT.

Reviewed by:	bmilekic
Approved by:	re (scottl)
2003-05-29 18:38:24 +00:00
Bosko Milekic
11583f6c93 Make the mb_alloc low-watermark sysctl-tunable read-only and make
netstat(1) not display it for now because its effects are not yet
completely implemented and we're about to cut 5.2-RELEASE.
This is temporary.

Approved by: re (scottl, rwatson)
2003-05-15 19:05:28 +00:00
Bosko Milekic
969bab3efb Make m_freem() just use m_free() instead of duplicating the code. The
reason for the duplication was that m_freem() was meant to eventually
be optimized to hold the lock of the cache being freed to as long as
possible across frees but the difficulty of implementing said
optimization right now is too high, given that in some cases (see MAC
and non-cluster external buffers), we need to call into other subsystems,
something not permissible when the cache lock is held.

This change minimizes code duplication while keeping at least the
atomic mbuf+cluster free optimization.

Suggested by: luigi
2003-05-10 18:08:23 +00:00
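
A minimal sketch of what the non-duplicating m_freem() described above can look like, relying on the fact that m_free() frees one mbuf and returns the next one in the chain:

	void
	m_freem(struct mbuf *m)
	{

		/* m_free() frees one mbuf and hands back its m_next. */
		while (m != NULL)
			m = m_free(m);
	}
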
Mike Silbersack
d563b41e1f Add the M_FREELIST flag, which is used to detect when a
double free of an mbuf occurs and to cause an immediate panic, rather
than allowing free list corruption to occur.

This code is trapped under INVARIANTS, so it should not cause any
change in default performance.

Reviewed by:	a bunch of people on -net
MFC after:	1 week
2003-05-02 03:43:40 +00:00
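
A hedged sketch of how such a double-free check can work under INVARIANTS; the exact assertion text and its placement in the free/alloc paths are assumptions, not quotes from the commit:

	/* In the free path: trap mbufs that are already on a free list. */
	KASSERT((m->m_flags & M_FREELIST) == 0,
	    ("m_free: mbuf %p already on free list", m));
	m->m_flags |= M_FREELIST;	/* marked while sitting on the free list */

	/* In the allocation path the mark is cleared before the mbuf is handed out. */
	m->m_flags &= ~M_FREELIST;
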
Mark Murray
51da11a27a Fix some easy, global, lint warnings. In most cases, this means
making some local variables static. In a couple of cases, this means
removing an unused variable.
2003-04-30 12:57:40 +00:00
Robert Watson
225bff6f8b Move MAC label storage for mbufs into m_tags from the m_pkthdr structure,
returning some additional room in the first mbuf in a chain, and
avoiding feature-specific contents in the mbuf header.  To do this:

- Modify mbuf_to_label() to extract the tag, returning NULL if not
  found.

- Introduce mac_init_mbuf_tag() which does most of the work
  mac_init_mbuf() used to do, except on an m_tag rather than an
  mbuf.

- Scale back mac_init_mbuf() to perform m_tag allocation and invoke
  mac_init_mbuf_tag().

- Replace mac_destroy_mbuf() with mac_destroy_mbuf_tag(), since
  m_tag's are now GC'd deep in the m_tag/mbuf code rather than
  at a higher level when mbufs are directly free()'d.

- Add mac_copy_mbuf_tag() to support m_copy_pkthdr() and related
  notions.

- Generally change all references to mbuf labels so that they use
  mbuf_to_label() rather than &mbuf->m_pkthdr.label.  This
  required no changes in the MAC policies (yay!).

- Tweak mbuf release routines to not call mac_destroy_mbuf(),
  tag destruction takes care of it for us now.

- Remove MAC magic from m_copy_pkthdr() and m_move_pkthdr() --
  the existing m_tag support does all this for us.  Note that
  we can no longer just zero the m_tag list on the target mbuf,
  rather, we have to delete the chain because m_tag's will
  already be hung off freshly allocated mbuf's.

- Tweak m_tag copying routines so that if we're copying a MAC
  m_tag, we don't do a binary copy, rather, we initialize the
  new storage and do a deep copy of the label.

- Remove use of MAC_FLAG_INITIALIZED in a few bizarre places
  having to do with mbuf header copies previously.

- When an mbuf is copied in ip_input(), we no longer need to
  explicitly copy the label because it will get handled by the
  m_tag code now.

- No longer any weird handling of MAC labels in if_loop.c during
  header copies.

- Add MPC_LOADTIME_FLAG_LABELMBUFS flag to Biba, MLS, mac_test.
  In mac_test, handle the label==NULL case, since it can be
  dynamically loaded.

In order to improve performance with this change, introduce the notion
of "lazy MAC label allocation" -- only allocate m_tag storage for MAC
labels if we're running with a policy that uses MAC labels on mbufs.
Policies declare this intent by setting the MPC_LOADTIME_FLAG_LABELMBUFS
flag in their load-time flags field during declaration.  Note: this
opens up the possibility of post-boot policy modules getting back NULL
slot entries even though they have policy invariants of non-NULL slot
entries, as the policy might have been loaded after the mbuf was
allocated, leaving the mbuf without label storage.  Policies that cannot
handle this case must be declared as NOTLATE, or must be modified.

- mac_labelmbufs holds the current cumulative status as to whether
  any policies require mbuf labeling or not.  This is updated whenever
  the active policy set changes by the function mac_policy_updateflags().
  The function iterates the list and checks whether any have the
  flag set.  Write access to this variable is protected by the policy
  list; read access is currently not protected for performance reasons.
  This might change if it causes problems.

- Add MAC_POLICY_LIST_ASSERT_EXCLUSIVE() to permit the flags update
  function to assert appropriate locks.

- This makes allocation in mac_init_mbuf() conditional on the flag.

Reviewed by:	sam
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2003-04-14 20:39:06 +00:00
Robert Watson
aa65d9f538 Use MBTOM() to convert mbuf allocator flags to malloc() flags, rather
than using the same compare/substitute in many places.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2003-04-14 16:04:10 +00:00
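
The point is a single translation from mbuf allocation flags to malloc(9) flags instead of repeated open-coded compares; the definition below is a plausible sketch of such a macro, not a quote of the actual one:

	/* Map the mbuf "how" flag onto the corresponding malloc(9) flag. */
	#define	MBTOM(how)	((how) & M_TRYWAIT ? M_WAITOK : M_NOWAIT)
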
Poul-Henning Kamp
194a0abf73 PHCC[1]:
I had commented the #ifdef INVARIANTS checks out to make sure I ran this
code in all kernels and forgot to comment the #ifdefs back in before I
committed.

Spotted by:	bmilekic

[1] PHCC = Pointy Hat Correction Commit
2003-03-10 20:24:54 +00:00
Poul-Henning Kamp
d3c11994e1 Make malloc and mbuf allocation mode flags nonoverlapping.
Under INVARIANTS whine if we get incompatible flags.

Submitted by:   imp
2003-03-10 19:39:53 +00:00
John Baldwin
263067951a Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
to WITNESS_WARN().
2003-03-04 21:03:05 +00:00
Bosko Milekic
025b4be197 o Allow "buckets" in mb_alloc to be differently sized (according to
  compile-time constants).  That is, a "bucket" now is not necessarily
  a page's worth of mbufs or clusters, but MBUF_BUCK_SZ or CLUS_BUCK_SZ
  worth of mbufs or clusters, respectively.
o Rename {mbuf,clust}_limit to {mbuf,clust}_hiwm and introduce
  {mbuf,clust}_lowm, which currently has no effect but will be used
  to set the low watermarks.
o Fix netstat so that it can deal with the differently-sized buckets
  and teach it about the low watermarks too.
o Make sure the per-cpu stats for an absent CPU have mb_active set to 0,
  explicitly.
o Get rid of the mess of allocating refcounts from the mbuf map.  Instead,
  just malloc() the refcounts in one shot from mbuf_init().
o Clean up / update comments in subr_mbuf.c
2003-02-20 04:26:58 +00:00
Bosko Milekic
ec73437395 Fix a serious bug when computing the index for the
reference counter array for mbuf clusters.  I don't know
how this got past early testing nor how it survived so long
without getting caught.  If anyone was seeing really really
bizarre memory corruption in a few mbufs this would be why.
2003-02-20 03:01:04 +00:00
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Bosko Milekic
9e7225808e Make m_getm() always return the top of the newly allocated chain, as
opposed to returning the top of the old chain when there was one and
the top of the newly allocated chain if there was no old chain.

Actually, it should be noted that prior to this fix, although the
comment above m_getm() advertised that m_getm() would return the
top of the old chain (if an old chain was being passed in), it
actually [wrongly] returned the tail mbuf in the old chain
instead.  This is a bug, but since the one use of m_getm() in
the tree luckily did not depend on the behavior, it happened
to work out without notice.

Harti Brandt pointed out that the advertised behavior was actually
not the real behavior and so this change makes m_getm() ALWAYS
return the newly allocated chain (and fixes the comment).  This
is less confusing and is the best course of action as then the
caller is always able to have both a reference to the top of
the original chain (because it passes it in via the call) and
a reference to the newly attached chain.  Although the API is
slightly modified, I don't think that any third-party code uses
m_getm() and if it does, it surely can't be working properly
because the old behavior was bogus.

API bug pointed out by: Harti Brandt <brandt@fokus.fraunhofer.de>
2003-02-14 16:50:13 +00:00
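
A hedged usage sketch, assuming the m_getm() prototype of that era is struct mbuf *m_getm(struct mbuf *m, int len, int how, int type); the point is that the return value is now always the head of the newly allocated chain:

	struct mbuf *newchain;

	/* Append at least len bytes worth of mbufs/clusters to 'top'. */
	newchain = m_getm(top, len, M_TRYWAIT, MT_DATA);
	if (newchain == NULL)
		return (ENOBUFS);
	/*
	 * 'top' is still the head of the original chain (the caller passed
	 * it in); 'newchain' is the head of the freshly allocated chain
	 * that has been attached to it.
	 */
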
Alfred Perlstein
44956c9863 Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
2003-01-21 08:56:16 +00:00
Andrew Gallatin
1f88bad30a o Introduce a new external mbuf type, EXT_EXTREF.
o Allow callers of m_extadd() to allocate their own reference counter
(pointed to by m_ext.ref_cnt), rather than having the mbuf system allocate it
with a malloc() in the critical path.  This speeds m_extadd() up, and
also simplifies locking (malloc() may need Giant).

A driver or subsystem wishing to use its own ref counter must
initialize m_ext.ref_cnt to point to its ref counter prior to
calling m_extadd(), and it must use EXT_EXTREF as its external type.

Eg:
	 m->m_ext.ref_cnt =  my_ref_cnt_ptr;
	 m_extadd(.....,EXT_EXTREF);

Reviewed by: bosko
2003-01-02 21:16:50 +00:00
Tim J. Robbins
fef82663b8 o Initialise each mbuf's m_len to 0 in m_getm(); mb_put_mem() depends
on this.
o Update the `cur' pointer in the cluster loop in m_getm() to avoid
  incorrect truncation and leaked mbufs.

Reviewed by:	bmilekic
Approved by:	re
2002-11-27 04:26:00 +00:00
Bosko Milekic
a91db09ec0 Fix a fairly subtle bug in mbuf_init() where the contiguous
space for the reference counters was being allocated from the clust_map
instead of the mbuf_map as the comments indicated.  This resulted in
some address space wastage in mbuf_map.

Submitted by: Rohit Jalan <rohjal@yahoo.co.in>
2002-10-16 19:59:08 +00:00
Sam Leffler
5d84645305 Replace aux mbufs with packet tags:
o instead of a list of mbufs use a list of m_tag structures a la openbsd
o for netgraph et. al. extend the stock openbsd m_tag to include a 32-bit
  ABI/module number cookie
o for openbsd compatibility define a well-known cookie MTAG_ABI_COMPAT and
  use this in defining openbsd-compatible m_tag_find and m_tag_get routines
o rewrite KAME use of aux mbufs in terms of packet tags
o eliminate the most heavily used aux mbufs by adding an additional struct
  inpcb parameter to ip_output and ip6_output to allow the IPsec code to
  locate the security policy to apply to outbound packets
o bump __FreeBSD_version so code can be conditionalized
o fixup ipfilter's call to ip_output based on __FreeBSD_version

Reviewed by:	julian, luigi (silent), -arch, -net, darren
Approved by:	julian, silence from everyone else
Obtained from:	openbsd (mostly)
MFC after:	1 month
2002-10-16 01:54:46 +00:00
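
A hedged sketch of the packet tag usage this enables; PACKET_TAG_EXAMPLE and struct my_tag_data are made-up names, and the m_tag_get()/m_tag_prepend()/m_tag_find() shapes are assumed to match the interfaces this commit introduces:

	struct m_tag *mtag;

	/* Attach a tag carrying sizeof(struct my_tag_data) bytes of payload. */
	mtag = m_tag_get(PACKET_TAG_EXAMPLE, sizeof(struct my_tag_data), M_NOWAIT);
	if (mtag == NULL)
		return (ENOMEM);
	bcopy(&data, mtag + 1, sizeof(struct my_tag_data));
	m_tag_prepend(m, mtag);

	/* Later, find it again wherever the packet header mbuf is visible. */
	mtag = m_tag_find(m, PACKET_TAG_EXAMPLE, NULL);
	if (mtag != NULL)
		bcopy(mtag + 1, &data, sizeof(struct my_tag_data));
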
Poul-Henning Kamp
37c841831f Be consistent about "static" functions: if the function is marked
static in its prototype, mark it static at the definition too.

Inspired by:    FlexeLint warning #512
2002-09-28 17:15:38 +00:00
Bosko Milekic
5fee904c3c Make m_flags an int instead of a short; this is consistent with the
type of the 'flags' argument m_getcl() was using anyway; m_extadd()
needed to be changed to accept an int instead of a short for 'flags.'
This makes things more consistent and also gives us more bits to
use for m_flags in the future (we have almost run out).

Requested by: sam (Sam Leffler)
2002-08-15 14:09:16 +00:00
Bosko Milekic
850be9af25 Only my brain can fart while fixing a previous brain fart. 2002-08-08 13:31:57 +00:00
Bosko Milekic
0584320e56 YIKES, I take the pointy-hat for a really big braino here. I
apologize to those of you who may have been seeing crashes in
code that uses sendfile(2) or other types of external buffers
with mbufs.

Pointed out by, and provided trace:
    Niels Chr. Bank-Pedersen <ncbp at bank-pedersen.dk>
2002-08-08 13:29:32 +00:00
Robert Watson
55ac5e1861 Correct a bug introduced in 1.26: M_PKTHDR is set in the 'flags'
argument, not the 'type' argument.  As a result of the bug, the
MAC label on some packet header mbufs might not be set in mbufs
allocated using m_getcl(), resulting in a page fault.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, NAI Labs
2002-08-07 20:15:29 +00:00
Bosko Milekic
abc1263a51 Move the MAC label init/destroy stuff to more appropriate places so that
the inits/destroys are done without the cache locks held even in the
persistent-lock calls.  I may be cheating a little by using the MAC
"already initialized" flag for now.
2002-08-01 14:24:41 +00:00
Robert Watson
a3abeda755 Introduce support for Mandatory Access Control and extensible
kernel access control.

Invoke the necessary MAC entry points to maintain labels on header
mbufs.  In particular, invoke entry points during the two mbuf
header allocation cases, and the mbuf freeing case.  Pass the "how"
argument at allocation time to the MAC framework so that it can
determine if it is permitted to block (as with policy modules),
and permit the initialization entry point to fail if it needs to
allocate memory but is not permitted to, failing the mbuf
allocation.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, NAI Labs
2002-07-31 01:42:19 +00:00
Bosko Milekic
c89137ff90 Make reference counting for mbuf clusters [only] work like in RELENG_4.
While I don't think this is the best solution, it certainly is the
fastest, and in trying to find bottlenecks in network-related code
I want this out of the way so that I don't have to think about it.
What this means, for mbuf clusters anyway is:
- one less malloc() to do for every cluster allocation (replaced with
  a relatively quick calculation + assignment)
- no more free() in the cluster free case (replaced with empty space) :-)

This can offer a substantial throughput improvement, but it may not in
all cases.  It is particularly noticeable for larger buffer sends/recvs.
See http://people.freebsd.org/~bmilekic/code/measure2.txt for a rough
idea.
2002-07-30 21:06:27 +00:00
Bosko Milekic
4151d2e620 Move m_freem() from uipc_mbuf.c to subr_mbuf.c so it can take advantage
of the inlines, like its cousin, m_free().  Also, make a small (first
step?) optimisation of m_free() to use the MBP_PERSIST{,ENT} interface
to hold the lock across frees when possible.  The thing is that right
now, we can only do this easily for at most across one mbuf + one
cluster free, as the comment mentions (it also explains why).  Anyway,
some basic tests revealed a 5-10% overall improvement.  Some of the
results can be found here:
http://people.freebsd.org/~bmilekic/code/measure.txt
2002-07-24 15:11:23 +00:00
Bosko Milekic
dd4ac026f7 Introduce mb_free() to the MBP_PERSIST{,ENT} interface. What this means
is that grouped frees will be done as often as possible without
dropping the cache lock in between.  So, for the most part, they'll be
done without the lock being dropped.  This is particularly true if you
have something that does a grouped m_getm() or m_getcl() (a cluster and
mbuf at the same time) - most likely getting the buffers from the
same per-CPU cache - and then frees them with m_free{,m}().  Unless
the buffers' underlying buckets were moved, the free will be done without
the lock getting dropped in between.  So far, only m_free() has been
shown how to do this, and m_freem() will shortly follow.

Since I'm here, I also fixed a small (but mostly harmless) type-mismatch
introduced in the last commit.
2002-07-23 14:55:33 +00:00
Bosko Milekic
185c2244ce o Introduce new m_getcl() interface routine that allocates an mbuf
and a cluster in one shot.
o Introduce MBP_PERSIST and MBP_PERSISTENT control bits to mb_alloc();
  MBP_PERSIST means "if you can allocate, then keep the cache lock
  held on exit," and MBP_PERSISTENT means "a cache lock is alredy held
  on entry, so allocate from the specified (already locked) cache."
  They may be used in combination.
o m_getcl() uses the MBP_PERSIST/MBP_PERSISTENT interface so that it
  doesn't drop the cache lock in between the mbuf and cluster allocations.
o m_getm(), which takes a size and allocates an mbuf + cluster "best fit"
  chain, has been moved from uipc_mbuf.c to subr_mbuf.c and shown how to
  use MBP_PERSIST/MBP_PERSISTENT to attempt to do a grouped allocation
  without dropping the cache lock in between.

Why this is good: many fewer bus-locked lock acquires/drops when they're
not needed.  Also, prototype for m_getcl():
struct mbuf * m_getcl(int how, short type, int flags);
"how" and "type" are self-explanatory.  "flags" may be M_PKTHDR, in
which case m_getcl() will make the mbuf a pkthdr-mbuf.

While I'm in subr_mbuf.c:
o Every exported routine now has a nice comment with a description of
  the expected arguments.  Eventually, mbuf(9) needs to be re-vamped
  but there's still more code to write/finalize before I get to that.
o internal macros have been changed a bit.
o consistently use 'short' for "type."  This somehow slipped through
  before (that 'type' was sometimes declared as int).

Alfred has been pushing for the MBP_PERSIST{,ENT} thing for almost a
year now.  Luigi asked for m_getcl(), and will probably MFC that
part of this commit.

TODO [Related]: teach mb_free() about MBP_PERSIST{,ENT}.
2002-07-15 15:32:59 +00:00
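
A short usage sketch based on the m_getcl() prototype quoted in this entry; MT_DATA and the error handling are illustrative assumptions:

	struct mbuf *m;

	/* One call yields a pkthdr mbuf with a cluster already attached. */
	m = m_getcl(M_TRYWAIT, MT_DATA, M_PKTHDR);
	if (m == NULL)
		return (ENOBUFS);
	m->m_pkthdr.len = m->m_len = 0;
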
Alfred Perlstein
337f75e11c m_extadd takes a void (*freef)(void *, void *) now, not a
void (*freef)(caddr_t, void *).
2002-06-29 00:01:46 +00:00
Bosko Milekic
ad1026f912 Set system_map to 1 for both mbuf_map and clust_map in mbuf_init().
Submitted by: Tor Egge (tegge)
Pointed out to me by: hsu
2002-06-13 23:53:42 +00:00
Eric Melville
096a727e41 Separate "seperate" from kernel source. 2002-05-16 22:43:20 +00:00
Dag-Erling Smørgrav
733c328439 Remove a printf(3) argument with no corresponding format specifier. 2002-05-14 18:28:06 +00:00
Mike Silbersack
a9caffba47 Change the mbuf exhaustion warning message to match the message
in -stable.
2002-05-09 20:21:07 +00:00
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
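
A hedged example of the new calling convention, assuming the four-argument mtx_init(mtx, name, type, opts) form; example_mtx, sc->sc_mtx and the name strings are made up:

	/* Most callers pass NULL for the type; the name then doubles as it. */
	mtx_init(&example_mtx, "example subsystem lock", NULL, MTX_DEF);

	/* Network driver locks share a common witness type via MTX_NETWORK_LOCK. */
	mtx_init(&sc->sc_mtx, "exdrv0", MTX_NETWORK_LOCK, MTX_DEF);
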
Bosko Milekic
5a4f147089 Fix a bug in mb_alloc that made systems configured with
PAGE_SIZE / MCLBYTES == 1 crash. Fix this by changing the
appropriate "allocate new page and bucket" code in mb_alloc to use
the macro for properly grabbing an allocated object from a bucket,
the one that checks whether the bucket is empty.
This should allow ken to continue testing zero-copy stuff on -CURRENT.

Noticed and provided debug info: ken
2002-03-03 22:10:04 +00:00
Bosko Milekic
56b602dd6a On the first day of Christmas bde gave to me:
A [hopefully] conforming style(9) revamp of mb_alloc and related code.
(This was possible due to bde's remarkable patience.)

Submitted by: (in large part) bde
Reviewed by: (the other part) bde
2001-12-23 22:04:08 +00:00
Bosko Milekic
4878b75e6c Move prototype of _mext_free to mbuf.h, where it belongs, because it is
used in MEXTFREE and needs to be in scope for external MEXTFREE users.

Pointed out by: Chad David <davidc@acns.ab.ca>
Confirmed by: bde
2001-12-22 20:09:08 +00:00
Luigi Rizzo
60363fb9f7 vm/vm_kern.c: rate limit (to once per second) diagnostic printf when
you run out of mbuf address space.

kern/subr_mbuf.c: print a warning message when mb_alloc fails, again
	rate-limited to at most once per second. This covers other
	cases of mbuf allocation failures. Probably it also overlaps the
	one handled in vm/vm_kern.c, so maybe the latter should go away.

This warning will let us gradually remove the printfs that are scattered
across most network drivers to report mbuf allocation failures.
Those are potentially dangerous, in that they are not rate-limited and
can easily cause systems to panic.

Unless there is disagreement (which does not seem to be the case
judging from the discussion on -net so far), and because this is
sort of a safety bugfix, I plan to commit a similar change to STABLE
during the weekend (it affects kern/uipc_mbuf.c there).

Discussed-with: jlemon, silby and -net
2001-12-01 00:21:30 +00:00
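
A hedged sketch of how such a once-per-second warning can be rate-limited, using ppsratecheck(9); whether the commit uses that exact helper and this exact message text is an assumption:

	static struct timeval lasttime;
	static int curpps;

	/* Log at most one warning per second, no matter how often we fail. */
	if (ppsratecheck(&lasttime, &curpps, 1))
		printf("mb_alloc: out of mbuf address space; consider tuning NMBCLUSTERS\n");
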
Bosko Milekic
a705398be0 Context:
For an object type, we maintain a variable mb_mapfull. It is 0 by default
and is only raised to 1 in one place: when an mb_pop_cont() fails for
the first time, on the assumption that the reason for the failure is
due to the underlying map for the object (e.g. clust_map, mbuf_map) being
exhausted.

Problem and Changes:
Change how we define "mb_mapfull." It now means: "set to 1 when the first
mb_pop_cont() fails only in the kmem_malloc()-ing of the object, and
only if the call was with the M_TRYWAIT flag." This is a more conservative
definition and should avoid odd [but theoretically possible] situations
from occurring, i.e. cases where we set mb_mapfull to 1 thinking the map for the
object was actually exhausted when we _actually_ failed in malloc()ing
the space for the bucket structure managing the objects in the page
we're allocating.
2001-11-25 04:42:54 +00:00
Bosko Milekic
70a61707f6 Re-enable mbtypes statistics in the mbuf allocator. I disabled these
when I changed the allocator bits. This implements per-CPU mbtypes
stats by keeping the net number of increments/decrements of a given mbtype
per-CPU and then summing all of the per-CPU mbtypes to produce the total
net number of allocated mbufs of the given mbtype.
Counters are carefully balanced to avoid/prevent underflows/overflows.

mbtypes stats are re-enabled with the idea that we may occasionally
(although very rarely) observe slight inconsistencies in the stat
reporting. Most of the time, we should be fine, though.

Also make appropriate modifications to netstat(1) and systat(1) to do
the necessary reporting.

Submitted by: Jiangyi Liu <jyliu@163.net>
2001-09-30 01:58:39 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doosie!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Bosko Milekic
ba3e88262e Rename mb_init() mbuf subsystem initialization routine to mbuf_init(), in
order to avoid namespace collision with subr_mchain.c's mb_init(). This
wasn't "fatal" as the mbuf initialization routine mb_init() was local to
subr_mbuf.c which in turn didn't pull in subr_mchain.c's mb_init()
declaration, but it should definitely be changed now before it creates
headaches.
2001-08-03 05:05:32 +00:00
Bosko Milekic
bb6f838c79 Move CPU_ABSENT() macro to smp.h, where it belongs anyway. It will be
defined to 0 in the non-SMP case, which very much makes sense as it
permits its usage in per-CPU initialization loops (for an example, check
out subr_mbuf.c).
  Further, on a UP system, make mb_alloc always use the first per-CPU
container, regardless of cpuid (i.e. remove reliance on cpuid in the
UP case).

Requested by: alfred
2001-08-01 00:54:00 +00:00