Commit Graph

24 Commits

bmilekic
5c645322cc Make reference counting for mbuf clusters [only] work like in RELENG_4.
While I don't think this is the best solution, it certainly is the
fastest, and in trying to find bottlenecks in network-related code
I want this out of the way so that I don't have to think about it.
What this means, for mbuf clusters anyway, is:
- one less malloc() for every cluster allocation (replaced with
  a relatively quick calculation + assignment)
- no more free() in the cluster free case (replaced with empty space) :-)

This can offer a substantial throughput improvement, though not in all
cases; it is particularly noticeable for larger buffer sends/recvs.
See http://people.freebsd.org/~bmilekic/code/measure2.txt for a rough
idea.
2002-07-30 21:06:27 +00:00
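
As an illustration of the RELENG_4-style scheme referred to above: the
counter for a cluster lives in a preallocated array and is found by pointer
arithmetic, so taking a reference costs a calculation + assignment and
dropping the last one costs nothing extra. A minimal sketch in C
(clust_base, cl_refcnt, and the array size are illustrative, not the
kernel's actual symbols):

    #include <sys/types.h>

    #define MCLBYTES     2048             /* typical cluster size */
    #define NMBCLUSTERS  1024             /* illustrative map capacity */

    static char  *clust_base;             /* start of the cluster map */
    static u_int  cl_refcnt[NMBCLUSTERS]; /* one counter per cluster slot */

    /* Locate a cluster's reference counter by arithmetic: no malloc()
     * on allocation, no free() when the count drops to zero. */
    static __inline u_int *
    cl_ref(void *cl)
    {
        return (&cl_refcnt[((char *)cl - clust_base) / MCLBYTES]);
    }
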
bmilekic
ffebbf61bc Move m_freem() from uipc_mbuf.c to subr_mbuf.c so it can take advantage
of the inlines, like its cousin, m_free().  Also, make a small (first
step?) optimisation of m_free() to use the MBP_PERSIST{,ENT} interface
to hold the lock across frees when possible.  The thing is that, right
now, we can only do this easily across at most one mbuf + one
cluster free, as the comment mentions (it also explains why).  Anyway,
some basic tests revealed a 5-10% overall improvement.  Some of the
results can be found here:
http://people.freebsd.org/~bmilekic/code/measure.txt
2002-07-24 15:11:23 +00:00
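
A sketch of the optimisation described above, in C (simplified: the real
mb_free() in subr_mbuf.c takes more arguments than shown, and
mb_list_mbuf/mb_list_clust stand in for the allocator's internal lists):

    struct mbuf *
    m_free(struct mbuf *m)
    {
        struct mbuf *n = m->m_next;

        if ((m->m_flags & M_EXT) && (m->m_ext.ext_type == EXT_CLUSTER)) {
            /* MBP_PERSIST: mb_free() returns with the per-CPU cache
             * lock still held... */
            mb_free(&mb_list_mbuf, m, MBP_PERSIST);
            /* ...MBP_PERSISTENT: the cluster is freed under that same
             * lock, with no unlock/lock pair in between. */
            mb_free(&mb_list_clust, m->m_ext.ext_buf, MBP_PERSISTENT);
        } else
            mb_free(&mb_list_mbuf, m, 0);
        return (n);
    }
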
bmilekic
93a279e095 Introduce mb_free() to the MBP_PERSIST{,ENT} interface. What this means
is that grouped frees will be done as often as possible without
dropping the cache lock in between.  So, for the most part, they'll be
done without the lock being dropped.  This is particularly true if you
have something that does a grouped m_getm() or m_getcl() (a cluster and
mbuf at the same time) - most likely getting the buffers from the
same per-CPU cache - and then frees them with m_free{,m}().  Unless
the buffers' underlying buckets were moved, the free will be done without
the lock getting dropped in between.  So far, only m_free() has been
shown how to do this, and m_freem() will shortly follow.

Since I'm here, I also fixed a small (but mostly harmless) type-mismatch
introduced in the last commit.
2002-07-23 14:55:33 +00:00
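
For reference, the chain walk these grouped frees accelerate is the classic
m_freem() loop; a sketch (per the commit message, m_freem() itself is still
to be taught to keep the lock held across iterations):

    void
    m_freem(struct mbuf *m)
    {
        while (m != NULL)
            m = m_free(m);      /* m_free() returns the next mbuf in chain */
    }
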
bmilekic
a574df8241 o Introduce new m_getcl() interface routine that allocates an mbuf
and a cluster in one shot.
o Introduce MBP_PERSIST and MBP_PERSISTENT control bits to mb_alloc();
  MBP_PERSIST means "if you can allocate, then keep the cache lock
  held on exit," and MBP_PERSISTENT means "a cache lock is already held
  on entry, so allocate from the specified (already locked) cache."
  They may be used in combination.
o m_getcl() uses the MBP_PERSIST/MBP_PERSISTENT interface so that it
  doesn't drop the cache lock in between the mbuf and cluster allocations.
o m_getm(), which takes a size and allocates an mbuf + cluster "best fit"
  chain, has been moved from uipc_mbuf.c to subr_mbuf.c and shown how to
  use MBP_PERSIST/MBP_PERSISTENT to attempt to do a grouped allocation
  without dropping the cache lock in between.

Why this is good: far fewer bus-locked lock acquires/drops when they're
not needed.  Also, the prototype for m_getcl():
struct mbuf * m_getcl(int how, short type, int flags);
"how" and "type" are self-explanatory.  "flags" may be M_PKTHDR, in
which case m_getcl() will make the mbuf a pkthdr-mbuf.

While I'm in subr_mbuf.c:
o Every exported routine now has a nice comment with a description of
  the expected arguments.  Eventually, mbuf(9) needs to be revamped,
  but there's still more code to write/finalize before I get to that.
o Internal macros have been changed a bit.
o Consistently use 'short' for "type."  This somehow slipped through
  before (that 'type' was sometimes declared as int).

Alfred has been pushing for the MBP_PERSIST{,ENT} thing for almost a
year now.  Luigi asked for m_getcl(), and will probably MFC that
part of this commit.

TODO [Related]: teach mb_free() about MBP_PERSIST{,ENT}.
2002-07-15 15:32:59 +00:00
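
Using the prototype above, a typical non-blocking allocation might look
like this (a sketch; example_alloc is an illustrative caller):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <sys/mbuf.h>

    static int
    example_alloc(struct mbuf **mp)
    {
        struct mbuf *m;

        /* One call gets both the mbuf and the cluster, ideally from the
         * same per-CPU cache; M_PKTHDR makes it a pkthdr-mbuf. */
        m = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);
        if (m == NULL)
            return (ENOBUFS);   /* nothing partial to clean up */
        *mp = m;
        return (0);
    }
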
alfred
65984a1f06 m_extadd takes a void (*freef)(void *, void *) now, not a
void (*freef)(caddr_t, void *).
2002-06-29 00:01:46 +00:00
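
Accordingly, an external-storage free routine now looks like this (a
sketch; my_ext_free and the M_DEVBUF malloc type are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/mbuf.h>

    /* First argument is the external buffer itself; the second is the
     * opaque "args" pointer registered at m_extadd() time. */
    static void
    my_ext_free(void *buf, void *args)
    {
        free(buf, M_DEVBUF);
    }
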
bmilekic
86f2253b6a Set system_map for both mbuf_map and clust_map to 1, in mbuf_init().
Submitted by: Tor Egge (tegge)
Pointed out to me by: hsu
2002-06-13 23:53:42 +00:00
eric
4579e1dcd0 Separate "seperate" from kernel source. 2002-05-16 22:43:20 +00:00
des
f2d1d92921 Remove a printf(3) argument with no corresponding format specifier. 2002-05-14 18:28:06 +00:00
silby
f3419e2e8f Change the mbuf exhaustion warning message to match the message
in -stable.
2002-05-09 20:21:07 +00:00
jhb
db9aa81e23 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
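
The two calling styles described above, as a sketch (foo_softc, the lock
names, and the device field are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bus.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    struct foo_softc {
        device_t    sc_dev;
        struct mtx  sc_mtx;
    };

    static struct mtx foo_list_mtx;

    static void
    foo_locks_init(struct foo_softc *sc)
    {
        /* Common case: NULL lock type, so the name doubles as the type. */
        mtx_init(&foo_list_mtx, "foo list lock", NULL, MTX_DEF);

        /* Network-driver case: a shared type (MTX_NETWORK_LOCK) groups
         * all such driver locks into one class for witness(4). */
        mtx_init(&sc->sc_mtx, device_get_nameunit(sc->sc_dev),
            MTX_NETWORK_LOCK, MTX_DEF);
    }
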
bmilekic
4d41cd259b Fix a bug in mb_alloc that made systems configured with
PAGE_SIZE / MCLBYTES == 1 crash. Fix it by changing the
appropriate "allocate new page and bucket" code in mb_alloc to use
the macro for properly grabbing an allocated object from a bucket,
the one that checks whether the bucket is empty.
This should allow ken to continue testing zero-copy stuff on -CURRENT.

Noticed and provided debug info: ken
2002-03-03 22:10:04 +00:00
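
The failure mode is easiest to see in a sketch: with PAGE_SIZE / MCLBYTES
== 1, a freshly allocated page yields a bucket holding exactly one object,
so grabbing an object must also handle the "bucket is now empty" case.
Hypothetical shapes only; the macro and field names below are invented for
illustration (SLIST macros from sys/queue.h):

    #define MB_GET_OBJECT(obj, bucket, cnt_lst) do {                    \
        (obj) = (bucket)->mb_free[--(bucket)->mb_numfree];              \
        if ((bucket)->mb_numfree == 0)                                  \
            /* last object taken: unlink the now-empty bucket */       \
            SLIST_REMOVE_HEAD(&(cnt_lst)->mb_blist, mb_blink);          \
    } while (0)
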
bmilekic
965b8e2ef2 On the first day of Christmas bde gave to me:
A [hopefully] conforming style(9) revamp of mb_alloc and related code.
(This was possible due to bde's remarkable patience.)

Submitted by: (in large part) bde
Reviewed by: (the other part) bde
2001-12-23 22:04:08 +00:00
bmilekic
54e5874ed2 Move prototype of _mext_free to mbuf.h, where it belongs, because it is
used in MEXTFREE and needs to be in scope for external MEXTFREE users.

Pointed out by: Chad David <davidc@acns.ab.ca>
Confirmed by: bde
2001-12-22 20:09:08 +00:00
luigi
0d72b82e2e vm/vm_kern.c: rate limit (to once per second) diagnostic printf when
you run out of mbuf address space.

kern/subr_mbuf.c: print a warning message when mb_alloc fails, again
	rate-limited to at most once per second. This covers other
	cases of mbuf allocation failures. Probably it also overlaps the
	one handled in vm/vm_kern.c, so maybe the latter should go away.

This warning will let us gradually remove the printfs that are scattered
across most network drivers to report mbuf allocation failures.
Those are potentially dangerous, in that they are not rate-limited and
can easily cause systems to panic.

Unless there is disagreement (which does not seem to be the case
judging from the discussion on -net so far), and because this is
sort of a safety bugfix, I plan to commit a similar change to STABLE
during the weekend (it affects kern/uipc_mbuf.c there).

Discussed-with: jlemon, silby and -net
2001-12-01 00:21:30 +00:00
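
A sketch of the rate limiting, using the kernel's ppsratecheck() (the
message text and variable names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/time.h>

    static struct timeval mb_lasterr;   /* last time we complained */
    static int mb_curerrs;              /* failures seen this second */

    /* Print at most one diagnostic per second, no matter how many
     * allocation failures actually occur. */
    if (ppsratecheck(&mb_lasterr, &mb_curerrs, 1))
        printf("mb_alloc: allocation failed, mbuf resources exhausted\n");
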
bmilekic
0dbfbc0131 Context:
For an object type, we maintain a variable mb_mapfull. It is 0 by default
and is only raised to 1 in one place: when an mb_pop_cont() fails for
the first time, on the assumption that the reason for the failure is
due to the underlying map for the object (e.g. clust_map, mbuf_map) being
exhausted.

Problem and Changes:
Change how we define "mb_mapfull." It now means: "set to 1 when the first
mb_pop_cont() fails only in the kmem_malloc()-ing of the object, and
only if the call was with the M_TRYWAIT flag." This is a more conservative
definition and should prevent odd [but theoretically possible] situations
from occurring, i.e., setting mb_mapfull to 1 thinking the map for the
object was actually exhausted when we _actually_ failed in malloc()ing
the space for the bucket structure managing the objects in the page
we're allocating.
2001-11-25 04:42:54 +00:00
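
A sketch of the narrowed rule (ml_map and ml_mapfull are illustrative
field names, and the kmem_malloc() call is simplified):

    p = (caddr_t)kmem_malloc(mb_list->ml_map, PAGE_SIZE,
        (how == M_TRYWAIT) ? M_WAITOK : M_NOWAIT);
    if (p == NULL) {
        /* Only a failed *blocking* allocation of object space is taken
         * as proof that the underlying map is exhausted. */
        if (how == M_TRYWAIT)
            mb_list->ml_mapfull = 1;
        return (NULL);
    }
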
bmilekic
5b4fe25981 Re-enable mbtypes statistics in the mbuf allocator. I disabled these
when I changed the allocator bits. This implements per-CPU mbtypes
stats by keeping net number of decrements/increments of a given mbtype
per-CPU and then summing all of the per-CPU mbtypes to produce the total
net number of allocated mbufs of the given mbtype.
Counters are carefully balanced to prevent underflows/overflows.

mbtypes stats are re-enabled with the idea that we may occasionally
(although very rarely) observe slight inconsistencies in the stat
reporting. Most of the time, we should be fine, though.

Also make appropriate modifications to netstat(1) and systat(1) to do
the necessary reporting.

Submitted by: Jiangyi Liu <jyliu@163.net>
2001-09-30 01:58:39 +00:00
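
A sketch of the summing step (mbstat_percpu and its layout are
illustrative): each per-CPU entry holds that CPU's net increments minus
decrements for a type, so a single entry may be negative; only the sum
across CPUs is the meaningful total.

    long total = 0;
    int cpu;

    for (cpu = 0; cpu < NCPU; cpu++) {
        if (CPU_ABSENT(cpu))
            continue;
        total += mbstat_percpu[cpu].mb_mbtypes[mtype];
    }
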
julian
5596676e6c KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
Make the kernel aware that there are smaller units of scheduling than the
process (but only allow one thread per process at this time).
This is functionally equivalent to the previous -CURRENT, except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
bmilekic
4dfee5e935 Rename the mb_init() mbuf subsystem initialization routine to mbuf_init(),
in order to avoid a namespace collision with subr_mchain.c's mb_init(). This
wasn't "fatal," as the mbuf initialization routine mb_init() was local to
subr_mbuf.c, which in turn didn't pull in subr_mchain.c's mb_init()
declaration, but it should definitely be changed now, before it creates
headaches.
2001-08-03 05:05:32 +00:00
bmilekic
cceec8d181 Move CPU_ABSENT() macro to smp.h, where it belongs anyway. It will be
defined to 0 in the non-SMP case, which very much makes sense as it
permits its usage in per-CPU initialization loops (for an example, check
out subr_mbuf.c).
  Further, on a UP system, make mb_alloc always use the first per-CPU
container, regardless of cpuid (i.e., remove the reliance on cpuid in the
UP case).

Requested by: alfred
2001-08-01 00:54:00 +00:00
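
The shape of the change, as a sketch (the SMP-case expression is
illustrative):

    #ifdef SMP
    #define CPU_ABSENT(x)   ((all_cpus & (1 << (x))) == 0)
    #else
    #define CPU_ABSENT(x)   0   /* UP: the one CPU is always present */
    #endif

With the UP case fixed at 0, a per-CPU loop can simply say
"if (CPU_ABSENT(i)) continue;" with no #ifdef.
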
peter
94613ac7da Use the tunable maxusers rather than the compile-time one. Evaluate and
initialize in the right order to make derivative settings work right.
e.g., at compile time, nmbufs was double nmbclusters.  For POLA this should
work the same at runtime.
2001-07-26 23:08:31 +00:00
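
A sketch of the ordering (the formula and the tunable-fetch spelling are
illustrative, not the committed code):

    /* Derive in dependency order: fetch the tunable first, then compute
     * dependent values, so overriding nmbclusters also scales nmbufs. */
    nmbclusters = 1024 + maxusers * 64;
    TUNABLE_INT_FETCH("kern.ipc.nmbclusters", &nmbclusters);
    nmbufs = nmbclusters * 2;   /* keep nmbufs double nmbclusters */
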
bmilekic
0caeab3ccd - Do not handle the per-CPU containers in mbuf code as though the cpuids
  were indices in a dense array. The cpuids are a sparse set; treat them
  as such, setting up containers only for CPUs activated during
  mb_init().

- Fix netstat(1) and systat(1) to treat the per-CPU stats area as a sparse
  map, in accordance with the above.

This allows us to properly boot with certain CPUs deactivated. However, if
we later decide to re-activate said CPUs, we will barf until we decide to
implement CPU spinon/spinoff callback hooks to allow for said CPUs' per-CPU
containers to get configured on their activation.

Reported by: mjacob
Partially (sys/ diffs) Submitted by: mjacob
2001-07-26 18:47:46 +00:00
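
A sketch of the sparse setup loop (structure names illustrative):

    for (i = 0; i < MAXCPU; i++) {
        if (CPU_ABSENT(i))
            continue;   /* leave absent cpuids' slots untouched */
        cnt_lst = &mb_list->ml_cntlst[i];
        /* ... initialize this CPU's container, lock, and buckets ... */
    }
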
obrien
610c4dc6f4 Increase NMBCLUSTERS by 4x.
This takes a GENERIC kernel (MAXUSERS=32) from 1536 to 3072.
2001-07-17 15:51:12 +00:00
mjacob
4127a25756 Temporary fix at least: define NCPU_PRESENT, which will be mp_ncpus for
SMP kernels, one (1) for non-SMP.
2001-06-22 16:03:23 +00:00
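
I.e., as a sketch:

    #ifdef SMP
    #define NCPU_PRESENT    mp_ncpus
    #else
    #define NCPU_PRESENT    1
    #endif
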
bmilekic
5d710b296b Introduce numerous SMP friendly changes to the mbuf allocator. Namely,
introduce a modified allocation mechanism for mbufs and mbuf clusters, one
which can scale under SMP and which makes it possible to implement resource
reclamation in the future. Notable advantages:

 o Reduce contention for SMP by offering per-CPU pools and locks.
 o Better use of data cache due to per-CPU pools.
 o Much less code cache pollution now that the excessively large allocation
   macros are gone.
 o Framework for `grouping' objects from the same page together so as to be
   able to possibly free wired-down pages back to the system if they are no
   longer needed by the network stacks.

 Additional things changed with this addition:

  - Moved some mbuf specific declarations and initializations from
    sys/conf/param.c into mbuf-specific code where they belong.
  - m_getclr() has been renamed to m_get_clrd() because the old name is really
    confusing. m_getclr() HAS been preserved though and is defined to the new
    name. No tree sweep has been done "to change the interface," as the old
    name will continue to be supported and is not deprecated. The change was
    merely done because m_getclr() sounds too much like "m_get a cluster."
  - TEMPORARILY disabled mbtypes statistics display in netstat(1) and
    systat(1) (see TODO below).
  - Fixed systat(1) to display number of "free mbufs" based on new per-CPU
    stat structures.
  - Fixed netstat(1) to display new per-CPU stats based on sysctl-exported
    per-CPU stat structures. All info is fetched via sysctl.

 TODO (in order of priority):

  - Re-enable mbtypes statistics in both netstat(1) and systat(1) after
    introducing an SMP friendly way to collect the mbtypes stats under the
    already introduced per-CPU locks (i.e. hopefully don't use atomic() - it
    seems too costly for a mere stat update, especially when other locks are
    already present).
  - Optionally have systat(1) display not only "total free mbufs" but also
    "total free mbufs per CPU pool."
  - Fix minor length-fetching issues in netstat(1) related to the recently
    re-enabled option to read mbuf stats from a core file.
  - Move reference counters, at least for mbuf clusters, into an unused
    portion of the cluster itself, to save space and avoid allocating a
    separate counter.
  - Look into introducing resource freeing possibly from a kproc.

Reviewed by (in parts): jlemon, jake, silby, terry
Tested by: jlemon (Intel & Alpha), mjacob (Intel & Alpha)
Preliminary performance measurements: jlemon (and me, obviously)
URL: http://people.freebsd.org/~bmilekic/mb_alloc/
2001-06-22 06:35:32 +00:00
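
A sketch of the per-CPU fast path this introduces (helper and field names
are illustrative, loosely following the design at the URL above):

    void *
    mb_alloc(struct mb_lstmngr *mb_list, int how)
    {
        struct mb_pcpu_list *cnt_lst;
        void *obj;

        /* Lock only this CPU's cache; allocations running on other
         * CPUs take other locks and don't contend. */
        cnt_lst = &mb_list->ml_cntlst[PCPU_GET(cpuid)];
        mtx_lock(&cnt_lst->mb_lock);
        obj = mb_bucket_get(cnt_lst);   /* pop from a per-CPU bucket */
        mtx_unlock(&cnt_lst->mb_lock);

        if (obj == NULL)    /* local miss: fall back to the general cache */
            obj = mb_alloc_general(mb_list, how);
        return (obj);
    }
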