Commit Graph

201 Commits

Author SHA1 Message Date
rwatson
c27fece07b Revert a portion of kern_malloc.c:1.99, which (in addition to adding
malloc profiling) also modified the set of pre-defined buckets for the
memory allocator.  For reasons unknown to me, this resulted in extensive
memory corruption in the kernel, in particular on SMP boxes, so I'm
committing this work-around until Jeff gets a chance to debug it
properly.  David Wolfskill pointed me at this commit as the one that
might be a problem; I've been running this code on two dual-processor
burn-in boxes for about 12 hours now, and the rate of panics due to
memory corruption has dropped to zero (from one every five minutes).

Hopefully not treading on the toes of:	jeff
2002-04-29 17:12:02 +00:00
phk
834fdde07a Add a basic sanity check on pointers passed to free(9).
Should be improved by:	jeff
2002-04-23 18:50:25 +00:00
jeff
6cb876e7dd Finish adding support code for sysctl kern.mprof. This dumps some malloc
information related to bucket size efficiency.  Three things are printed on
each row:

Size is the size the user actually asked for rounded to 16 bytes.
Requests is the number of times this size was asked for.
Real Size is the size we actually handed out.

At the end the total memory used and total waste is displayed.  Currently my
system displays about 33% wasted memory.

The intent of this code is to gather statistics for tuning the malloc bucket
sizes.  It is not intended to be run with INVARIANTS and it is not entirely
mp safe.  It can be enabled via 'options MALLOC_PROFILE' which was committed
earlier.
2002-04-15 05:24:01 +00:00
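For illustration, a minimal sketch of the per-size accounting described in the commit above; the identifiers here are hypothetical and not necessarily those used in kern_malloc.c:

        /* Requests are rounded to 16-byte granules ("Size"), counted per
         * granule ("Requests"), and compared against the bucket size that
         * was actually handed out ("Real Size"). */
        size_t asked = roundup(size, 16);
        krequests[asked >> 4]++;        /* hypothetical per-size counter */
        memuse += realsize;             /* total memory handed out */
        wasted += realsize - asked;     /* accumulated waste */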
jeff
da6660250e Remove malloc_type's ks_limit.
Updated the kmemzones logic such that the ks_size bitmap can be used as an
index into it to report the size of the zone used.

Create the kern.malloc sysctl which replaces the kvm mechanism to report
similar data.  This will provide an easy place for statistics aggregation if
malloc_type statistics become per cpu data.

Add some code ifdef'd under MALLOC_PROFILING to facilitate a tool for sizing
the malloc buckets.
2002-04-15 04:05:53 +00:00
jhb
db9aa81e23 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
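An illustrative sketch of the new calling convention; the softc and mutex names are placeholders, while MTX_NETWORK_LOCK and device_get_nameunit() are the interfaces named above:

        struct mtx foo_mtx;
        mtx_init(&foo_mtx, "foo lock", NULL, MTX_DEF);  /* common case: NULL type */
        mtx_init(&sc->sc_mtx, device_get_nameunit(dev), /* network driver case */
            MTX_NETWORK_LOCK, MTX_DEF);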
alfred
357e37e023 Remove __P. 2002-03-19 21:25:46 +00:00
jeff
2923687da3 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
archie
4ff8306186 Add realloc() and reallocf(), and make free(NULL, ...) acceptable.
Reviewed by:	alfred
2002-03-13 01:42:33 +00:00
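A minimal usage sketch of the interfaces added above (M_TEMP is an existing malloc type; the sizes are arbitrary):

        void *p = malloc(64, M_TEMP, M_NOWAIT);
        p = reallocf(p, 128, M_TEMP, M_NOWAIT); /* frees the old block if the resize fails */
        free(p, M_TEMP);                        /* free(NULL, ...) is now accepted */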
julian
5596676e6c KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
jhb
6c77154134 - Remove asleep(), await(), and M_ASLEEP.
- Callers of asleep() and await() have been converted to calling tsleep().
  The only caller outside of M_ASLEEP was the ata driver, which called both
  asleep() and await() with spl-raised, so there was no need for the
  asleep() and await() pair.  M_ASLEEP was unused.

Reviewed by:	jasone, peter
2001-08-10 06:45:43 +00:00
bmilekic
4dfee5e935 Rename mb_init() mbuf subsystem initialization routine to mbuf_init(), in
order to avoid namespace collision with subr_mchain.c's mb_init(). This
wasn't "fatal" as the mbuf initialization routine mb_init() was local to
subr_mbuf.c which in turn didn't pull in subr_mchain.c's mb_init()
declaration, but it should definitely be changed now before it creates
headaches.
2001-08-03 05:05:32 +00:00
jake
258022abac Remove some code that appears to have endian problems with INVARIANTS.
This is #if BIG_ENDIAN, but is only necessary if malloc types are shorts,
not struct malloc_type * like they are now.
2001-08-03 03:31:45 +00:00
bmilekic
5d710b296b Introduce numerous SMP friendly changes to the mbuf allocator. Namely,
introduce a modified allocation mechanism for mbufs and mbuf clusters; one
which can scale under SMP and which offers the possibility of resource
reclamation to be implemented in the future. Notable advantages:

 o Reduce contention for SMP by offering per-CPU pools and locks.
 o Better use of data cache due to per-CPU pools.
 o Much less code cache pollution due to excessively large allocation macros.
 o Framework for `grouping' objects from same page together so as to be able
   to possibly free wired-down pages back to the system if they are no longer
   needed by the network stacks.

 Additional things changed with this addition:

  - Moved some mbuf specific declarations and initializations from
    sys/conf/param.c into mbuf-specific code where they belong.
  - m_getclr() has been renamed to m_get_clrd() because the old name is really
    confusing. m_getclr() HAS been preserved though and is defined to the new
    name. No tree sweep has been done "to change the interface," as the old
    name will continue to be supported and is not deprecated. The change was
    merely done because m_getclr() sounds too much like "m_get a cluster."
  - TEMPORARILY disabled mbtypes statistics displaying in netstat(1) and
    systat(1) (see TODO below).
  - Fixed systat(1) to display number of "free mbufs" based on new per-CPU
    stat structures.
  - Fixed netstat(1) to display new per-CPU stats based on sysctl-exported
    per-CPU stat structures. All info is fetched via sysctl.

 TODO (in order of priority):

  - Re-enable mbtypes statistics in both netstat(1) and systat(1) after
    introducing an SMP friendly way to collect the mbtypes stats under the
    already introduced per-CPU locks (i.e. hopefully don't use atomic() - it
    seems too costly for a mere stat update, especially when other locks are
    already present).
  - Optionally have systat(1) display not only "total free mbufs" but also
    "total free mbufs per CPU pool."
  - Fix minor length-fetching issues in netstat(1) related to recently
    re-enabled option to read mbuf stats from a core file.
  - Move reference counters at least for mbuf clusters into an unused portion
    of the cluster itself, to save space and avoid the need to allocate a counter.
  - Look into introducing resource freeing possibly from a kproc.

Reviewed by (in parts): jlemon, jake, silby, terry
Tested by: jlemon (Intel & Alpha), mjacob (Intel & Alpha)
Preliminary performance measurements: jlemon (and me, obviously)
URL: http://people.freebsd.org/~bmilekic/mb_alloc/
2001-06-22 06:35:32 +00:00
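A small sketch of the rename noted above, assuming the call signature mirrors m_get(); error handling is omitted:

        struct mbuf *m;
        m = m_get_clrd(M_DONTWAIT, MT_DATA);    /* zeroed mbuf; m_getclr() remains an alias */
        if (m != NULL)
                m_freem(m);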
peter
4b91e2ecf0 "Fix" the previous initial attempt at fixing TUNABLE_INT(). This time
around, use a common function for looking up and extracting the tunables
from the kernel environment.  This saves duplicating the same function
over and over again.  This way typically has an overhead of 8 bytes + the
path string, versus about 26 bytes + the path string.
2001-06-08 05:24:21 +00:00
peter
c1df44ae51 Back out part of my previous commit. This was a last minute change
and I botched testing.  This is a perfect example of how NOT to do
this sort of thing. :-(
2001-06-07 03:17:26 +00:00
peter
0732738ec4 Make the TUNABLE_*() macros look and behave more consistently like the
SYSCTL_*() macros.  TUNABLE_INT_DECL() was an odd name because it didn't
actually declare the int, which is what the name suggests it would do.
2001-06-06 22:17:08 +00:00
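A hedged sketch of the resulting usage pattern; the tunable name and variable are hypothetical:

        static int foo_maxunits = 16;
        TUNABLE_INT("kern.foo.maxunits", &foo_maxunits);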
markm
bcca5847d5 Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
other "system" header files.

Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.

Sort sys/*.h includes where possible in affected files.

OK'ed by:	bde (with reservations)
2001-05-01 08:13:21 +00:00
bmilekic
b857e0ac23 Fix inconsistency in setup of kernel_map: we need to make sure that
we also reserve _adequate_ space for the mb_map submap; i.e. we need
space for nmbclusters, nmbufs, _and_ nmbcnt. Furthermore, we need to
rounddown, and not roundup, so that we are consistent.

Pointed out by: bde
2001-04-18 23:54:13 +00:00
bmilekic
f364d4ac36 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
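To make the interface change concrete, a before/after sketch using the locks named in the commit:

        /* old interface */
        mtx_enter(&sched_lock, MTX_SPIN);
        mtx_exit(&sched_lock, MTX_SPIN);

        /* new interface */
        mtx_lock_spin(&sched_lock);
        mtx_unlock_spin(&sched_lock);

        mtx_lock(&Giant);                       /* MTX_DEF (sleep) locks */
        mtx_unlock(&Giant);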
bp
076c2e70fd Let M_PANIC go back to the private tree, as its intention isn't well understood
for now.
2001-01-31 04:50:20 +00:00
bp
e88e663eb2 Add M_PANIC flag to the list of available flags passed to malloc().
With this flag set malloc() will panic if memory allocation failed.
This usable only in critical places where failed allocation is fatal.

Reviewed by:	peter
2001-01-29 12:48:37 +00:00
peter
ee90d51e9c p->p_intr_nesting_level is MI now and initialized to 0 in kern_fork.c,
so it should be safe to KASSERT() on it even on an arch that may not
use it.
2001-01-27 06:32:20 +00:00
jhb
fa0d2b97b0 Don't grab Giant when calling kmem_alloc/kmem_free as this is just
encouraging other people to follow the same practice.  If this is going
to be done, then it should be done inside of those two functions instead.
2001-01-24 00:36:03 +00:00
jake
937122ae6d Make intr_nesting_level per-process, rather than per-cpu. Setup
interrupt threads to run with it always >= 1, so that malloc can
detect M_WAITOK from "interrupt" context.  This is also necessary
in order to context switch from sched_ithd() directly.

Reviewed By:	peter
2001-01-21 19:25:07 +00:00
jasone
24d53563ed Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.

This change is necessary in order to avoid some circular bootstrapping
dependencies.
2001-01-21 07:52:20 +00:00
jake
4f5d8ed825 Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other than curproc.
2001-01-10 04:43:51 +00:00
phk
55e86a81b7 Introduce the M_ZERO flag to malloc(9)
Instead of:

        foo = malloc(sizeof(foo), M_WAIT);
        bzero(foo, sizeof(foo));

You can now (and please do) use:

        foo = malloc(sizeof(foo), M_WAIT | M_ZERO);

In the future this will enable us to do idle-time pre-zeroing of
malloc-space.
2000-10-20 17:54:55 +00:00
jhb
6231c1d6d8 - machine/mutex.h -> sys/mutex.h
- Use MUTEX_DECLARE() and MTX_COLD for the malloc_mtx mutex
2000-10-20 07:29:16 +00:00
jasone
0633919abe Don't #include <sys/proc.h>, since machine/mutex.h does it now. 2000-09-23 00:01:37 +00:00
jhb
ebc05310ca Remove the mtx_t, witness_t, and witness_blessed_t types. Instead, just
use struct mtx, struct witness, and struct witness_blessed.

Requested by:	bde
2000-09-14 20:15:16 +00:00
jasone
1c1433e6c3 Add malloc_mtx to protect malloc and friends, so that they're thread-safe.
Reviewed by:	peter
2000-09-11 02:32:30 +00:00
jasone
4aa7af1bcf Back out the addition of malloc_mtx. It was incompletely conceived, and
will be done correctly in the future.
2000-09-10 01:54:15 +00:00
jasone
ee0ea20aad Add a mutex to the malloc interfaces so that it can safely be called
without owning the Giant lock.
2000-09-09 22:27:35 +00:00
bp
9043d9b4b6 Move #ifdef to the right place. 2000-06-29 09:26:26 +00:00
bp
12d92ca484 If kernel compiled with INVARIANTS:
On unload, remove references from the freelist to the memory type defined by the module.
Print a warning if a module defines and allocates its own memory type but
didn't free it all on unload.

Reviewed by:	peter
2000-06-29 03:41:30 +00:00
bde
76169e6b33 sys/malloc.h:
Order the SYSINIT() for MALLOC_DEFINE() correctly so that malloc()
doesn't have to waste time initializing itself.  The
(SI_SUB_KMEM, SI_ORDER_ANY) order was shared with syscons' SYSINIT()
for scmeminit(), and scmeminit() calls malloc(), so malloc()
initialization was not always complete on the first call to malloc().

kern/kern_malloc.c:
- Removed self-initialization in malloc().
- Removed half-baked sanity check in free().  Trust MALLOC_DEFINE().
2000-06-14 18:31:42 +00:00
kuriyama
8a75c162dc Print "previous type" correctly when INVARIANTS is defined.
Reviewed by:	current@FreeBSD.org
2000-03-14 14:58:04 +00:00
dillon
7a2987cf94 Fix null-pointer dereference crash when the system is intentionally
run out of KVM through a mmap()/fork() bomb that allocates hundreds
    of thousands of vm_map_entry structures.

    Add panic to make null-pointer dereference crash a little more verbose.

    Add a new sysctl, vm.max_proc_mmap, which specifies the maximum number
    of mmap()'d spaces (discrete vm_map_entry's in the process).  The value
    defaults to around 9000 for a 128MB machine.  The test is scaled for the
    number of processes sharing a vmspace (aka linux threads).  Setting
    the value to 0 disables the feature.

PR: kern/16573
Approved by: jkh
2000-02-16 21:11:33 +00:00
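A hedged sketch of how such a sysctl knob is typically declared; the exact declaration and description string in the commit may differ:

        static int max_proc_mmap;
        SYSCTL_INT(_vm, OID_AUTO, max_proc_mmap, CTLFLAG_RW,
            &max_proc_mmap, 0, "Maximum number of mmap()'d regions per process");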
dg
fab6f30ed1 Fixed sign and overflow bugs that caused the allocation size of the kernel
malloc region (kmem_map) to be wrong and semi-random on systems with more
than 1GB of RAM. This is not a complete fix, but is sufficient for
machines with 4GB or less of memory. A complete fix will require some
changes to the getenv stuff so that 64bit values can be passed around.

NOT FIXED: machines with more than 4GB of RAM (e.g. some large Alphas)
since we're still using ints to hold some of the values.

Reviewed by:	bde
2000-01-28 04:04:58 +00:00
shin
cad2014b27 KAME netinet6 basic part (no IPsec, no V6 Multicast Forwarding, no UDP/TCP
for IPv6 yet)

With this patch, you can assign IPv6 addresses automatically, and can reply to
IPv6 ping.

Reviewed by: freebsd-arch, cvs-committers
Obtained from: KAME project
1999-11-22 02:45:11 +00:00
phk
322edeeaa9 Before we start to mess with the VFS name-cache clean things up a little bit:
Isolate the namecache in its own file, and give it a dedicated malloc type.
1999-10-03 12:18:29 +00:00
phk
86867c69d2 KASSERT that we cannot use M_WAITOK in interrupt context.
Reviewed by:	 bde
1999-09-19 08:40:11 +00:00
bde
ec1ed330ea Get rid of MALLOC_INSTANTIATE and MALLOC_MAKE_TYPE(). Just handle the 3
malloc types declared in <sys/malloc.h> like other global malloc types.
1999-09-11 16:41:39 +00:00
peter
3b842d34e8 $Id$ -> $FreeBSD$ 1999-08-28 01:08:13 +00:00
msmith
af7615d39a Move the initialisation/tuning of nmbclusters from param.c/machdep.c
into uipc_mbuf.c.  This reduces three sets of identical tunable code to
one set, and puts the initialisation with the mbuf code proper.

Make NMBUFs tunable as well.

Move the nmbclusters sysctl here as well.

Move the initialisation of maxsockets from param.c to uipc_socket2.c,
next to its corresponding sysctl.

Use the new tunable macros for the kern.vm.kmem.size tunable (this should have
been in a separate commit, whoops).
1999-07-05 08:52:54 +00:00
bde
c82c114e2f Fixed corruption of the kmemstatistics list. The first malloc()
with malloc type at the tail of the list changed the list from
linear to circular.  This seemed to cause surprisingly few problems,
but it now causes weird output from `vmstat -m', probably because
a more important malloc type is now at the tail of the list.

Fix it by abusing ks_limit instead of ks_next as a flag for being
on the list.  Don't forget to clear the flag when a malloc type is
uninit'ed.  Uninit'ing is still fundamentally broken -- it loses
history.
1999-05-12 11:11:27 +00:00
peter
73556bfee1 Add sufficient braces to keep egcs happy about potentially ambiguous
if/else nesting.
1999-05-06 18:13:11 +00:00
dillon
a40e0249d4 Fix warnings in preparation for adding -Wall -Wcast-qual to the
kernel compile
1999-01-27 21:50:00 +00:00
msmith
b8598e583e Allow VM_KMEM_SIZE to be tuned from the kernel environment. This tuning
value *completely* overrides any value precalculated by the kernel.
1999-01-21 21:54:32 +00:00
dillon
df24433bbe This is a rather large commit that encompasses the new swapper,
changes to the VM system to support the new swapper, VM bug
    fixes, several VM optimizations, and some additional revamping of the
    VM code.  The specific bug fixes will be documented with additional
    forced commits.  This commit is somewhat rough in regards to code
    cleanup issues.

Reviewed by:	"John S. Dyson" <root@dyson.iquest.net>, "David Greenman" <dg@root.com>
1999-01-21 08:29:12 +00:00
eivind
89e1199534 KNFize, by bde. 1999-01-10 01:58:29 +00:00
eivind
a8dc66f457 Split DIAGNOSTIC -> DIAGNOSTIC, INVARIANTS, and INVARIANT_SUPPORT as
discussed on -hackers.

Introduce 'KASSERT(assertion, ("panic message", args))' for simple
check + panic.

Reviewed by:	msmith
1999-01-08 17:31:30 +00:00
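A one-line example of the new macro; the condition and message are illustrative:

        KASSERT(addr != NULL, ("free: NULL address"));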
peter
0f95240d58 Have MALLOC_DECLARE() initialize malloc types explicitly, and have them
removed at module unload (if in a module of course).
However; this introduces a new dependency on <sys/kernel.h> for things
that use MALLOC_DECLARE().  Bruce told me it is better to add sys/kernel.h
to the handful of files that need it rather than add an extra include to
sys/malloc.h for kernel compiles. Updates to follow in subsequent commits.
1998-11-10 08:46:24 +00:00
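A sketch of the resulting declare/define split; the type name and descriptions are illustrative, and the <sys/kernel.h> include reflects the dependency noted in the commit:

        /* in a shared header, for consumers of the type: */
        #include <sys/kernel.h>
        #include <sys/malloc.h>
        MALLOC_DECLARE(M_FOO);

        /* in exactly one .c file (or module): */
        MALLOC_DEFINE(M_FOO, "foo", "foo structures");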
phk
13c66194f4 Nitpicking and dusting performed on a train. Removes trivial warnings
about unused variables, labels and other lint.
1998-10-25 17:44:59 +00:00
bde
9e27b29fba Use [u]intptr_t instead of [u_]long for casts between pointers and
integers.  Don't forget to cast to (void *) as well.
1998-08-16 01:21:52 +00:00
bde
57b661836a Fixed printf format errors. 1998-07-29 17:38:14 +00:00
julian
10c5ccc30a Reviewed by: dyson@freebsd.org (john Dyson), dg@root.com (david greenman)
Submitted by:	Kirk McKusick (mcKusick@mckusick.com)
Obtained from:  Whistle development tree
1998-03-08 09:59:44 +00:00
dyson
7a5637f439 Try to dynamically size the VM_KMEM_SIZE (but it is still able to be overridden
in the same way as before.)  I had problems with the system properly
handling the number of vnodes when there is a lot of system memory, and the
default VM_KMEM_SIZE.  Two new options "VM_KMEM_SIZE_SCALE" and
"VM_KMEM_SIZE_MAX" have been added to support better auto-sizing for systems
with greater than 128MB.
1998-02-23 07:41:23 +00:00
eivind
d7a6ab2803 Staticize. 1998-02-09 06:11:36 +00:00
eivind
4547a09753 Back out DIAGNOSTIC changes. 1998-02-06 12:14:30 +00:00
dyson
ebccbfc1ff 1) Start using a cleaner and more consistent page allocator instead
of the various ad-hoc schemes.
2)	When bringing in UPAGES, the pmap code needs to do another vm_page_lookup.
3)	When appropriate, set the PG_A or PG_M bits a-priori to both avoid some
	processor errata, and to minimize redundant processor updating of page
	tables.
4)	Modify pmap_protect so that it can only remove permissions (as it
	originally supported.)  The additional capability is not needed.
5)	Streamline read-only to read-write page mappings.
6)	For pmap_copy_page, don't enable write mapping for source page.
7)	Correct and clean-up pmap_incore.
8)	Cluster initial kern_exec pagein.
9)	Removal of some minor lint from kern_malloc.
10)	Correct some ioopt code.
11)	Remove some dead code from the MI swapout routine.
12)	Correct vm_object_deallocate (to remove backing_object ref.)
13)	Fix dead object handling, that had problems under heavy memory load.
14)	Add minor vm_page_lookup improvements.
15)	Some pages are not in objects, and make sure that the vm_page.c can
	properly support such pages.
16)	Add some more page deficit handling.
17)	Some minor code readability improvements.
1998-02-05 03:32:49 +00:00
eivind
c552a9a1c3 Turn DIAGNOSTIC into a new-style option. 1998-02-04 22:34:03 +00:00
dyson
197bd655c4 VM level code cleanups.
1)	Start using TSM.
	Struct procs continue to point to upages structure, after being freed.
	Struct vmspace continues to point to pte object and kva space for kstack.
	u_map is now superfluous.
2)	vm_map's don't need to be reference counted.  They always exist either
	in the kernel or in a vmspace.  The vmspaces are managed by reference
	counts.
3)	Remove the "wired" vm_map nonsense.
4)	No need to keep a cache of kernel stack kva's.
5)	Get rid of strange looking ++var, and change to var++.
6)	Change more data structures to use our "zone" allocator.  Added
	struct proc, struct vmspace and struct vnode.  This saves a significant
	amount of kva space and physical memory.  Additionally, this enables
	TSM for the zone managed memory.
7)	Keep ioopt disabled for now.
8)	Remove the now bogus "single use" map concept.
9)	Use generation counts or id's for data structures residing in TSM, where
	it allows us to avoid unneeded restart overhead during traversals, where
	blocking might occur.
10)	Account better for memory deficits, so the pageout daemon will be able
	to make enough memory available (experimental.)
11)	Fix some vnode locking problems. (From Tor, I think.)
12)	Add a check in ufs_lookup, to avoid lots of unneeded calls to bcmp.
	(experimental.)
13)	Significantly shrink, cleanup, and make slightly faster the vm_fault.c
	code.  Use generation counts, get rid of unneeded collapse operations,
	and clean up the cluster code.
14)	Make vm_zone more suitable for TSM.

This commit is partially as a result of discussions and contributions from
other people, including DG, Tor Egge, PHK, and probably others that I
have forgotten to attribute (so let me know, if I forgot.)

This is not the infamous, final cleanup of the vnode stuff, but a necessary
step.  Vnode mgmt should be correct, but things might still change, and
there is still some missing stuff (like ioopt, and physical backing of
non-merged cache files, debugging of layering concepts.)
1998-01-22 17:30:44 +00:00
dyson
0f54a3a55f Some fixes from John Hood:
1) Fix the initialization of malloc structure that changed
		due to perf opt.
	2) Remove unneeded include.
	3) An initialization assert added to malloc.
Submitted by:	John Hood <cgull@smoke.marlboro.vt.us>
1997-12-05 05:36:58 +00:00
phk
349f4dfe03 Remove the long description from the in-kernel datastructure.
Put a magic field in there instead, to help catch uninitialized
malloc types.
1997-10-28 19:01:02 +00:00
phk
36e7a51ea1 Last major round (Unless Bruce thinks of something :-) of malloc changes.
Distribute all but the most fundamental malloc types.  This time I also
remembered the trick to making things static:  Put "static" in front of
them.

A couple of finer points by:	bde
1997-10-12 20:26:33 +00:00
phk
0b60d9f0b1 Freeing with unknown type is a panic kind of thing. 1997-10-11 13:13:09 +00:00
phk
05d08b5a3d Remove a debug printf entirely. 1997-10-11 10:49:43 +00:00
peter
05925137bf Disable an extremely annoying printf. 1997-10-11 10:41:44 +00:00
phk
cfddde534b Rename "struct kmemstats" to "struct malloc_type" it makes more sense now.
Fix type argument to hashinit() and phashinit()
1997-10-10 18:14:23 +00:00
phk
7c874af4b8 Make malloc more extensible. The malloc type is now a pointer to
the struct kmemstats that describes the type.

This allows subsystems to declare their malloc types locally
and <sys/malloc.h> doesn't need to be tweaked every time somebody
gets an idea.  You can even have a type local to a lkm...

I don't know if we really need the longdesc, comments welcome.

TODO: There is a single nit in ext2fs, that will be fixed later,
and I intend to remove all unused malloc types and distribute
the rest closer to their use.
1997-10-10 14:06:34 +00:00
bde
64d0bec2a8 Fixed staticization. buckets[] was staticized but was still declared
extern in <sys/malloc.h> and it should not have been staticized for
the !(KMEMSTATS || DIAGNOSTIC) case.

Fixed the !(KMEMSTATS || DIAGNOSTIC) case.  The MALLOC() and FREE()
macros are evil, but code generally doesn't allow for this and some code
involving else clauses did not compile.

Finished staticization.
1997-09-16 13:52:04 +00:00
bde
6ffb8bf9af Removed unused #includes. 1997-09-02 20:06:59 +00:00
dyson
8fa8ae3d0d Get rid of the ad-hoc memory allocator for vm_map_entries, in lieu of
a simple, clean zone type allocator.  This new allocator will also be
used for machine dependent pmap PV entries.
1997-08-05 00:02:08 +00:00
dg
e95a95cf3a Killed bogus kernacc() call in malloc() DIAGNOSTIC code. kernacc(), by
its nature, locks the kernel_map, and this is deadly if kernel_map had
been locked prior to a (net) interrupt.
1997-06-24 09:41:00 +00:00
peter
94b6d72794 Back out part 1 of the MCFH that changed $Id$ to $FreeBSD$. We are not
ready for it yet.
1997-02-22 09:48:43 +00:00
jkh
808a36ef65 Make the long-awaited change from $Id$ to $FreeBSD$
This will make a number of things easier in the future, as well as (finally!)
avoiding the Id-smashing problem which has plagued developers for so long.

Boy, I'm glad we're not using sup anymore.  This update would have been
insane otherwise.
1997-01-14 07:20:47 +00:00
phk
0a7364f8a4 The check for multiply-freed items was bogus; fixed. 1996-08-04 20:08:48 +00:00
dyson
3130798e2b Minor performance improvement to kern_malloc.c that increases the
probability of reuse of recently freed memory.  This improves cache
hit stats on cached memory, and improves at least fork speed consistency.
1996-05-18 22:33:13 +00:00
wollman
a28a8481af Allocate mbufs from a separate submap so that NMBCLUSTERS works as
expected.
1996-05-10 19:28:55 +00:00
phk
5a6fb3a7da removed:
CLBYTES PD_SHIFT PGSHIFT NBPG PGOFSET CLSIZELOG2 CLSIZE pdei()
        ptei() kvtopte() ptetov() ispt() ptetoav() &c &c
new:
        NPDEPG

Major macro cleanup.
1996-05-02 14:21:14 +00:00
phk
779840c457 First pass at cleaning up macros relating to pages, clusters and all that. 1996-05-02 10:43:17 +00:00
dg
33311f68ed Implement what I mentioned in rev 1.18: limit per-bucket allocations to
60% of physical memory or 60% of malloc area size, whichever is smaller.
1996-01-29 11:12:37 +00:00
dg
bc2512f99f Fixed two bugs in the calculation of the malloc area (kmem_map) size:
1) The calculation didn't account for NMBCLUSTERS, so if a large number of
   clusters was specified, it would leave little or no space for kernel
   malloc.
2) It was bogusly restricted to v_page_count. This doesn't take into
   account the sparseness of the malloc area and would have caused
   problems on machines with small amounts of memory. It should probably
   instead be changed to set the malloc limit to be constrained by
   the amount of memory, but I didn't do this.
1996-01-29 09:58:34 +00:00
phk
63ec2c0ae9 A Major staticize sweep. Generates a couple of warnings that I'll deal
with later.
A number of unused vars removed.
A number of unused procs removed or #ifdefed.
1995-12-14 08:32:45 +00:00
dg
c30f46c534 Untangled the vm.h include file spaghetti. 1995-12-07 12:48:31 +00:00
bde
338d20055d Finished (?) cleaning up sysinit stuff. 1995-12-02 17:11:20 +00:00
dg
573c688a68 Fixed init functions argument type - caddr_t -> void *. Fixed a couple of
compiler warnings.
1995-09-09 18:10:37 +00:00
julian
ebb726ec45 Reviewed by: julian with quick glances by bruce and others
Submitted by:	terry (terry lambert)
This is  a composite of 3 patch sets submitted by terry.
they are:
New low-level init code that supports loadable modules better
some cleanups in the namei code to help terry in 16-bit character support
some changes to the mount-root code to make it a little more
modular..

NOTE: mounting root off cdrom or NFS MIGHT be broken as I haven't been able
to test those cases..

certainly mounting root of disk still works just fine..
mfs should work but is untested. (tomorrow's task)

The low level init stuff includes a total rewrite of init_main.c
to make it possible for new modules to have an init phase by simply
adding an entry to a TEXT_SET (or is it DATA_SET) list. thus a new module can
be added to the kernel without editing any other files other than the
'files' file.
1995-08-28 09:19:25 +00:00
rgrimes
c86f0c7a71 Remove trailing whitespace. 1995-05-30 08:16:23 +00:00
dg
2c660e9070 Make vegetarian and animal rights people happy and use 0xdeadc0de instead
of 0xdeadbeef as the fill pattern. Decreased MAX_COPY to 64 (256 was a bit
overzealous in most cases).
1995-04-16 11:25:15 +00:00
dg
9cd78521d8 Removed redundant newlines that were in some panic strings. 1995-03-19 14:29:26 +00:00
dg
2ff0acd8ef Added some additional DIAGNOSTIC code that makes sure that freed
memory addresses and types are with the valid range. Increased
MAX_COPY to 256 (used to verify no freed memory use with DIAGNOSTIC).
1995-03-11 22:28:16 +00:00
dg
f58a2f6357 Calling semantics for kmem_malloc() have been changed...and the third
argument is now more than just a single flag. (kern_malloc.c)
Used new M_KERNEL value for socket allocations that previously were
"M_NOWAIT". Note that this will change when we clean up the M_ namespace
mess.

Submitted by:	John Dyson
1995-02-02 08:49:08 +00:00
dg
1707d41102 These changes embody the support of the fully coherent merged VM buffer cache,
much higher filesystem I/O performance, and much better paging performance. It
represents the culmination of over 6 months of R&D.

The majority of the merged VM/cache work is by John Dyson.

The following highlights the most significant changes. Additionally, there are
(mostly minor) changes to the various filesystem modules (nfs, msdosfs, etc) to
support the new VM/buffer scheme.

vfs_bio.c:
Significant rewrite of most of vfs_bio to support the merged VM buffer cache
scheme.  The scheme is almost fully compatible with the old filesystem
interface.  Significant improvement in the number of opportunities for write
clustering.

vfs_cluster.c, vfs_subr.c
Upgrade and performance enhancements in vfs layer code to support merged
VM/buffer cache.  Fixup of vfs_cluster to eliminate the bogus pagemove stuff.

vm_object.c:
Yet more improvements in the collapse code.  Elimination of some windows that
can cause list corruption.

vm_pageout.c:
Fixed it, it really works better now.  Somehow in 2.0, some "enhancements"
broke the code.  This code has been reworked from the ground-up.

vm_fault.c, vm_page.c, pmap.c, vm_object.c
Support for small-block filesystems with merged VM/buffer cache scheme.

pmap.c vm_map.c
Dynamic kernel VM size, now we dont have to pre-allocate excessive numbers of
kernel PTs.

vm_glue.c
Much simpler and more effective swapping code.  No more gratuitous swapping.

proc.h
Fixed the problem that the p_lock flag was not being cleared on a fork.

swap_pager.c, vnode_pager.c
Removal of old vfs_bio cruft to support the past pseudo-coherency.  Now the
code doesn't need it anymore.

machdep.c
Changes to better support the parameter values for the merged VM/buffer cache
scheme.

machdep.c, kern_exec.c, vm_glue.c
Implemented a separate submap for temporary exec string space and another one
to contain process upages. This eliminates all map fragmentation problems
that previously existed.

ffs_inode.c, ufs_inode.c, ufs_readwrite.c
Changes for merged VM/buffer cache.  Add "bypass" support for sneaking in on
busy buffers.

Submitted by:	John Dyson and David Greenman
1995-01-09 16:06:02 +00:00
dg
18911700cc Changed splimp to splhigh to close a potential hole that could lead
to corrupted malloc data structures caused by frees occurring at other
than splimp.
1994-12-17 04:04:42 +00:00
dg
8ec51aef1b Got rid of map.h. It's a leftover from the rmap code, and we use rlists.
Changed swapmap into swaplist.
1994-10-09 07:35:18 +00:00
phk
c3e4945541 All of this is cosmetic: prototypes, #includes, printfs and so on. Makes
GCC a lot more silent.
1994-10-02 17:35:40 +00:00
dg
8d205697aa Added $Id$ 1994-08-02 07:55:43 +00:00
rgrimes
2469c867a1 The big 4.4BSD Lite to FreeBSD 2.0.0 (Development) patch.
Reviewed by:	Rodney W. Grimes
Submitted by:	John Dyson and David Greenman
1994-05-25 09:21:21 +00:00
rgrimes
8fb65ce818 BSD 4.4 Lite Kernel Sources 1994-05-24 10:09:53 +00:00