Commit Graph

1337 Commits

alc
0ee9c9a99d o Lock accesses to the active page queue in vm_pageout_scan() and
vm_pageout_page_stats().
2002-07-20 18:45:25 +00:00
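
A minimal sketch of the locking pattern this run of queue-locking commits applies; the helper below is illustrative, not code from the commit:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /*
     * Hypothetical helper: any code path that manipulates the active or
     * inactive page queues now brackets the access with the page queue
     * lock instead of relying on Giant.
     */
    static void
    example_deactivate(vm_page_t m)
    {
            vm_page_lock_queues();
            vm_page_deactivate(m);      /* queue manipulation under the lock */
            vm_page_unlock_queues();
    }
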
alc
478ee061bb o Lock page queue accesses by vm_page_cache() in vm_contig_launder().
o Micro-optimize the control flow in vm_contig_launder().
2002-07-20 06:11:16 +00:00
alc
9b15115842 o Remove dead and/or unused code. 2002-07-20 05:06:20 +00:00
peter
cc7b2e4248 Infrastructure tweaks to allow having both an Elf32 and an Elf64 executable
handler in the kernel at the same time.  Also, allow for the
exec_new_vmspace() code to build a different sized vmspace depending on
the executable environment.  This is a big help for execing i386 binaries
on ia64.   The ELF exec code grows the ability to map partial pages when
there is a page size difference, eg: emulating 4K pages on 8K or 16K
hardware pages.

Flesh out the i386 emulation support for ia64.  At this point, the only
binary that I know of that fails is cvsup, because the cvsup runtime
tries to execute code in pages not marked executable.

Obtained from:  dfr (mostly, many tweaks from me).
2002-07-20 02:56:12 +00:00
peter
febe98ab98 Set P_NOLOAD on the pagezero kthread so that it doesn't artificially skew
the loadav.  This is not real load.  If you have a nice process running in
the background, pagezero may sit in the run queue for ages and add one to
the loadav, thereby affecting other scheduling decisions.
2002-07-19 21:06:01 +00:00
alc
083a6fe2b0 o Duplicate an odd side-effect of vm_page_wire() in vm_page_alloc()
when VM_ALLOC_WIRED is specified: set the PG_MAPPED bit in flags.
 o In both vm_page_wire() and vm_page_alloc() add a comment saying
   that setting PG_MAPPED does not belong there.
2002-07-19 03:33:04 +00:00
alc
1ba951badc o Remove the acquisition and release of Giant from the idle priority thread
that pre-zeroes free pages.
 o Remove GIANT_REQUIRED from some low-level page queue functions.  (Instead
   assertions on the page queue lock are being added to the higher-level
   functions, like vm_page_wire(), etc.)

In collaboration with:	peter
2002-07-18 17:40:07 +00:00
markm
8f4d7f20c8 Void functions cannot return values. 2002-07-18 15:53:11 +00:00
peter
5d00cd5ad7 (VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE may not fit in an integer.
Use lmin(long, long), not min(u_int, u_int).  This is a problem here on
ia64, which has *way* more than 2^32 pages of KVA.  281474976710655 pages
to be precise.
2002-07-18 10:28:00 +00:00
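
A userland illustration of the truncation being fixed here; the constants are illustrative, and the demo assumes a 64-bit long as on ia64:

    #include <stdio.h>

    static unsigned int u_min(unsigned int a, unsigned int b) { return (a < b ? a : b); }
    static long l_min(long a, long b) { return (a < b ? a : b); }

    int
    main(void)
    {
            long npages = 1L << 32;     /* a page count needing more than 32 bits */
            long limit = 1000000L;

            /* min(u_int, u_int): npages is silently truncated to 0, so the
             * "minimum" comes out as 0 instead of limit. */
            printf("min:  %u\n", u_min((unsigned int)npages, (unsigned int)limit));
            /* lmin(long, long) keeps the full width and gets it right. */
            printf("lmin: %ld\n", l_min(npages, limit));
            return (0);
    }
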
alc
bf14f2641b o Introduce an argument, VM_ALLOC_WIRED, that requests vm_page_alloc()
to return a wired page.
 o Use VM_ALLOC_WIRED within Alpha's pmap_growkernel().  Also, because
   Alpha's pmap_growkernel() calls vm_page_alloc() from within a critical
   section, specify VM_ALLOC_INTERRUPT instead of VM_ALLOC_SYSTEM.  (Only
   VM_ALLOC_INTERRUPT is implemented entirely with a spin mutex.)
 o Assert that the page queues mutex is held in vm_page_wire()
   on Alpha, just like the other platforms.
2002-07-18 04:08:10 +00:00
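
A sketch of the new interface at a hypothetical call site; the flag combination follows the commit's reasoning that only VM_ALLOC_INTERRUPT is safe inside a critical section:

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /*
     * Hypothetical call site: ask vm_page_alloc() to hand back a page
     * that is already wired, so the caller no longer needs to take the
     * page queues lock just to call vm_page_wire() afterwards.
     */
    static vm_page_t
    example_alloc_wired(vm_object_t object, vm_pindex_t pindex)
    {
            return (vm_page_alloc(object, pindex,
                VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED));
    }
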
alc
fe71e4fa20 o Use vm_pageq_remove_nowakeup() and vm_pageq_enqueue() in
vm_page_zero_idle() instead of partially duplicated implementations.
   In particular, this change guarantees that the number of free pages
   in the free queue(s) matches the global free page count when Giant
   is released.

Submitted by:	peter (via his p4 "pmap" branch)
2002-07-16 19:39:40 +00:00
alc
dd3f128981 o Create vm_contig_launder() to replace code that appears twice
in contigmalloc1().
2002-07-15 06:33:31 +00:00
alc
ec8a106e8a o Lock page queue accesses by vm_page_wire() that aren't
within a critical section.
 o Assert that the page queues lock is held in vm_page_wire()
   except on Alpha.
2002-07-14 23:51:55 +00:00
alc
13868892db o Lock page queue accesses by vm_page_wire(). 2002-07-14 19:36:15 +00:00
alc
5258ed77bb o Lock page queue accesses by vm_page_unmanage().
o Assert that the page queues lock is held in vm_page_unmanage().
2002-07-13 23:55:30 +00:00
alc
828e129a10 o Complete the locking of page queue accesses by vm_page_unwire().
o Assert that the page queues lock is held in vm_page_unwire().
 o Make vm_page_lock_queues() and vm_page_unlock_queues() visible
   to kernel loadable modules.
2002-07-13 20:55:21 +00:00
alc
ee4b41f6c5 o Lock some page queue accesses, in particular, those by vm_page_unwire(). 2002-07-13 19:24:04 +00:00
alc
80b0a79553 o Assert GIANT_REQUIRED on system maps in _vm_map_lock(),
_vm_map_lock_read(), and _vm_map_trylock().  Submitted by: tegge
 o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
   (This clears the way for exec_map accesses to move outside of Giant.
   The exec_map is not a system map.)
 o Remove some premature MPSAFE comments.

Reviewed by:	tegge
2002-07-12 23:20:06 +00:00
dillon
dc5d856e71 Re-enable the idle page-zeroing code. Remove all IPIs from the idle
page-zeroing code as well as from the general page-zeroing code and use a
lazy tlb page invalidation scheme based on a callback made at the end
of mi_switch.

A number of people came up with this idea at the same time so credit
belongs to Peter, John, and Jake as well.

Two-way SMP buildworld -j 5 tests (second run, after stabilization)
    2282.76 real  2515.17 user  704.22 sys	before peter's IPI commit
    2266.69 real  2467.50 user  633.77 sys	after peter's commit
    2232.80 real  2468.99 user  615.89 sys	after this commit

Reviewed by:	peter, jhb
Approved by:	peter
2002-07-12 20:17:06 +00:00
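
The shape of the lazy scheme, very roughly; every name below is illustrative, since the committed hook is simply a callback made at the end of mi_switch():

    /*
     * Illustrative only: instead of broadcasting an IPI each time the
     * page-zeroing code remaps a free page, remember that this CPU's
     * TLB may hold a stale entry and flush it lazily at the next
     * context switch.
     */
    static int      tlb_flush_pending;      /* hypothetical per-CPU flag */

    static void
    example_switch_callback(void)           /* end of mi_switch() */
    {
            if (tlb_flush_pending) {
                    tlb_flush_pending = 0;
                    invltlb();              /* flush this CPU only; no IPIs */
            }
    }
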
peter
5f510a2bac Avoid a vm_page_lookup() - that uses a spinlock protected hash. We can
just use the object's memq for our nefarious purposes.
2002-07-12 04:38:51 +00:00
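
The replacement pattern, sketched with a hypothetical helper; walking the object's memq touches no hash and therefore no spinlock per page:

    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>

    /*
     * Visit every resident page of an object via its memq instead of
     * calling vm_page_lookup() once per page index.
     */
    static void
    example_visit_resident_pages(vm_object_t object)
    {
            vm_page_t m;

            TAILQ_FOREACH(m, &object->memq, listq) {
                    /* ... operate on m ... */
            }
    }
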
alc
f5dfaef158 o Lock some (unfortunately, not yet all) accesses to the page queues. 2002-07-12 03:17:22 +00:00
alc
41d34057a5 o Lock accesses to the page queues. 2002-07-12 02:55:55 +00:00
alc
9d3ad1a89f o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()
and kmem_free_wakeup().  Previously, kmem_free_wakeup() always
   called wakeup().  In general, no one was sleeping.
 o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c
   for use in vm_kern.c.
2002-07-11 02:39:24 +00:00
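
A sketch of the "needs wakeup" optimization on the freeing side; the field name follows the commit's description but is an assumption, not copied from the diff:

    /*
     * The sleeper records that a wakeup is wanted before sleeping; the
     * freeing side then calls wakeup() only if somebody asked for it,
     * instead of unconditionally as kmem_free_wakeup() used to.
     */
    static void
    example_free_side(vm_map_t map)
    {
            if (map->needs_wakeup) {        /* assumed field name */
                    map->needs_wakeup = FALSE;
                    vm_map_wakeup(map);     /* now exported from vm_map.c */
            }
    }
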
alc
692553edd7 o Lock accesses to the page queues in vm_object_terminate().
o Eliminate some unnecessary 64-bit arithmetic in vm_object_split().
2002-07-09 18:02:03 +00:00
peter
770030e679 vm_page_queue_free_mtx is a spin mutex, not a normal sleep mutex.
I do not know why this didn't panic my box, but I have most certainly
been using it:
peter@overcee[3:14pm]~src/sys/i386/i386-110> sysctl -a | grep zero
vm.stats.misc.zero_page_count: 2235
vm.stats.misc.cnt_prezero: 638951
vm.idlezero_enable: 1
vm.idlezero_maxrun: 16

Submitted by:	Tor.Egge@cvsup.no.freebsd.org
Approved by:	Tor's patches are never wrong. :-)
2002-07-08 23:12:37 +00:00
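
The distinction in code, as a sketch; a lock acquired where sleeping is forbidden must be initialized as a spin mutex:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx example_free_queue_mtx;

    static void
    example_init(void)
    {
            /* MTX_SPIN, not MTX_DEF: this lock is taken from contexts
             * (e.g. inside critical sections) where acquiring a sleep
             * mutex is illegal, which is the bug being fixed above. */
            mtx_init(&example_free_queue_mtx, "example free queue", NULL,
                MTX_SPIN);
    }
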
peter
94af63d292 Turn the zeroidle process off for SMP systems; there is still a possible
TLB problem when bouncing from one cpu to another (the original cpu will
not have purged its TLB if it simply went idle).

Pointed out by:	 Tor.Egge@cvsup.no.freebsd.org
Approved by:	Tor is never wrong. :-)
2002-07-08 23:09:11 +00:00
peter
62e40d1277 Add a special page zero entry point intended to be called via the single
threaded VM pagezero kthread outside of Giant.  For some platforms, this
is really easy since it can just use the direct mapped region.  For others,
IPI sending is involved or there are other issues, so grab Giant when
needed.

We still have preemption issues to deal with, but Alan Cox has an
interesting suggestion on how to minimize the problem on x86.

Use Luigi's hack for preserving the (lack of) priority.

Turn the idle zeroing back on since it can now actually do something useful
outside of Giant in many cases.
2002-07-08 04:24:26 +00:00
peter
61d745fe2c Avoid vm_page_lookup() [grabs a spinlock] and just process the upage
object memq instead.

Suggested by:	alc
2002-07-08 01:11:10 +00:00
peter
b73c441dad Collect all the (now equivalent) pmap_new_proc/pmap_dispose_proc/
pmap_swapin_proc/pmap_swapout_proc functions from the MD pmap code
and use a single equivalent MI version.  There are other cleanups
needed still.

While here, use the UMA zone hooks to keep a cache of preinitialized
proc structures handy, just like the thread system does.  This eliminates
one dependency on 'struct proc' being persistent even after being freed.
There are some comments about things that can be factored out into
ctor/dtor functions if it is worth it.  For now they are mostly just
doing statistics to get a feel of how it is working.
2002-07-07 23:05:27 +00:00
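
A condensed sketch of the UMA hooks mentioned above; the zone name and empty hook body are placeholders, and the init hook uses the uma_init signature of the era:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <vm/uma.h>

    static uma_zone_t example_proc_zone;

    /* Runs once when an item is first carved from a slab; expensive
     * one-time setup (and, for now, statistics) can live here so that
     * a freed proc comes back preinitialized. */
    static void
    example_proc_init(void *mem, int size)
    {
    }

    static void
    example_zone_setup(void)
    {
            example_proc_zone = uma_zcreate("EXAMPLEPROC", sizeof(struct proc),
                NULL, NULL, example_proc_init, NULL, UMA_ALIGN_PTR, 0);
    }
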
alc
ffff9f17c5 o Lock accesses to the free queue(s) in vm_page_zero_idle(). 2002-07-07 19:27:57 +00:00
alc
8c632c03ee o Traverse the object's memq rather than repeatedly calling vm_page_lookup()
in vm_object_split().
2002-07-07 06:01:25 +00:00
jeff
8d0c32ea13 - Hold a lock on the vnode acquired from the file table across the call to
vm_mmap() as well as the GETATTR etc.
 - If the handle is a vnode in vm_mmap() assert that it is locked.
 - Wiggle Giant around a little to account for the extra vnode operation.
2002-07-06 22:14:38 +00:00
gallatin
a1719914d4 Remove bogus vm_page_wakeup() in vm_page_cowfault() that will cause panics
in the zero-copy send path if a process attempts to write to a page
which is still in flight.

Reviewed by:	ken
2002-07-05 23:33:27 +00:00
jeff
28043f7b73 Fix a lock order reversal in uma_zdestroy. The uma_mtx needs to be held across
calls to zone_drain().

Noticed by:	scottl
2002-07-05 21:39:52 +00:00
alc
596ff14965 o Lock accesses to the free page queues in contigmalloc1(). 2002-07-05 06:43:32 +00:00
jeff
7b0eebbe58 Remove unnecessary includes. 2002-07-05 05:16:19 +00:00
alc
29ede8a9d4 o Resurrect vm_page_lock_queues(), vm_page_unlock_queues(), and the free
queue lock (revision 1.33 of vm/vm_page.c removed them).
 o Make the free queue lock a spin lock because it's sometimes acquired
   inside of a critical section.
2002-07-04 22:07:37 +00:00
julian
ad8d9d5fc8 A small cleanup. 2002-07-04 12:37:13 +00:00
julian
356fa88d49 Don't call the thread setup routines from here;
they are already called when uma calls thread_init()
2002-07-04 12:31:54 +00:00
alc
71b0d89518 o Make the reservation of KVA space for kernel map entries a function
of the KVA space's size in addition to the amount of physical memory
   and reduce it by a factor of two.

Under the old formula, our reservation amounted to one kernel map entry
per virtual page in the KVA space on a 4GB i386.
2002-07-03 19:16:37 +00:00
jeff
2faa149982 Actually use the fini callback.
Pointy hat to:	me :-(
Noticed By:	Julian
2002-07-03 00:30:51 +00:00
robert
20e451e8dc - Use (OFF_TO_IDX(off) - pi) instead of (OFF_TO_IDX(off - IDX_TO_OFF(pi))).
- Reformat a comment.
2002-07-01 14:14:07 +00:00
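
Both forms compute the same page index when off >= IDX_TO_OFF(pi); sketched self-contained with the conversion macros paraphrased, the rewrite subtracts in the page-index domain and never forms the intermediate byte offset:

    #include <stdint.h>

    typedef uint64_t vm_ooffset_t;  /* byte offsets: 64-bit */
    typedef uint64_t vm_pindex_t;   /* page indices: 64-bit */

    #define PAGE_SHIFT      12      /* illustrative */
    #define OFF_TO_IDX(off) ((vm_pindex_t)((off) >> PAGE_SHIFT))
    #define IDX_TO_OFF(idx) ((vm_ooffset_t)(idx) << PAGE_SHIFT)

    /* Before: widen pi back to a byte offset, subtract, shift down. */
    static vm_pindex_t
    index_old(vm_ooffset_t off, vm_pindex_t pi)
    {
            return (OFF_TO_IDX(off - IDX_TO_OFF(pi)));
    }

    /* After: one shift, then subtract in the page-index domain. */
    static vm_pindex_t
    index_new(vm_ooffset_t off, vm_pindex_t pi)
    {
            return (OFF_TO_IDX(off) - pi);
    }
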
alc
1968255d11 o Remove some long dead code: from revision 1.41 of vm/vm_pager.c
3+ years ago.
 o Remove some unused prototypes.
2002-07-01 02:38:05 +00:00
iedowse
0129c1a874 Change the type of `tscan' in vm_object_page_clean() to vm_pindex_t,
as it stores an absolute page index that may not fit in a vm_offset_t.
2002-06-29 20:04:38 +00:00
julian
aa2dc0a5d9 Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one cpu) by making ALL system calls optionally asynchronous.
to come: ia64 and power-pc patches, patches for gdb, test program (in tools)

Reviewed by:	Almost everyone who counts
	(at various times, peter, jhb, matt, alfred, mini, bernd,
	and a cast of thousands)

	NOTE: this is still Beta code, and contains lots of debugging stuff.
	expect slight instability in signals..
2002-06-29 17:26:22 +00:00
iedowse
e62ef89fc1 Avoid using the 64-bit vm_pindex_t in a few places where 64-bit
types are not required, as the overhead is unnecessary:

 o In the i386 pmap_protect(), `sindex' and `eindex' represent page
   indices within the 32-bit virtual address space.
 o In swp_pager_meta_build() and swp_pager_meta_ctl(), use a temporary
   variable to store the low few bits of a vm_pindex_t that gets used
   as an array index.
 o vm_uiomove() uses `osize' and `idx' for page offsets within a
   map entry.
 o In vm_object_split(), `idx' is a page offset within a map entry.
2002-06-26 20:32:51 +00:00
iedowse
c3868afc98 Use an explicit cast to avoid relying on sign extension to do the
right thing in code such as `vm_pindex_t x = ~SWAP_META_MASK'.

Reviewed by:	dillon
2002-06-26 19:18:14 +00:00
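
A small userland demonstration of why the cast matters; the mask value is illustrative:

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
            uint64_t x;

            /* With a signed mask, ~0x1f is the 32-bit int -32, and sign
             * extension happens to produce the intended 64-bit mask: */
            x = ~0x1f;
            printf("signed mask:   %016llx\n", (unsigned long long)x);

            /* With an unsigned mask the conversion zero-extends and the
             * top 32 bits are silently lost: */
            x = ~0x1fU;
            printf("unsigned mask: %016llx\n", (unsigned long long)x);

            /* The explicit cast works regardless of the constant's type: */
            x = ~(uint64_t)0x1fU;
            printf("explicit cast: %016llx\n", (unsigned long long)x);
            return (0);
    }
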
ken
0d3a835f3f At long last, commit the zero copy sockets code.
MAKEDEV:	Add MAKEDEV glue for the ti(4) device nodes.

ti.4:		Update the ti(4) man page to include information on the
		TI_JUMBO_HDRSPLIT and TI_PRIVATE_JUMBOS kernel options,
		and also include information about the new character
		device interface and the associated ioctls.

man9/Makefile:	Add jumbo.9 and zero_copy.9 man pages and associated
		links.

jumbo.9:	New man page describing the jumbo buffer allocator
		interface and operation.

zero_copy.9:	New man page describing the general characteristics of
		the zero copy send and receive code, and what an
		application author should do to take advantage of the
		zero copy functionality.

NOTES:		Add entries for ZERO_COPY_SOCKETS, TI_PRIVATE_JUMBOS,
		TI_JUMBO_HDRSPLIT, MSIZE, and MCLSHIFT.

conf/files:	Add uipc_jumbo.c and uipc_cow.c.

conf/options:	Add the 5 options mentioned above.

kern_subr.c:	Receive side zero copy implementation.  This takes
		"disposable" pages attached to an mbuf, gives them to
		a user process, and then recycles the user's page.
		This is only active when ZERO_COPY_SOCKETS is turned on
		and the kern.ipc.zero_copy.receive sysctl variable is
		set to 1.

uipc_cow.c:	Send side zero copy functions.  Takes a page written
		by the user and maps it copy on write and assigns it
		kernel virtual address space.  Removes copy on write
		mapping once the buffer has been freed by the network
		stack.

uipc_jumbo.c:	Jumbo disposable page allocator code.  This allocates
		(optionally) disposable pages for network drivers that
		want to give the user the option of doing zero copy
		receive.

uipc_socket.c:	Add kern.ipc.zero_copy.{send,receive} sysctls that are
		enabled if ZERO_COPY_SOCKETS is turned on.

		Add zero copy send support to sosend() -- pages get
		mapped into the kernel instead of getting copied if
		they meet size and alignment restrictions.

uipc_syscalls.c: Un-staticize some of the sf* functions so that they
		can be used elsewhere.  (uipc_cow.c)

if_media.c:	In the SIOCGIFMEDIA ioctl in ifmedia_ioctl(), avoid
		calling malloc() with M_WAITOK.  Return an error if
		the M_NOWAIT malloc fails.

		The ti(4) driver and the wi(4) driver, at least, call
		this with a mutex held.  This causes witness warnings
		for 'ifconfig -a' with a wi(4) or ti(4) board in the
		system.  (I've only verified for ti(4)).

ip_output.c:	Fragment large datagrams so that each segment contains
		a multiple of PAGE_SIZE amount of data plus headers.
		This allows the receiver to potentially do page
		flipping on receives.

if_ti.c:	Add zero copy receive support to the ti(4) driver.  If
		TI_PRIVATE_JUMBOS is not defined, it now uses the
		jumbo(9) buffer allocator for jumbo receive buffers.

		Add a new character device interface for the ti(4)
		driver for the new debugging interface.  This allows
		(a patched version of) gdb to talk to the Tigon board
		and debug the firmware.  There are also a few additional
		debugging ioctls available through this interface.

		Add header splitting support to the ti(4) driver.

		Tweak some of the default interrupt coalescing
		parameters to more useful defaults.

		Add hooks for supporting transmit flow control, but
		leave it turned off with a comment describing why it
		is turned off.

if_tireg.h:	Change the firmware rev to 12.4.11, since we're really
		at 12.4.11 plus fixes from 12.4.13.

		Add defines needed for debugging.

		Remove the ti_stats structure, it is now defined in
		sys/tiio.h.

ti_fw.h:	12.4.11 firmware.

ti_fw2.h:	12.4.11 firmware, plus selected fixes from 12.4.13,
		and my header splitting patches.  Revision 12.4.13
		doesn't handle 10/100 negotiation properly.  (This
		firmware is the same as what was in the tree previously,
		with the addition of header splitting support.)

sys/jumbo.h:	Jumbo buffer allocator interface.

sys/mbuf.h:	Add a new external mbuf type, EXT_DISPOSABLE, to
		indicate that the payload buffer can be thrown away /
		flipped to a userland process.

socketvar.h:	Add prototype for socow_setup.

tiio.h:		ioctl interface to the character portion of the ti(4)
		driver, plus associated structure/type definitions.

uio.h:		Change prototype for uiomoveco() so that we'll know
		whether the source page is disposable.

ufs_readwrite.c:Update for new prototype of uiomoveco().

vm_fault.c:	In vm_fault(), check to see whether we need to do a page
		based copy on write fault.

vm_object.c:	Add a new function, vm_object_allocate_wait().  This
		does the same thing that vm_object_allocate() does, except
		that it gives the caller the opportunity to specify whether
		it should wait on the uma_zalloc() of the object structure.

		This allows vm objects to be allocated while holding a
		mutex.  (Without generating WITNESS warnings.)

		vm_object_allocate() is implemented as a call to
		vm_object_allocate_wait() with the malloc flag set to
		M_WAITOK.

vm_object.h:	Add prototype for vm_object_allocate_wait().

vm_page.c:	Add page-based copy on write setup, clear and fault
		routines.

vm_page.h:	Add page based COW function prototypes and variable in
		the vm_page structure.

Many thanks to Drew Gallatin, who wrote the zero copy send and receive
code, and to all the other folks who have tested and reviewed this code
over the years.
2002-06-26 03:37:47 +00:00
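
From the vm_object.c notes above, the relationship between the two allocators reduces to a one-line wrapper; this is a paraphrase of the description, not the committed diff:

    /*
     * vm_object_allocate() keeps its old behavior by delegating to the
     * new allocator with the malloc flag set to M_WAITOK; callers that
     * hold a mutex can pass M_NOWAIT to vm_object_allocate_wait()
     * instead and check for NULL.
     */
    vm_object_t
    vm_object_allocate(objtype_t type, vm_pindex_t size)
    {
            return (vm_object_allocate_wait(type, size, M_WAITOK));
    }
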
dillon
371cb8c6de Enforce RLIMIT_VMEM on growable mappings (aka the primary stack or any
MAP_STACK mapping).

Suggested by:	alc
2002-06-26 03:13:46 +00:00
dillon
ff4bf46648 Part I of RLIMIT_VMEM implementation. Implement core functionality for
a new resource limit that covers a process's entire VM space, including
mmap()'d space.

(Part II will be additional code to check RLIMIT_VMEM during exec() but it
needs more fleshing out).

PR:		kern/18209
Submitted by:	Andrey Alekseyev <uitm@zenon.net>, Dmitry Kim <jason@nichego.net>
MFC after:	7 days
2002-06-26 00:29:28 +00:00
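
Schematically, the core check looks something like the following; the placement and field spellings are assumptions:

    /*
     * Hypothetical enforcement point: refuse to grow the process's
     * address space past the new limit.  map->size is the total
     * mapped size in bytes.
     */
    static int
    example_check_vmem(vm_map_t map, vm_size_t grow_amount)
    {
            if (map->size + grow_amount >
                curproc->p_rlimit[RLIMIT_VMEM].rlim_cur)
                    return (KERN_NO_SPACE);
            return (KERN_SUCCESS);
    }
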
iedowse
0f26e30dae Complete the initial set of VM changes required to support full
64-bit file sizes. This step simply addresses the remaining overflows,
and does not attempt to optimise performance. The details are:

 o Use a 64-bit type for the vm_object `size' and the size argument
   to vm_object_allocate().
 o Use the correct type for index variables in dev_pager_getpages(),
   vm_object_page_clean() and vm_object_page_remove().
 o Avoid an overflow in the i386 pmap_object_init_pt().
2002-06-25 22:14:06 +00:00
jeff
7083248242 Turn VM_ALLOC_ZERO into a flag.
Submitted by:	tegge
Reviewed by:	dillon
2002-06-25 22:01:12 +00:00
jeff
e9c6c8e0fd Reduce the amount of code that runs with the zone lock held in slab_zalloc().
This allows us to run the zone initialization functions without any locks held.
2002-06-25 21:04:50 +00:00
alc
698aabc226 o Eliminate vmspace::vm_minsaddr. It's initialized but never used.
o Replace stale comments in vmspace by "const until freed" annotations
   on some fields.
2002-06-25 18:14:38 +00:00
alc
1801abed32 o Remove GIANT_REQUIRED from kmem_alloc_pageable(), kmem_alloc_nofault(),
and kmem_free().  (Annotate as MPSAFE.)
 o Remove incorrect casts from kmem_alloc_pageable() and kmem_alloc_nofault().
2002-06-23 18:07:40 +00:00
alc
4bc342b553 o Remove the unnecessary acquisition and release of Giant around fdrop()
in mmap(2).
2002-06-23 01:48:22 +00:00
alc
61c5eca9e3 o Reduce the scope of Giant in vm_mmap() to just the code that manipulates
a vnode.  (Thus, MAP_ANON and MAP_STACK never acquire Giant.)
2002-06-22 19:13:56 +00:00
alc
411b7ea1b5 o Replace mtx_assert(&Giant, MA_OWNED) in dev_pager_alloc()
with the acquisition and release of Giant.  (Annotate as MPSAFE.)
 o Reorder the sanity checks in dev_pager_alloc() to reduce
   the time that Giant is held.
2002-06-22 18:36:51 +00:00
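
The recurring transformation in this group of Giant commits, sketched with placeholder functions:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    /* Before: the function asserted that its caller held Giant. */
    static void
    example_alloc_old(void)
    {
            GIANT_REQUIRED;
            /* ... pager state manipulation ... */
    }

    /* After: MPSAFE from the caller's perspective; Giant is taken
     * internally, and only around the code that still needs it. */
    static void
    example_alloc_new(void)
    {
            mtx_lock(&Giant);
            /* ... pager state manipulation ... */
            mtx_unlock(&Giant);
    }
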
alc
173c0403c2 o In vm_map_insert(), replace GIANT_REQUIRED by the acquisition and
release of Giant around the direct manipulation of the vm_object and
   the optional call to pmap_object_init_pt().
 o In vm_map_findspace(), remove GIANT_REQUIRED.  Instead, acquire and
   release Giant around the occasional call to pmap_growkernel().
 o In vm_map_find(), remove GIANT_REQUIRED.
2002-06-22 17:47:12 +00:00
alc
ef43ad02fe o Replace GIANT_REQUIRED in swap_pager_alloc() by the acquisition and
release of Giant.  (Annotate as MPSAFE.)
2002-06-22 08:03:21 +00:00
alc
3e6234b7f8 o Remove GIANT_REQUIRED from phys_pager_alloc(). If handle isn't NULL,
acquire and release Giant.  If handle is NULL, Giant isn't needed.
 o Annotate phys_pager_alloc() and phys_pager_dealloc() as MPSAFE.
2002-06-22 07:54:42 +00:00
alc
b81ea3e3e5 o Replace GIANT_REQUIRED in vnode_pager_alloc() by the acquisition and
release of Giant.  (Annotate as MPSAFE.)
 o Also, in vnode_pager_alloc(), remove an unnecessary re-initialization
   of struct vm_object::flags and move a statement that is duplicated
   in both branches of an if-else.
2002-06-22 07:28:06 +00:00
alc
54e88d2334 o Remove GIANT_REQUIRED from vslock().
o Annotate kernacc(), useracc(), and vslock() as MPSAFE.

Motivated by:	alfred
2002-06-22 01:26:02 +00:00
alc
eacb69b019 o Remove GIANT_REQUIRED from vm_map_stack(). 2002-06-21 06:03:47 +00:00
alc
569c4cbc61 o Remove GIANT_REQUIRED from vm_pager_allocate() and vm_pager_deallocate(). 2002-06-21 05:04:56 +00:00
alc
c2afd32ef7 o Remove an incorrect cast from obreak(). This cast would,
for example, break an sbrk(>=4GB) on 64-bit architectures
   even if the resource limit allowed it.
 o Correct an off-by-one error.
 o Correct a spelling error in a comment.
 o Reorder an && expression so that the commonly FALSE expression
   comes first.

Submitted by:	bde (bullets 1 and 2)
2002-06-20 18:38:28 +00:00
alc
b4762c84fb o Acquire and release the vm_map lock instead of Giant in obreak().
Consequently, use vm_map_insert() and vm_map_delete(), which expect
   the vm_map to be locked, instead of vm_map_find() and vm_map_remove(),
   which do not.
2002-06-20 02:04:55 +00:00
jeff
4a3cef9fbf - Move the computation of pflags out of the page allocation loop in
kmem_malloc()
- zero fill pages if PG_ZERO bit is not set after allocation in kmem_malloc()

Suggested by: alc, jake
2002-06-19 23:49:57 +00:00
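
The second bullet as a sketch; vm_page_zero_fill() stands in here for whatever the committed code uses to zero the page:

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static vm_page_t
    example_alloc_zeroed(vm_object_t object, vm_pindex_t pindex, int pflags)
    {
            vm_page_t m;

            /* pflags is computed once, outside any allocation loop. */
            m = vm_page_alloc(object, pindex, pflags);
            /* A page is only known to be zeroed if PG_ZERO is set. */
            if (m != NULL && (m->flags & PG_ZERO) == 0)
                    vm_page_zero_fill(m);
            return (m);
    }
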
jeff
4df8a5cb05 - Remove bogus use of kmem_alloc that was inherited from the old zone
allocator.
- Properly set M_ZERO when talking to the back end page allocators for
  non malloc zones.  This forces us to zero fill pages when they are first
  brought into a cache.
- Properly handle M_ZERO in uma_zalloc_internal.  This fixes a problem where
  per cpu buckets weren't always getting zeroed.
2002-06-19 20:49:44 +00:00
jeff
0776d3f180 Teach kmem_malloc about M_ZERO. 2002-06-19 20:47:18 +00:00
alc
6c0c1a3de3 o Replace GIANT_REQUIRED in vm_object_coalesce() by the acquisition and
release of Giant.
 o Reduce the scope of GIANT_REQUIRED in vm_map_insert().

These changes will enable us to remove the acquisition and release
of Giant from obreak().
2002-06-19 06:02:03 +00:00
alc
40c2764a67 o Remove LK_CANRECURSE from the vm_map lock. 2002-06-18 18:31:35 +00:00
jeff
c6ac4e0b64 Honor the BUCKETCACHE flag on free as well. 2002-06-17 23:53:58 +00:00
jeff
030d3fdb72 - Introduce the new M_NOVM option which tells uma to only check the currently
allocated slabs and bucket caches for free items.  It will not go ask the vm
  for pages.  This differs from M_NOWAIT in that it not only doesn't block, it
  doesn't even ask.

- Add a new zcreate option ZONE_VM, that sets the BUCKETCACHE zflag.  This
  tells uma that it should only allocate buckets out of the bucket cache, and
  not from the VM.  It does this by using the M_NOVM option to zalloc when
  getting a new bucket.  This is so that the VM doesn't recursively enter
  itself while trying to allocate buckets for vm_map_entry zones.  If there
  are already allocated buckets when we get here we'll still use them but
  otherwise we'll skip it.

- Use the ZONE_VM flag on vm map entries and pv entries on x86.
2002-06-17 22:02:41 +00:00
alc
66c467d04e o Acquire and release Giant in vm_map_wakeup() to prevent
a lost wakeup().

Reviewed by:	tegge
2002-06-17 13:27:40 +00:00
alc
078dff0f24 o Remove GIANT_REQUIRED from vm_fault_user_wire().
o Move pmap_pageable() outside of Giant in vm_fault_unwire().
   (pmap_pageable() is a no-op on all supported architectures.)
 o Remove the acquisition and release of Giant from mlock().
2002-06-16 20:42:29 +00:00
alc
12d0065cbf o Remove GIANT_REQUIRED from useracc() and vsunlock(). Neither
vm_map_check_protection() nor vm_map_unwire() expect Giant
   to be held.
2002-06-15 19:10:19 +00:00
alc
769ca8bfda o Remove the acquisition and release of Giant from munlock().
Reviewed by:	tegge
2002-06-15 05:05:04 +00:00
alc
42cf959f18 o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
vm_map_user_pageable().
 o Remove vm_map_pageable() and vm_map_user_pageable().
 o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
   only used by vm_map_pageable() and vm_map_user_pageable().)

Reviewed by:	tegge
2002-06-14 18:21:01 +00:00
alc
e39b056b5d o Acquire and release Giant in vm_map_unlock_and_wait().
Submitted by:	tegge
2002-06-12 08:15:52 +00:00
alc
c3f3773d48 o Properly handle a failure by vm_fault_wire() or vm_fault_user_wire()
in vm_map_wire().
 o Make two white-space changes in vm_map_wire().

Reviewed by:	tegge
2002-06-11 19:13:59 +00:00
alc
d9429c7e6e o Teach vm_map_delete() to respect the "in-transition" flag
on a vm_map_entry by sleeping until the flag is cleared.

Submitted by:	tegge
2002-06-11 05:24:22 +00:00
alc
319e2fb2b8 o In vm_map_entry_create(), call uma_zalloc() with M_NOWAIT on system maps.
Submitted by: tegge
 o Eliminate the "!mapentzone" check from vm_map_entry_create() and
   vm_map_entry_dispose().  Reviewed by: tegge
 o Fix white-space usage in vm_map_entry_create().
2002-06-10 06:11:45 +00:00
iedowse
02040b5ae2 Correct the logic for determining whether the per-CPU locks need
to be destroyed. This fixes a problem where destroying a UMA zone
would fail to destroy all zone mutexes.

Reviewed by:	jeff
2002-06-10 03:25:23 +00:00
alc
e780ef4712 o Add vm_map_wire() for wiring contiguous regions of either kernel
or user vm_maps.  This implementation has two key benefits when compared
   to vm_map_{user_,}pageable(): (1) it avoids a race condition through
   the use of "in-transition" vm_map entries and (2) it eliminates lock
   recursion on the vm_map.

Note: there is still an error case that requires clean up.

Reviewed by:	tegge
2002-06-09 20:25:18 +00:00
alc
564b5ce457 o Simplify vm_map_unwire() by merging the second and third passes
over the caller-specified region.
2002-06-08 19:00:40 +00:00
alc
75c335726c o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire().
o Add a stub for vm_map_wire().

Note: the description of the previous commit had an error.  The in-
transition flag actually blocks the deallocation of a vm_map_entry by
vm_map_delete() and vm_map_simplify_entry().
2002-06-08 07:32:38 +00:00
alc
ee9168748e o Add vm_map_unwire() for unwiring contiguous regions of either kernel
or user vm_maps.  In accordance with the standards for munlock(2),
   and in contrast to vm_map_user_pageable(), this implementation does not
   allow holes in the specified region.  This implementation uses the
   "in transition" flag described below.
 o Introduce a new flag, "in transition," to the vm_map_entry.
   Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
   this flag by deallocating in-transition vm_map_entrys, allowing
   the vm_map lock to be safely released in vm_map_unwire() and (the
   forthcoming) vm_map_wire().
 o Modify vm_map_simplify_entry() to respect the in-transition flag.

In collaboration with:	tegge
2002-06-07 18:34:23 +00:00
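
The waiting side of the in-transition protocol, paraphrased from the description above; flag names follow the convention described, and the details of re-lookup are elided:

    /*
     * A thread that encounters an entry being wired or unwired by
     * someone else asks for a wakeup, sleeps (dropping the map lock),
     * and retries after relocking, since the entry may have changed.
     */
    while (entry->eflags & MAP_ENTRY_IN_TRANSITION) {
            entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP;
            vm_map_unlock_and_wait(map, TRUE);
            vm_map_lock(map);
            /* ... re-lookup the entry before continuing ... */
    }
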
alfred
63b8dfa515 fix typo in _SYS_SYSPROTO_H_ case: s/mlockall_args/munlockall_args
Submitted by: Mark Santcroos <marks@ripe.net>
2002-06-06 18:51:14 +00:00
jeff
d9ab0c8dbc Add a comment describing a resource leak that occurs during a failure case
in obj_alloc.
2002-06-03 22:59:19 +00:00
alc
d75c291207 o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it
to vm_object_split().  Its interface should still be changed
   to resemble vm_object_shadow().
2002-06-02 23:54:09 +00:00
alc
02c58f2a49 o Style fixes to vm_map_split(), including the elimination of one variable
declaration that shadows another.

Note: This function should really be vm_object_split(), not vm_map_split().

Reviewed by:	md5
2002-06-02 19:32:05 +00:00
alc
6c9e59ad00 o Condition vm_object_pmap_copy_1()'s compilation on the kernel
option ENABLE_VFS_IOOPT.  Unless this option is in effect,
   vm_object_pmap_copy_1() is not used.
2002-06-02 06:31:41 +00:00
alc
2abbbe7b8a o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
vm_map_create(), and vm_map_submap().
 o Make further use of a local variable in vm_map_entry_splay()
   that caches a reference to one of a vm_map_entry's children.
   (This reduces code size somewhat.)
 o Revert a part of revision 1.66, deinlining vmspace_pmap().
   (This function is MPSAFE.)
2002-06-01 22:41:43 +00:00
alc
b55171a0a3 o Revert a part of revision 1.66: contrary to what that commit message says,
deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
   actually increases the kernel's size.
 o Make vm_map_entry_set_behavior() static and add a comment describing
   its purpose.
 o Remove an unnecessary initialization statement from vm_map_entry_splay().
2002-06-01 16:59:30 +00:00
des
ca02bcf7fc Export nswapdev through sysctl(8).
Sponsored by:	DARPA, NAI Labs
2002-05-31 08:17:58 +00:00
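
Exporting a read-only integer that way is a single declaration; the placement under the vm tree and the description string are guesses:

    #include <sys/types.h>
    #include <sys/sysctl.h>

    extern int nswapdev;    /* existing count of configured swap devices */

    SYSCTL_INT(_vm, OID_AUTO, nswapdev, CTLFLAG_RD,
        &nswapdev, 0, "Number of swap devices");
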
alc
0b8e31ba85 Further work on pushing Giant out of the vm_map layer and down
into the vm_object layer:
 o Acquire and release Giant in vm_object_shadow() and
   vm_object_page_remove().
 o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s call
   to vm_object_page_remove().
 o Remove the acquisition and release of Giant around vm_map_lookup()'s
   call to vm_object_shadow().
2002-05-31 03:48:55 +00:00
alfred
26c9c27f03 Check for defined(__i386__) instead of just defined(i386) since the compiler
will be updated to only define __i386__ for ANSI cleanliness.
2002-05-30 07:32:58 +00:00
peter
929921851c The kernel printf does not have %i. 2002-05-29 08:25:13 +00:00
alc
06c3939cfb o Remove unused #defines. 2002-05-27 22:10:28 +00:00