Commit Graph

Kenneth D. Merry
98cb733c67 At long last, commit the zero copy sockets code.
MAKEDEV:	Add MAKEDEV glue for the ti(4) device nodes.

ti.4:		Update the ti(4) man page to include information on the
		TI_JUMBO_HDRSPLIT and TI_PRIVATE_JUMBOS kernel options,
		and also include information about the new character
		device interface and the associated ioctls.

man9/Makefile:	Add jumbo.9 and zero_copy.9 man pages and associated
		links.

jumbo.9:	New man page describing the jumbo buffer allocator
		interface and operation.

zero_copy.9:	New man page describing the general characteristics of
		the zero copy send and receive code, and what an
		application author should do to take advantage of the
		zero copy functionality.

NOTES:		Add entries for ZERO_COPY_SOCKETS, TI_PRIVATE_JUMBOS,
		TI_JUMBO_HDRSPLIT, MSIZE, and MCLSHIFT.

conf/files:	Add uipc_jumbo.c and uipc_cow.c.

conf/options:	Add the 5 options mentioned above.

kern_subr.c:	Receive side zero copy implementation.  This takes
		"disposable" pages attached to an mbuf, gives them to
		a user process, and then recycles the user's page.
		This is only active when ZERO_COPY_SOCKETS is turned on
		and the kern.ipc.zero_copy.receive sysctl variable is
		set to 1.

uipc_cow.c:	Send side zero copy functions.  Takes a page written
		by the user and maps it copy on write and assigns it
		kernel virtual address space.  Removes copy on write
		mapping once the buffer has been freed by the network
		stack.

uipc_jumbo.c:	Jumbo disposable page allocator code.  This allocates
		(optionally) disposable pages for network drivers that
		want to give the user the option of doing zero copy
		receive.

uipc_socket.c:	Add kern.ipc.zero_copy.{send,receive} sysctls that are
		enabled if ZERO_COPY_SOCKETS is turned on.

		Add zero copy send support to sosend() -- pages get
		mapped into the kernel instead of getting copied if
		they meet size and alignment restrictions.

uipc_syscalls.c: Un-staticize some of the sf* functions so that they
		can be used elsewhere.  (uipc_cow.c)

if_media.c:	In the SIOCGIFMEDIA ioctl in ifmedia_ioctl(), avoid
		calling malloc() with M_WAITOK.  Return an error if
		the M_NOWAIT malloc fails.

		The ti(4) driver and the wi(4) driver, at least, call
		this with a mutex held.  This causes witness warnings
		for 'ifconfig -a' with a wi(4) or ti(4) board in the
		system.  (I've only verified for ti(4)).

ip_output.c:	Fragment large datagrams so that each segment contains
		a multiple of PAGE_SIZE amount of data plus headers.
		This allows the receiver to potentially do page
		flipping on receives.

if_ti.c:	Add zero copy receive support to the ti(4) driver.  If
		TI_PRIVATE_JUMBOS is not defined, it now uses the
		jumbo(9) buffer allocator for jumbo receive buffers.

		Add a new character device interface for the ti(4)
		driver for the new debugging interface.  This allows
		(a patched version of) gdb to talk to the Tigon board
		and debug the firmware.  There are also a few additional
		debugging ioctls available through this interface.

		Add header splitting support to the ti(4) driver.

		Tweak some of the default interrupt coalescing
		parameters to more useful defaults.

		Add hooks for supporting transmit flow control, but
		leave it turned off with a comment describing why it
		is turned off.

if_tireg.h:	Change the firmware rev to 12.4.11, since we're really
		at 12.4.11 plus fixes from 12.4.13.

		Add defines needed for debugging.

		Remove the ti_stats structure, it is now defined in
		sys/tiio.h.

ti_fw.h:	12.4.11 firmware.

ti_fw2.h:	12.4.11 firmware, plus selected fixes from 12.4.13,
		and my header splitting patches.  Revision 12.4.13
		doesn't handle 10/100 negotiation properly.  (This
		firmware is the same as what was in the tree previously,
		with the addition of header splitting support.)

sys/jumbo.h:	Jumbo buffer allocator interface.

sys/mbuf.h:	Add a new external mbuf type, EXT_DISPOSABLE, to
		indicate that the payload buffer can be thrown away /
		flipped to a userland process.

socketvar.h:	Add prototype for socow_setup.

tiio.h:		ioctl interface to the character portion of the ti(4)
		driver, plus associated structure/type definitions.

uio.h:		Change prototype for uiomoveco() so that we'll know
		whether the source page is disposable.

ufs_readwrite.c: Update for new prototype of uiomoveco().

vm_fault.c:	In vm_fault(), check to see whether we need to do a page
		based copy on write fault.

vm_object.c:	Add a new function, vm_object_allocate_wait().  This
		does the same thing that vm_object_allocate() does, except
		that it gives the caller the opportunity to specify whether
		it should wait on the uma_zalloc() of the object structure.

		This allows vm objects to be allocated while holding a
		mutex.  (Without generating WITNESS warnings.)

		vm_object_allocate() is implemented as a call to
		vm_object_allocate_wait() with the malloc flag set to
		M_WAITOK.

vm_object.h:	Add prototype for vm_object_allocate_wait().

vm_page.c:	Add page-based copy on write setup, clear and fault
		routines.

vm_page.h:	Add page based COW function prototypes and variable in
		the vm_page structure.

Many thanks to Drew Gallatin, who wrote the zero copy send and receive
code, and to all the other folks who have tested and reviewed this code
over the years.
2002-06-26 03:37:47 +00:00
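
A minimal userland sketch of opting into the new interface follows (not part of the commit; the sysctl name is taken from the uipc_socket.c entry above, and the page-alignment and page-multiple requirements are assumptions based on the sosend() description):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/sysctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Sketch: enable zero-copy send and transmit page-aligned,
     * page-multiple data so sosend() can map rather than copy.
     * Requires a kernel built with ZERO_COPY_SOCKETS. */
    int
    send_zero_copy(int s, size_t npages)
    {
        int one = 1;
        size_t len = npages * (size_t)getpagesize();
        char *buf;

        /* Sysctl added in uipc_socket.c above. */
        if (sysctlbyname("kern.ipc.zero_copy.send", NULL, NULL,
            &one, sizeof(one)) == -1)
            return (-1);

        /* mmap() guarantees the page alignment assumed here. */
        buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (buf == MAP_FAILED)
            return (-1);
        return ((int)send(s, buf, len, 0));
    }
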
Matthew Dillon
a69ac1740f Enforce RLIMIT_VMEM on growable mappings (aka the primary stack or any
MAP_STACK mapping).

Suggested by:	alc
2002-06-26 03:13:46 +00:00
Matthew Dillon
070f64fe6f Part I of RLIMIT_VMEM implementation. Implement core functionality for
a new resource limit that covers a process's entire VM space, including
mmap()'d space.

(Part II will be additional code to check RLIMIT_VMEM during exec() but it
needs more fleshing out).

PR:		kern/18209
Submitted by:	Andrey Alekseyev <uitm@zenon.net>, Dmitry Kim <jason@nichego.net>
MFC after:	7 days
2002-06-26 00:29:28 +00:00
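
A minimal userland check of the new limit (a sketch; it assumes the limit surfaces as an mmap() failure once the process's total VM space would exceed it):

    #include <sys/types.h>
    #include <sys/resource.h>
    #include <sys/mman.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* Cap the process's entire VM space, mmap()'d space
         * included, at 64 MB. */
        struct rlimit rl = { 64UL << 20, 64UL << 20 };
        void *p;

        if (setrlimit(RLIMIT_VMEM, &rl) == -1)
            return (1);

        /* A 128 MB anonymous mapping should now fail. */
        p = mmap(NULL, 128UL << 20, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        printf("mmap %s\n", p == MAP_FAILED ? "rejected" : "allowed");
        return (0);
    }
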
Ian Dowse
6395da5437 Complete the initial set of VM changes required to support full
64-bit file sizes. This step simply addresses the remaining overflows,
and does not attempt to optimise performance. The details are:

 o Use a 64-bit type for the vm_object `size' and the size argument
   to vm_object_allocate().
 o Use the correct type for index variables in dev_pager_getpages(),
   vm_object_page_clean() and vm_object_page_remove().
 o Avoid an overflow in the i386 pmap_object_init_pt().
2002-06-25 22:14:06 +00:00
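
The overflow class being addressed here is easy to demonstrate in isolation (illustrative only, not the kernel code): with a 32-bit index, an offset computation silently wraps once a file passes 4 GB.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t pindex = 0x200000;              /* page index of an 8 GB offset */
        uint32_t bad = pindex * 4096;            /* 32-bit math wraps to 0 */
        uint64_t good = (uint64_t)pindex * 4096; /* 0x200000000 = 8 GB */

        printf("%u vs %llu\n", bad, (unsigned long long)good);
        return (0);
    }
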
Jeff Roberson
e78f35b33f Turn VM_ALLOC_ZERO into a flag.
Submitted by:	tegge
Reviewed by:	dillon
2002-06-25 22:01:12 +00:00
Jeff Roberson
5c0e403ba2 Reduce the amount of code that runs with the zone lock held in slab_zalloc().
This allows us to run the zone initialization functions without any locks held.
2002-06-25 21:04:50 +00:00
Alan Cox
366838ddfe o Eliminate vmspace::vm_minsaddr. It's initialized but never used.
o Replace stale comments in vmspace by "const until freed" annotations
   on some fields.
2002-06-25 18:14:38 +00:00
Alan Cox
848d14193d o Remove GIANT_REQUIRED from kmem_alloc_pageable(), kmem_alloc_nofault(),
and kmem_free().  (Annotate as MPSAFE.)
 o Remove incorrect casts from kmem_alloc_pageable() and kmem_alloc_nofault().
2002-06-23 18:07:40 +00:00
Alan Cox
2cd301d1e1 o Remove the unnecessary acquisition and release of Giant around fdrop()
in mmap(2).
2002-06-23 01:48:22 +00:00
Alan Cox
c04c996b25 o Reduce the scope of Giant in vm_mmap() to just the code that manipulates
a vnode.  (Thus, MAP_ANON and MAP_STACK never acquire Giant.)
2002-06-22 19:13:56 +00:00
Alan Cox
c8664f82a5 o Replace mtx_assert(&Giant, MA_OWNED) in dev_pager_alloc()
with the acquisition and release of Giant.  (Annotate as MPSAFE.)
 o Reorder the sanity checks in dev_pager_alloc() to reduce
   the time that Giant is held.
2002-06-22 18:36:51 +00:00
Alan Cox
409748276e o In vm_map_insert(), replace GIANT_REQUIRED by the acquisition and
release of Giant around the direct manipulation of the vm_object and
   the optional call to pmap_object_init_pt().
 o In vm_map_findspace(), remove GIANT_REQUIRED.  Instead, acquire and
   release Giant around the occasional call to pmap_growkernel().
 o In vm_map_find(), remove GIANT_REQUIRED.
2002-06-22 17:47:12 +00:00
Alan Cox
24c46d036d o Replace GIANT_REQUIRED in swap_pager_alloc() by the acquisition and
release of Giant.  (Annotate as MPSAFE.)
2002-06-22 08:03:21 +00:00
Alan Cox
2a1618cd59 o Remove GIANT_REQUIRED from phys_pager_alloc(). If handle isn't NULL,
acquire and release Giant.  If handle is NULL, Giant isn't needed.
 o Annotate phys_pager_alloc() and phys_pager_dealloc() as MPSAFE.
2002-06-22 07:54:42 +00:00
Alan Cox
990ab7add4 o Replace GIANT_REQUIRED in vnode_pager_alloc() by the acquisition and
release of Giant.  (Annotate as MPSAFE.)
 o Also, in vnode_pager_alloc(), remove an unnecessary re-initialization
   of struct vm_object::flags and move a statement that is duplicated
   in both branches of an if-else.
2002-06-22 07:28:06 +00:00
Alan Cox
43a90f3a1b o Remove GIANT_REQUIRED from vslock().
o Annotate kernacc(), useracc(), and vslock() as MPSAFE.

Motivated by:	alfred
2002-06-22 01:26:02 +00:00
Alan Cox
27168693db o Remove GIANT_REQUIRED from vm_map_stack(). 2002-06-21 06:03:47 +00:00
Alan Cox
7942194583 o Remove GIANT_REQUIRED from vm_pager_allocate() and vm_pager_deallocate(). 2002-06-21 05:04:56 +00:00
Alan Cox
3d66f1384e o Remove an incorrect cast from obreak(). This cast would,
for example, break an sbrk(>=4GB) on 64-bit architectures
   even if the resource limit allowed it.
 o Correct an off-by-one error.
 o Correct a spelling error in a comment.
 o Reorder an && expression so that the commonly FALSE expression
   comes first.

Submitted by:	bde (bullets 1 and 2)
2002-06-20 18:38:28 +00:00
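
The first bullet's bug class, in miniature (illustrative; not the actual obreak() code): narrowing a 64-bit break increment through int discards the high bits, so a large request is silently lost.

    #include <stdio.h>

    int
    main(void)
    {
        unsigned long incr = 4UL << 30;     /* sbrk-style 4 GB request */
        int truncated = (int)incr;          /* typically 0 on LP64 */

        printf("requested %lu, after cast %d\n", incr, truncated);
        return (0);
    }
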
Alan Cox
5375be1861 o Acquire and release the vm_map lock instead of Giant in obreak().
Consequently, use vm_map_insert() and vm_map_delete(), which expect
   the vm_map to be locked, instead of vm_map_find() and vm_map_remove(),
   which do not.
2002-06-20 02:04:55 +00:00
Jeff Roberson
1e081f889b - Move the computation of pflags out of the page allocation loop in
kmem_malloc()
- zero fill pages if PG_ZERO bit is not set after allocation in kmem_malloc()

Suggested by: alc, jake
2002-06-19 23:49:57 +00:00
Jeff Roberson
3370c5bfd7 - Remove bogus use of kmem_alloc that was inherited from the old zone
allocator.
- Properly set M_ZERO when talking to the back end page allocators for
  non malloc zones.  This forces us to zero fill pages when they are first
  brought into a cache.
- Properly handle M_ZERO in uma_zalloc_internal.  This fixes a problem where
  per cpu buckets weren't always getting zeroed.
2002-06-19 20:49:44 +00:00
Jeff Roberson
95f24639b7 Teach kmem_malloc about M_ZERO. 2002-06-19 20:47:18 +00:00
Alan Cox
00e1854a1f o Replace GIANT_REQUIRED in vm_object_coalesce() by the acquisition and
release of Giant.
 o Reduce the scope of GIANT_REQUIRED in vm_map_insert().

These changes will enable us to remove the acquisition and release
of Giant from obreak().
2002-06-19 06:02:03 +00:00
Alan Cox
515630b12f o Remove LK_CANRECURSE from the vm_map lock. 2002-06-18 18:31:35 +00:00
Jeff Roberson
4741dcbff5 Honor the BUCKETCACHE flag on free as well. 2002-06-17 23:53:58 +00:00
Jeff Roberson
18aa2de5a7 - Introduce the new M_NOVM option which tells uma to only check the currently
allocated slabs and bucket caches for free items.  It will not go ask the vm
  for pages.  This differs from M_NOWAIT in that it not only doesn't block, it
  doesn't even ask.

- Add a new zcreate option ZONE_VM, that sets the BUCKETCACHE zflag.  This
  tells uma that it should only allocate buckets out of the bucket cache, and
  not from the VM.  It does this by using the M_NOVM option to zalloc when
  getting a new bucket.  This is so that the VM doesn't recursively enter
  itself while trying to allocate buckets for vm_map_entry zones.  If there
  are already allocated buckets when we get here we'll still use them but
  otherwise we'll skip it.

- Use the ZONE_VM flag on vm map entries and pv entries on x86.
2002-06-17 22:02:41 +00:00
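
A sketch of the intended use (the flag and argument spellings follow this message and are assumptions; uma_zcreate()'s argument order may differ):

    /* A zone whose buckets may come only from the bucket cache, never
     * from the VM, so allocating a vm_map_entry cannot recurse into
     * the VM to allocate a bucket. */
    kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry),
        NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, ZONE_VM);

    /* Internally, uma then requests new buckets with M_NOVM, which
     * fails cleanly instead of calling back into the VM. */
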
Alan Cox
b49ecb86d0 o Acquire and release Giant in vm_map_wakeup() to prevent
a lost wakeup().

Reviewed by:	tegge
2002-06-17 13:27:40 +00:00
Alan Cox
042bb29940 o Remove GIANT_REQUIRED from vm_fault_user_wire().
o Move pmap_pageable() outside of Giant in vm_fault_unwire().
   (pmap_pageable() is a no-op on all supported architectures.)
 o Remove the acquisition and release of Giant from mlock().
2002-06-16 20:42:29 +00:00
Alan Cox
319490fb7b o Remove GIANT_REQUIRED from useracc() and vsunlock(). Neither
vm_map_check_protection() nor vm_map_unwire() expect Giant
   to be held.
2002-06-15 19:10:19 +00:00
Alan Cox
e30616dbfe o Remove the acquisition and release of Giant from munlock().
Reviewed by:	tegge
2002-06-15 05:05:04 +00:00
Alan Cox
1d7cf06c8c o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
vm_map_user_pageable().
 o Remove vm_map_pageable() and vm_map_user_pageable().
 o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
   only used by vm_map_pageable() and vm_map_user_pageable().)

Reviewed by:	tegge
2002-06-14 18:21:01 +00:00
Alan Cox
d46e7d6bee o Acquire and release Giant in vm_map_unlock_and_wait().
Submitted by:	tegge
2002-06-12 08:15:52 +00:00
Alan Cox
28c58286ef o Properly handle a failure by vm_fault_wire() or vm_fault_user_wire()
in vm_map_wire().
 o Make two white-space changes in vm_map_wire().

Reviewed by:	tegge
2002-06-11 19:13:59 +00:00
Alan Cox
73b2bace26 o Teach vm_map_delete() to respect the "in-transition" flag
on a vm_map_entry by sleeping until the flag is cleared.

Submitted by:	tegge
2002-06-11 05:24:22 +00:00
Alan Cox
2b4a2c272d o In vm_map_entry_create(), call uma_zalloc() with M_NOWAIT on system maps.
Submitted by: tegge
 o Eliminate the "!mapentzone" check from vm_map_entry_create() and
   vm_map_entry_dispose().  Reviewed by: tegge
 o Fix white-space usage in vm_map_entry_create().
2002-06-10 06:11:45 +00:00
Ian Dowse
f97d6ce396 Correct the logic for determining whether the per-CPU locks need
to be destroyed. This fixes a problem where destroying a UMA zone
would fail to destroy all zone mutexes.

Reviewed by:	jeff
2002-06-10 03:25:23 +00:00
Alan Cox
12d7cc840f o Add vm_map_wire() for wiring contiguous regions of either kernel
or user vm_maps.  This implementation has two key benefits when compared
   to vm_map_{user_,}pageable(): (1) it avoids a race condition through
   the use of "in-transition" vm_map entries and (2) it eliminates lock
   recursion on the vm_map.

Note: there is still an error case that requires clean up.

Reviewed by:	tegge
2002-06-09 20:25:18 +00:00
Alan Cox
b2f3846aef o Simplify vm_map_unwire() by merging the second and third passes
over the caller-specified region.
2002-06-08 19:00:40 +00:00
Alan Cox
e27e17b711 o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire().
o Add a stub for vm_map_wire().

Note: the description of the previous commit had an error.  The in-
transition flag actually blocks the deallocation of a vm_map_entry by
vm_map_delete() and vm_map_simplify_entry().
2002-06-08 07:32:38 +00:00
Alan Cox
acd9a301ec o Add vm_map_unwire() for unwiring contiguous regions of either kernel
or user vm_maps.  In accordance with the standards for munlock(2),
   and in contrast to vm_map_user_pageable(), this implementation does not
   allow holes in the specified region.  This implementation uses the
   "in transition" flag described below.
 o Introduce a new flag, "in transition," to the vm_map_entry.
   Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
   this flag by deallocating in-transition vm_map_entrys, allowing
   the vm_map lock to be safely released in vm_map_unwire() and (the
   forthcoming) vm_map_wire().
 o Modify vm_map_simplify_entry() to respect the in-transition flag.

In collaboration with:	tegge
2002-06-07 18:34:23 +00:00
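
The waiting pattern the flag enables looks roughly like this (a fragment-style sketch with locals elided; MAP_ENTRY_IN_TRANSITION and the wakeup flag are assumed spellings, and vm_map_unlock_and_wait() is the helper added elsewhere in this series):

    /* Before operating on an entry, wait out any wire/unwire that has
     * marked it in-transition and temporarily dropped the map lock. */
    while (entry->eflags & MAP_ENTRY_IN_TRANSITION) {
        entry->eflags |= MAP_ENTRY_NEEDS_WAKEUP;
        vm_map_unlock_and_wait(map, FALSE);     /* sleeps map-unlocked */
        vm_map_lock(map);
        /* The map may have changed while unlocked: look the entry
         * up again before continuing. */
        if (!vm_map_lookup_entry(map, start, &entry))
            break;
    }
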
Alfred Perlstein
fa7212543f fix typo in _SYS_SYSPROTO_H_ case: s/mlockall_args/munlockall_args
Submitted by: Mark Santcroos <marks@ripe.net>
2002-06-06 18:51:14 +00:00
Jeff Roberson
494273bead Add a comment describing a resource leak that occurs during a failure case
in obj_alloc.
2002-06-03 22:59:19 +00:00
Alan Cox
c5aaa06ded o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it
to vm_object_split().  Its interface should still be changed
   to resemble vm_object_shadow().
2002-06-02 23:54:09 +00:00
Alan Cox
0d78c0dce2 o Style fixes to vm_map_split(), including the elimination of one variable
declaration that shadows another.

Note: This function should really be vm_object_split(), not vm_map_split().

Reviewed by:	md5
2002-06-02 19:32:05 +00:00
Alan Cox
72353893d4 o Condition vm_object_pmap_copy_1()'s compilation on the kernel
option ENABLE_VFS_IOOPT.  Unless this option is in effect,
   vm_object_pmap_copy_1() is not used.
2002-06-02 06:31:41 +00:00
Alan Cox
61c075b67f o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
vm_map_create(), and vm_map_submap().
 o Make further use of a local variable in vm_map_entry_splay()
   that caches a reference to one of a vm_map_entry's children.
   (This reduces code size somewhat.)
 o Revert a part of revision 1.66, deinlining vmspace_pmap().
   (This function is MPSAFE.)
2002-06-01 22:41:43 +00:00
Alan Cox
794316a866 o Revert a part of revision 1.66: contrary to what that commit message says,
deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
   actually increases the kernel's size.
 o Make vm_map_entry_set_behavior() static and add a comment describing
   its purpose.
 o Remove an unnecessary initialization statement from vm_map_entry_splay().
2002-06-01 16:59:30 +00:00
Dag-Erling Smørgrav
8dcfdf3f80 Export nswapdev through sysctl(8).
Sponsored by:	DARPA, NAI Labs
2002-05-31 08:17:58 +00:00
Alan Cox
9917e01041 Further work on pushing Giant out of the vm_map layer and down
into the vm_object layer:
 o Acquire and release Giant in vm_object_shadow() and
   vm_object_page_remove().
 o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s call
   to vm_object_page_remove().
 o Remove the acquisition and release of Giant around vm_map_lookup()'s
   call to vm_object_shadow().
2002-05-31 03:48:55 +00:00
Alfred Perlstein
99b9331a4f Check for defined(__i386__) instead of just defined(i386) since the compiler
will be updated to only define(__i386__) for ANSI cleanliness.
2002-05-30 07:32:58 +00:00
Peter Wemm
7550be9c57 The kernel printf does not have %i 2002-05-29 08:25:13 +00:00
Alan Cox
8f2ba19c90 o Remove unused #defines. 2002-05-27 22:10:28 +00:00
Alan Cox
4b9fdc2bce o Acquire and release Giant around pmap operations in vm_fault_unwire()
and vm_map_delete().  Assert GIANT_REQUIRED in vm_map_delete()
   only if operating on the kernel_object or the kmem_object.
 o Remove GIANT_REQUIRED from vm_map_remove().
 o Remove the acquisition and release of Giant from munmap().
2002-05-26 04:54:56 +00:00
Alan Cox
4e94f40222 o Replace the vm_map's hint by the root of a splay tree. By design,
the last accessed datum is moved to the root of the splay tree.
   Therefore, on lookups in which the hint resulted in O(1) access,
   the splay tree still achieves O(1) access.  In contrast, on lookups
   in which the hint failed miserably, the splay tree achieves amortized
   logarithmic complexity, resulting in dramatic improvements on vm_maps
   with a large number of entries.  For example, the execution time
   for replaying an access log from www.cs.rice.edu against the thttpd
   web server was reduced by 23.5% due to the large number of files
   simultaneously mmap()ed by this server.  (The machine in question has
   enough memory to cache most of this workload.)

   Nothing comes for free: At present, I see a 0.2% slowdown on "buildworld"
   due to the overhead of maintaining the splay tree.  I believe that
   some or all of this can be eliminated through optimizations
   to the code.

Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
Reviewed by:	jeff
2002-05-24 01:33:24 +00:00
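
For readers unfamiliar with the data structure, here is a self-contained user-space sketch of the classic top-down splay (Sleator and Tarjan), keyed on a plain integer rather than a vm_map_entry's start address; the FreeBSD code differs in detail, and this only illustrates why the last-accessed entry ends up at the root:

    #include <stddef.h>

    struct node {
        int key;                     /* stands in for an entry's start */
        struct node *left, *right;
    };

    /* Top-down splay: rotate the node nearest `key' to the root. */
    static struct node *
    splay(int key, struct node *t)
    {
        struct node N, *l, *r, *y;

        if (t == NULL)
            return (t);
        N.left = N.right = NULL;
        l = r = &N;
        for (;;) {
            if (key < t->key) {
                if (t->left == NULL)
                    break;
                if (key < t->left->key) {       /* rotate right */
                    y = t->left;
                    t->left = y->right;
                    y->right = t;
                    t = y;
                    if (t->left == NULL)
                        break;
                }
                r->left = t;                    /* link right */
                r = t;
                t = t->left;
            } else if (key > t->key) {
                if (t->right == NULL)
                    break;
                if (key > t->right->key) {      /* rotate left */
                    y = t->right;
                    t->right = y->left;
                    y->left = t;
                    t = y;
                    if (t->right == NULL)
                        break;
                }
                l->right = t;                   /* link left */
                l = t;
                t = t->right;
            } else
                break;
        }
        l->right = t->left;                     /* reassemble trees */
        r->left = t->right;
        t->left = N.right;
        t->right = N.left;
        return (t);
    }

Because every lookup finishes by rotating the accessed node to the root, repeating the previous lookup is O(1), exactly what the old hint provided, while arbitrary lookups get amortized logarithmic cost.
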
Alan Cox
03adb816d7 o Make contigmalloc1() static. 2002-05-22 01:01:37 +00:00
John Baldwin
4c1cc01cd8 In uma_zalloc_arg(), if we are performing a M_WAITOK allocation, ensure
that td_intr_nesting_level is 0 (like malloc() does).  Since malloc() calls
uma we can probably remove the check in malloc() for this now.  Also,
perform an extra witness check in that case to make sure we don't hold
any locks when performing a M_WAITOK allocation.
2002-05-20 17:54:48 +00:00
Alan Cox
e0be79afbf o Eliminate the acquisition and release of Giant from minherit(2).
(vm_map_inherit() no longer requires Giant to be held.)
2002-05-18 18:59:00 +00:00
Alan Cox
094f6d2694 o Remove GIANT_REQUIRED from vm_map_madvise(). Instead, acquire and
release Giant around vm_map_madvise()'s call to pmap_object_init_pt().
 o Replace GIANT_REQUIRED in vm_object_madvise() with the acquisition
   and release of Giant.
 o Remove the acquisition and release of Giant from madvise().
2002-05-18 07:48:06 +00:00
Alan Cox
4328504956 o Remove the acquisition and release of Giant from mprotect(). 2002-05-18 03:58:16 +00:00
Tom Rhodes
d394511de3 More s/file system/filesystem/g 2002-05-16 21:28:32 +00:00
Poul-Henning Kamp
98b0c78978 Make daddr_t and u_daddr_t 64 bits wide.
Retire daddr64_t and use daddr_t instead.

Sponsored by:	DARPA & NAI Labs.
2002-05-14 11:09:43 +00:00
Jeff Roberson
713deb3677 Don't call the uz free function while the zone lock is held. This can lead
to lock order reversals.  uma_reclaim now builds a list of freeable slabs and
then unlocks the zones to do all of the frees.
2002-05-13 05:08:18 +00:00
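
The fix follows the standard defer-free pattern (a sketch with hypothetical helpers pick_free_slab() and uz_freef; the real code walks the zone's slab lists):

    LIST_HEAD(, uma_slab) freeslabs = LIST_HEAD_INITIALIZER(freeslabs);
    uma_slab_t slab, next;

    /* Collect freeable slabs while holding the zone lock... */
    ZONE_LOCK(zone);
    while ((slab = pick_free_slab(zone)) != NULL) {   /* hypothetical */
        LIST_REMOVE(slab, us_link);
        LIST_INSERT_HEAD(&freeslabs, slab, us_link);
    }
    ZONE_UNLOCK(zone);

    /* ...then run the uz free function with no zone lock held, so it
     * cannot reverse the lock order. */
    for (slab = LIST_FIRST(&freeslabs); slab != NULL; slab = next) {
        next = LIST_NEXT(slab, us_link);
        zone->uz_freef(slab);                         /* hypothetical hook */
    }
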
Jeff Roberson
0aef6126a1 Remove the hash_free() lock order reversal. This could have happened for
several reasons before.  Fixing it involved restructuring the generic hash
code to require calling code to handle locking, unlocking, and freeing hashes
on error conditions.
2002-05-13 04:39:28 +00:00
Alan Cox
a47335fdb4 o Remove GIANT_REQUIRED and an excessive number of blank lines
from vm_map_inherit().  (minherit() need not acquire Giant
   anymore.)
2002-05-12 18:42:05 +00:00
Alan Cox
47c3ccc467 o Acquire and release Giant in vm_object_reference() and
vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
 o Remove GIANT_REQUIRED from vm_map_protect() and vm_map_simplify_entry().
 o Acquire and release Giant around vm_map_protect()'s call to pmap_protect().

Altogether, these changes eliminate the need for mprotect() to acquire
and release Giant.
2002-05-12 05:22:56 +00:00
Alan Cox
b3a882e936 o Header files shouldn't depend on options: Provide prototypes
for uiomoveco(), uioread(), and vm_uiomove() regardless
   of whether ENABLE_VFS_IOOPT is defined or not.

Submitted by:	bde
2002-05-06 06:20:04 +00:00
Alan Cox
c0b6bbb80b o Condition the compilation and use of vm_freeze_copyopts()
on ENABLE_VFS_IOOPT.
2002-05-06 05:45:57 +00:00
Alan Cox
dcc5840ed5 o Some improvements to the page coloring of vm objects, particularly,
for shadow objects.

Submitted by:	bde
2002-05-06 03:34:17 +00:00
Alan Cox
e86256c1f4 o Move vm_freeze_copyopts() from vm_map.{c.h} to vm_object.{c,h}. It's plainly
an operation on a vm_object and belongs in the latter place.
2002-05-06 00:12:47 +00:00
Alan Cox
c50fe92b8d o Condition the compilation of uiomoveco() and vm_uiomove()
on ENABLE_VFS_IOOPT.
 o Add a comment to the effect that this code is experimental
   support for zero-copy I/O.
2002-05-05 22:42:40 +00:00
Poul-Henning Kamp
81e017430a Expand the one-line function pbreassignbuf() the only place it is or could
be used.
2002-05-05 20:37:08 +00:00
Alan Cox
15fdd586e3 o Remove GIANT_REQUIRED from vm_map_lookup() and vm_map_lookup_done().
o Acquire and release Giant around vm_map_lookup()'s call
   to vm_object_shadow().
2002-05-05 05:36:28 +00:00
Jeff Roberson
c7173f58fa Use pages instead of uz_maxpages, which has not been initialized yet, when
creating the vm_object.  This was broken after the code was rearranged to
grab Giant itself.

Spotted by:     alc
2002-05-04 21:49:29 +00:00
Alan Cox
79660d837c o Make _vm_object_allocate() and vm_object_allocate() callable
without holding Giant.
 o Begin documenting the trivial cases of the locking protocol
   on vm_object.
2002-05-04 20:23:48 +00:00
Alan Cox
8c5c5d049f o Remove GIANT_REQUIRED from vm_map_lookup_entry() and
vm_map_check_protection().
 o Call vm_map_check_protection() without Giant held in munmap().
2002-05-04 02:07:36 +00:00
Alan Cox
bc91c5107a o Change the implementation of vm_map locking to use exclusive locks
exclusively.  The interface still, however, distinguishes
   between a shared lock and an exclusive lock.
2002-05-02 17:32:27 +00:00
Jeff Roberson
8f70816cf2 Hide a pointer to the malloc_type bucket at the end of the freed memory. If
this memory is modified after it has been freed, we can now report its
previous owner.
2002-05-02 09:07:04 +00:00
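
Conceptually, the trick is a trailer write on free (a sketch; the real hook lives in the UMA debugging code and the exact layout is an assumption):

    /* Stash the owning malloc_type in the tail of a freed buffer. */
    static void
    remember_owner(void *mem, int size, struct malloc_type *type)
    {
        struct malloc_type **ownerp;

        ownerp = (struct malloc_type **)
            ((char *)mem + size - sizeof(*ownerp));
        *ownerp = type;
        /* A later corruption check can print *ownerp to name the
         * previous owner of the memory. */
    }
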
Jeff Roberson
b9ba893179 Move around the dbg code a bit so it's always under a lock. This stops a
weird potential race if we were preempted right as we were doing the dbg
checks.
2002-05-02 09:05:36 +00:00
Andrew R. Reiter
c3bdc05fb9 - Changed the size element of uma_zctor_args to be size_t instead of int.
- Changed uma_zcreate to accept the size argument as a size_t instead of
  int.

Approved by:	jeff
2002-05-02 07:36:30 +00:00
Jeff Roberson
5a34a9f089 malloc/free(9) no longer require Giant. Use the malloc_mtx to protect the
mallochash.  Mallochash is going to go away as soon as I introduce the
kfree/kmalloc api and partially overhaul the malloc wrapper.  This can't happen
until all users of the malloc api that expect memory to be aligned on the size
of the allocation are fixed.
2002-05-02 07:22:19 +00:00
Alan Cox
569687d02f o Remove dead and lockmgr()-specific debugging code. 2002-05-02 02:32:09 +00:00
Jeff Roberson
639c9550fb Remove the temporary alignment check in free().
Implement the following checks on freed memory in the bucket path:
	- Slab membership
	- Alignment
	- Duplicate free

This previously was only done if we skipped the buckets.  This code will slow
down INVARIANTS a bit, but it is smp safe.  The checks were moved out of the
normal path and into hooks supplied in uma_dbg.
2002-05-02 02:08:48 +00:00
Alan Cox
ea0f50bcf0 o Convert the vm_page buckets mutex to a spin lock. (This resolves
an issue on the Alpha platform found by jeff@.)
 o Simplify vm_page_lookup().

Reviewed by:	jhb
2002-04-30 21:24:47 +00:00
Jeff Roberson
8efc4eff00 Add a new UMA debugging facility. This will overwrite freed memory with
0xdeadc0de and then check for it just before memory is handed off as part
of a new request.  This will catch any post free/pre alloc modification of
memory, as well as introduce errors for anything that tries to dereference
it as a pointer.

This code takes the form of special init, fini, ctor and dtor routines that
are specifically used by malloc.  It is in a separate file because additional
debugging aids will want to live here as well.
2002-04-30 07:54:25 +00:00
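
The facility amounts to a poison/verify pair along these lines (a sketch; the real routines and their exact signatures live in the new UMA debugging file):

    #include <sys/types.h>
    #include <sys/systm.h>      /* panic() */

    #define UMA_TRASH 0xdeadc0de

    /* Poison an item as it is freed back to the zone. */
    static void
    trash_dtor(void *mem, int size, void *arg)
    {
        uint32_t *p;

        for (p = mem; (char *)p + sizeof(*p) <= (char *)mem + size; p++)
            *p = UMA_TRASH;
    }

    /* Verify the poison survived until the item is handed out again. */
    static void
    trash_ctor(void *mem, int size, void *arg)
    {
        uint32_t *p;

        for (p = mem; (char *)p + sizeof(*p) <= (char *)mem + size; p++)
            if (*p != UMA_TRASH)
                panic("memory modified after free %p", mem);
    }
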
Jeff Roberson
2cc35ff9c6 Move the implementation of M_ZERO into UMA so that it can be passed to
uma_zalloc and friends.  Remove this functionality from the malloc wrapper.

Document this change in uma.h and adjust variable names in uma_core.
2002-04-30 04:26:34 +00:00
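
With the flag handled inside the allocator, zone consumers can request pre-zeroed items directly; for example:

    /* M_ZERO is now honored by uma_zalloc() itself rather than only
     * by the malloc() wrapper. */
    item = uma_zalloc(zone, M_WAITOK | M_ZERO);
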
Alan Cox
7788e21963 o Revert vm_fault1() to its original name vm_fault(), eliminating the wrapper
that took its place for the purposes of acquiring and releasing Giant.
2002-04-30 03:44:34 +00:00
Jeff Roberson
28bc44195c Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own
mutex class.  Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory.  This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.
2002-04-29 23:45:41 +00:00
Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
2002-04-29 07:43:16 +00:00
Alan Cox
532eadef77 Document three synchronization issues in vm_fault(). 2002-04-29 05:23:01 +00:00
Alan Cox
780b1c0997 Pass the caller's file name and line number to the vm_map locking functions. 2002-04-28 23:12:52 +00:00
Alan Cox
d974f03c69 o Introduce and use vm_map_trylock() to replace several direct uses
of lockmgr().
 o Add missing synchronization to vmspace_swap_count(): Obtain a read lock
   on the vm_map before traversing it.
2002-04-28 06:07:54 +00:00
Peter Wemm
44e74ba6c3 We do not necessarily need to map/unmap pages to zero parts of them.
On systems where physical memory is also direct mapped (alpha, sparc,
ia64 etc) this is slightly harmful.
2002-04-28 00:15:48 +00:00
Alan Cox
089b073345 o Begin documenting the (existing) locking protocol on the vm_map
in the same style as sys/proc.h.
 o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
   (Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
   revision 1.206, de-inlining these methods increased the kernel's size.)
2002-04-27 22:01:37 +00:00
Alan Cox
cbd53e95fe o Control access to the vm_page_buckets with a mutex.
o Fix some style(9) bugs.
2002-04-26 22:44:15 +00:00
Andrew R. Reiter
d4d6aee5a0 - Fix a round down bogon in uma_zone_set_max().
Submitted by: jeff@
2002-04-25 06:24:40 +00:00
Alan Cox
a569838764 Reintroduce locking on accesses to vm_object_list. 2002-04-20 07:23:22 +00:00
Alan Cox
92de35b0ce o Move the acquisition of Giant from vm_fault() to the point
after initialization in vm_fault1().
 o Fix some style problems in vm_fault1().
2002-04-19 04:20:31 +00:00
Alan Cox
ff8f4ebe22 Add a comment documenting a race condition in vm_fault(): Specifically, a
modification is made to the vm_map while only a read lock is held.
2002-04-18 03:55:50 +00:00
Alan Cox
6139043b1f o Call vm_map_growstack() from vm_fault() if vm_map_lookup() has failed
due to conditions that suggest the possible need for stack growth.
   This has two beneficial effects: (1) we can
   now remove calls to vm_map_growstack() from the MD trap handlers and (2)
   simple page faults are faster because we no longer unnecessarily perform
   vm_map_growstack() on every page fault.
 o Remove vm_map_growstack() from the i386's trap_pfault().
 o Remove the acquisition and release of Giant from i386's trap_pfault().
   (vm_fault() still acquires it.)
2002-04-18 03:28:27 +00:00
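
The shape of the vm_fault() change, as a fragment-style sketch with locals elided (the error-code choices and retry label are assumptions):

    /* If the lookup fails in a way that may mean the faulting address
     * lies just beyond a growable stack, grow once and retry, so MD
     * trap handlers need not call vm_map_growstack() on every fault. */
    RetryFault:
        result = vm_map_lookup(&map, vaddr, fault_type, &entry,
            &first_object, &first_pindex, &prot, &wired);
        if (result != KERN_SUCCESS) {
            if (growstack && result == KERN_INVALID_ADDRESS) {
                if (vm_map_growstack(curproc, vaddr) != KERN_SUCCESS)
                    return (KERN_FAILURE);
                growstack = 0;      /* grow at most once */
                goto RetryFault;
            }
            return (result);
        }
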