Commit Graph

112 Commits

Author SHA1 Message Date
Alan Cox
02dd83311a Make vm_object_vndeallocate() static. The external calls to it were
eliminated in ufs/ffs/ffs_vnops.c's revision 1.125.
2006-01-22 23:56:20 +00:00
Jeff Roberson
ed4fe4f4f5 - Add a new object flag "OBJ_NEEDSGIANT". We set this flag if the
underlying vnode requires Giant.
 - In vm_fault only acquire Giant if the underlying object has NEEDSGIANT
   set.
 - In vm_object_shadow inherit the NEEDSGIANT flag from the backing object.
2005-05-03 11:11:26 +00:00
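
A minimal sketch of the check this introduces, assuming only the flag and
lock names given in the message (not the committed diff):

    /* Illustrative only: take Giant around fault handling solely when the
     * backing object was flagged because its vnode requires Giant. */
    static void
    fault_on_object(vm_object_t object)
    {
            int needs_giant;

            needs_giant = (object->flags & OBJ_NEEDSGIANT) != 0;
            if (needs_giant)
                    mtx_lock(&Giant);
            /* ... process the fault against object ... */
            if (needs_giant)
                    mtx_unlock(&Giant);
    }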
John Baldwin
98df9218da - Change the vm_mmap() function to accept an objtype_t parameter specifying
the type of object represented by the handle argument.
- Allow vm_mmap() to map device memory via cdev objects in addition to
  vnodes and anonymous memory.  Note that mmaping a cdev directly does not
  currently perform any MAC checks like mapping a vnode does.
- Unbreak the DRM getbufs ioctl by having it call vm_mmap() directly on the
  cdev the ioctl is acting on rather than trying to find a suitable vnode
  to map from.

Reviewed by:	alc, arch@
2005-04-01 20:00:11 +00:00
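
The call shape this implies, with the argument order assumed from the
description rather than copied from the tree (map, addr, cdev, and foff are
placeholders):

    /* Map device memory directly from a cdev; no vnode is involved. */
    error = vm_mmap(map, &addr, size, prot, maxprot, flags,
        OBJT_DEVICE, (void *)cdev, foff);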
Warner Losh
60727d8b86 /* -> /*- for license, minor formatting changes 2005-01-07 02:29:27 +00:00
Alan Cox
90688d137c With the removal of kern/uipc_jumbo.c and sys/jumbo.h,
vm_object_allocate_wait() is not used.  Remove it.
2004-12-08 05:01:47 +00:00
Alan Cox
19187819b7 Move a call to wakeup() from vm_object_terminate() to vnode_pager_dealloc()
because this call is only needed to wake threads that slept when they
discovered a dead object connected to a vnode.  To eliminate unnecessary
calls to wakeup() by vnode_pager_dealloc(), introduce a new flag,
OBJ_DISCONNECTWNT.

Reviewed by: tegge@
2004-11-06 05:33:02 +00:00
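
A sketch of the resulting logic in vnode_pager_dealloc(), assuming sleepers
set the flag before blocking on the dead object (illustrative, not the
committed code):

    if ((object->flags & OBJ_DISCONNECTWNT) != 0) {
            vm_object_clear_flag(object, OBJ_DISCONNECTWNT);
            wakeup(object);         /* only wake threads that asked for it */
    }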
Alan Cox
57a21aba93 Make the code and comments for vm_object_coalesce() consistent. 2004-07-25 07:48:47 +00:00
Alan Cox
5285558ac2 - Change uma_zone_set_obj() to call kmem_alloc_nofault() instead of
kmem_alloc_pageable().  The difference between these is that an errant
   memory access to the zone will be detected sooner with
   kmem_alloc_nofault().

The following changes serve to eliminate the following lock-order
reversal reported by witness:

 1st 0xc1a3c084 vm object (vm object) @ vm/swap_pager.c:1311
 2nd 0xc07acb00 swap_pager swhash (swap_pager swhash) @ vm/swap_pager.c:1797
 3rd 0xc1804bdc vm object (vm object) @ vm/uma_core.c:931

There is no potential deadlock in this case.  However, witness is unable
to recognize this because vm objects used by UMA have the same type as
ordinary vm objects.  To remedy this, we make the following changes:

 - Add a mutex type argument to VM_OBJECT_LOCK_INIT().
 - Use the mutex type argument to assign distinct types to special
   vm objects such as the kernel object, kmem object, and UMA objects.
 - Define a static swap zone object for use by UMA.  (Only static
   objects are assigned a special mutex type.)
2004-07-22 19:44:49 +00:00
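
A plausible shape for the revised macro, with the new argument supplying the
mutex type string that witness uses to tell lock classes apart (the actual
definition may differ):

    #define VM_OBJECT_LOCK_INIT(object, type) \
            mtx_init(&(object)->mtx, "vm object", (type), MTX_DEF)

    /* e.g. VM_OBJECT_LOCK_INIT(kmem_object, "kmem vm object"); */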
Warner Losh
05eb3785e7 Remove advertising clause from University of California Regent's license,
per letter dated July 22, 1999.

Approved by: core
2004-04-06 20:15:37 +00:00
Alan Cox
950f8459d4 - Rename vm_map_clean() to vm_map_sync(). This better reflects the fact
that msync(2) is its only caller.
 - Migrate the parts of the old vm_map_clean() that examined the internals
   of a vm object to a new function vm_object_sync() that is implemented in
   vm_object.c.  At the same, introduce the necessary vm object locking so
   that vm_map_sync() and vm_object_sync() can be called without Giant.

Reviewed by:	tegge
2003-11-09 05:25:35 +00:00
Alan Cox
b921a12b3b - Introduce and use vm_object_reference_locked(). Unlike
vm_object_reference(), this function must not be used to reanimate dead
   vm objects.  This restriction simplifies locking.

Reviewed by:	tegge
2003-11-02 21:30:10 +00:00
Alan Cox
3b9a4cb6a9 - Revert a part of revision 1.73: Make vm_object_set_flag() an inline
function.  This function is so trivial that inlining reduces the size
   of the kernel.
2003-10-31 20:17:00 +00:00
Alan Cox
c759a3ca06 Reduce the size of the vm object on 64-bit architectures by moving
a field within the structure.
2003-08-12 20:10:32 +00:00
Alan Cox
bc73ee3fe7 - Add VM_OBJECT_TRYLOCK(). 2003-06-04 19:59:23 +00:00
Alan Cox
3b68228cce - Add vm object locking to vm_object_deallocate(). (Still more
changes are required.)
 - Remove special-case macros for kmem object locking.  They are
   no longer used.
2003-06-04 06:00:55 +00:00
Alan Cox
cccf11b865 Change kernel_object and kmem_object to (&kernel_object_store) and
(&kmem_object_store), respectively.  This allows the address of these
objects to be resolved at link-time rather than run-time.
2003-06-01 23:59:48 +00:00
Alan Cox
1c500307d1 Reduce the size of a vm object by converting its shadow list from a TAILQ
to a LIST.

Approved by:	re (rwatson)
2003-05-18 04:10:16 +00:00
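
Why the conversion saves space (illustrative struct, not the real one): a
LIST_HEAD is a single pointer, while a TAILQ_HEAD also carries a tail
pointer, so the shadow-list head embedded in every vm object shrinks by one
pointer.

    #include <sys/queue.h>

    struct obj_sketch {
            LIST_HEAD(, obj_sketch) shadow_head;    /* objects shadowing this one */
            LIST_ENTRY(obj_sketch)  shadow_list;    /* link in the shadower's list */
    };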
Alan Cox
ed6a786313 - Define VM_OBJECT_LOCK_INIT().
- Avoid repeatedly mtx_init()ing and mtx_destroy()ing the vm_object's lock
   using UMA's uminit callback, in this case, vm_object_zinit().
2003-04-28 03:45:35 +00:00
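
The idea, sketched with an assumed callback shape: UMA's init hook runs when
an item's backing memory is first set up, so the lock is created once and
survives ordinary zalloc/zfree cycles.

    static void
    vm_object_zinit_sketch(void *mem, int size)
    {
            vm_object_t object = mem;

            /* Create the lock once, when UMA first constructs the item. */
            mtx_init(&object->mtx, "vm object", NULL, MTX_DEF);
    }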
Alan Cox
1ca5895341 - Convert vm_object_pip_wait() from using tsleep() to msleep().
- Make vm_object_pip_sleep() static.
 - Lock the vm_object when performing vm_object_pip_wait().
2003-04-26 18:33:18 +00:00
Alan Cox
1c067f7ebc Add VM_OBJECT_LOCKED(). 2003-04-22 04:47:29 +00:00
Alan Cox
0d420ad3e6 - Lock the vm_object when performing vm_object_pip_wakeupn().
- Assert that the vm_object lock is held in vm_object_pip_wakeupn().
 - Add a new macro VM_OBJECT_LOCK_ASSERT().
2003-04-19 21:15:44 +00:00
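
A plausible definition for the new assertion macro (the real one may differ
in detail):

    #define VM_OBJECT_LOCK_ASSERT(object, type) \
            mtx_assert(&(object)->mtx, (type))

    /* Typical use: VM_OBJECT_LOCK_ASSERT(object, MA_OWNED); */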
Alan Cox
fdff41609d Add new macros for locking and unlocking a vm object. 2003-04-13 18:39:47 +00:00
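
The new wrappers are presumably thin veneers over the object's mutex, along
these lines:

    #define VM_OBJECT_LOCK(object)      mtx_lock(&(object)->mtx)
    #define VM_OBJECT_UNLOCK(object)    mtx_unlock(&(object)->mtx)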
Alan Cox
09c80124a3 Remove ENABLE_VFS_IOOPT. It is a long unfinished work-in-progress.
Discussed on:	arch@
2003-03-06 03:41:02 +00:00
Alan Cox
077808c588 Fuse two #ifdefs with identical conditions. 2003-02-25 06:46:08 +00:00
Alan Cox
e3a9e1b2a8 - Remove vm_object_init2(). It is unused.
- Add a mtx_destroy() to vm_object_collapse().  (This allows a bzero()
   to migrate from _vm_object_allocate() to vm_object_zinit(), where it
   will be performed less often.)
2002-12-29 21:01:14 +00:00
Alan Cox
4b420d501f Add a mutex to struct vm_object. Initialize and destroy that mutex
at appropriate times.  For the moment, the mutex is only used on
the kmem_object.
2002-12-20 05:10:32 +00:00
Alan Cox
cf3e6e4837 Remove the hash_rand field from struct vm_object. As of revision 1.215 of
vm/vm_page.c, it is unused.
2002-12-19 20:01:22 +00:00
Alan Cox
81b9ee99e7 Remove dead code that hasn't been needed since the demise of share maps
in various revisions of vm/vm_map.c between 1.148 and 1.153.
2002-11-13 19:50:06 +00:00
Matthew Dillon
b86ec922be Replace the vm_page hash table with a per-vmobject splay tree. There should
be no major change in performance from this change at this time, but this
will allow other work to progress:  Giant lock removal around VM system
in favor of per-object mutexes, ranged fsyncs, more optimal COMMIT rpc's for
NFS, partial filesystem syncs by the syncer, more optimal object flushing,
etc.  Note that the buffer cache is already using a similar splay tree
mechanism.

Note that a good chunk of the old hash table code is still in the tree.
Alan or I will remove it prior to the release if the new code does not
introduce unsolvable bugs, else we can revert more easily.

Submitted by:	alc	(this is Alan's code)
Approved by:	re
2002-10-18 17:24:30 +00:00
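
A conceptual analogue using the sys/tree.h splay macros (the committed code
hand-rolls its own splay routine, but the shape is the same: each object
keeps its pages in a private tree keyed by page index, replacing the global
hash table):

    #include <sys/tree.h>

    struct page_sketch {
            unsigned long pindex;                   /* page index within the object */
            SPLAY_ENTRY(page_sketch) treelink;
    };

    static int
    page_cmp(struct page_sketch *a, struct page_sketch *b)
    {
            return (a->pindex < b->pindex ? -1 : a->pindex > b->pindex);
    }

    SPLAY_HEAD(page_tree, page_sketch);             /* one tree per vm object */
    SPLAY_PROTOTYPE(page_tree, page_sketch, treelink, page_cmp);
    SPLAY_GENERATE(page_tree, page_sketch, treelink, page_cmp);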
Alan Cox
8aadcc5368 Reduce namespace pollution.
Submitted by:	bde
2002-09-21 07:51:44 +00:00
Alan Cox
d52bc3438c o Resurrect vm_object_lock() and vm_object_unlock() from revision 1.19.
(For now, they simply acquire and release Giant.)
2002-08-24 07:15:14 +00:00
Kenneth D. Merry
98cb733c67 At long last, commit the zero copy sockets code.
MAKEDEV:	Add MAKEDEV glue for the ti(4) device nodes.

ti.4:		Update the ti(4) man page to include information on the
		TI_JUMBO_HDRSPLIT and TI_PRIVATE_JUMBOS kernel options,
		and also include information about the new character
		device interface and the associated ioctls.

man9/Makefile:	Add jumbo.9 and zero_copy.9 man pages and associated
		links.

jumbo.9:	New man page describing the jumbo buffer allocator
		interface and operation.

zero_copy.9:	New man page describing the general characteristics of
		the zero copy send and receive code, and what an
		application author should do to take advantage of the
		zero copy functionality.

NOTES:		Add entries for ZERO_COPY_SOCKETS, TI_PRIVATE_JUMBOS,
		TI_JUMBO_HDRSPLIT, MSIZE, and MCLSHIFT.

conf/files:	Add uipc_jumbo.c and uipc_cow.c.

conf/options:	Add the 5 options mentioned above.

kern_subr.c:	Receive side zero copy implementation.  This takes
		"disposable" pages attached to an mbuf, gives them to
		a user process, and then recycles the user's page.
		This is only active when ZERO_COPY_SOCKETS is turned on
		and the kern.ipc.zero_copy.receive sysctl variable is
		set to 1.

uipc_cow.c:	Send side zero copy functions.  Takes a page written
		by the user and maps it copy on write and assigns it
		kernel virtual address space.  Removes copy on write
		mapping once the buffer has been freed by the network
		stack.

uipc_jumbo.c:	Jumbo disposable page allocator code.  This allocates
		(optionally) disposable pages for network drivers that
		want to give the user the option of doing zero copy
		receive.

uipc_socket.c:	Add kern.ipc.zero_copy.{send,receive} sysctls that are
		enabled if ZERO_COPY_SOCKETS is turned on.

		Add zero copy send support to sosend() -- pages get
		mapped into the kernel instead of getting copied if
		they meet size and alignment restrictions.

uipc_syscalls.c: Un-staticize some of the sf* functions so that they
		can be used elsewhere.  (uipc_cow.c)

if_media.c:	In the SIOCGIFMEDIA ioctl in ifmedia_ioctl(), avoid
		calling malloc() with M_WAITOK.  Return an error if
		the M_NOWAIT malloc fails.

		The ti(4) driver and the wi(4) driver, at least, call
		this with a mutex held.  This causes witness warnings
		for 'ifconfig -a' with a wi(4) or ti(4) board in the
		system.  (I've only verified for ti(4)).

ip_output.c:	Fragment large datagrams so that each segment contains
		a multiple of PAGE_SIZE amount of data plus headers.
		This allows the receiver to potentially do page
		flipping on receives.

if_ti.c:	Add zero copy receive support to the ti(4) driver.  If
		TI_PRIVATE_JUMBOS is not defined, it now uses the
		jumbo(9) buffer allocator for jumbo receive buffers.

		Add a new character device interface for the ti(4)
		driver for the new debugging interface.  This allows
		(a patched version of) gdb to talk to the Tigon board
		and debug the firmware.  There are also a few additional
		debugging ioctls available through this interface.

		Add header splitting support to the ti(4) driver.

		Tweak some of the default interrupt coalescing
		parameters to more useful defaults.

		Add hooks for supporting transmit flow control, but
		leave it turned off with a comment describing why it
		is turned off.

if_tireg.h:	Change the firmware rev to 12.4.11, since we're really
		at 12.4.11 plus fixes from 12.4.13.

		Add defines needed for debugging.

		Remove the ti_stats structure, it is now defined in
		sys/tiio.h.

ti_fw.h:	12.4.11 firmware.

ti_fw2.h:	12.4.11 firmware, plus selected fixes from 12.4.13,
		and my header splitting patches.  Revision 12.4.13
		doesn't handle 10/100 negotiation properly.  (This
		firmware is the same as what was in the tree previously,
		with the addition of header splitting support.)

sys/jumbo.h:	Jumbo buffer allocator interface.

sys/mbuf.h:	Add a new external mbuf type, EXT_DISPOSABLE, to
		indicate that the payload buffer can be thrown away /
		flipped to a userland process.

socketvar.h:	Add prototype for socow_setup.

tiio.h:		ioctl interface to the character portion of the ti(4)
		driver, plus associated structure/type definitions.

uio.h:		Change prototype for uiomoveco() so that we'll know
		whether the source page is disposable.

ufs_readwrite.c:Update for new prototype of uiomoveco().

vm_fault.c:	In vm_fault(), check to see whether we need to do a page
		based copy on write fault.

vm_object.c:	Add a new function, vm_object_allocate_wait().  This
		does the same thing that vm_object_allocate() does, except
		that it gives the caller the opportunity to specify whether
		it should wait on the uma_zalloc() of the object structure.

		This allows vm objects to be allocated while holding a
		mutex.  (Without generating WITNESS warnings.)

		vm_object_allocate() is implemented as a call to
		vm_object_allocate_wait() with the malloc flag set to
		M_WAITOK.

vm_object.h:	Add prototype for vm_object_allocate_wait().

vm_page.c:	Add page-based copy on write setup, clear and fault
		routines.

vm_page.h:	Add page based COW function prototypes and variable in
		the vm_page structure.

Many thanks to Drew Gallatin, who wrote the zero copy send and receive
code, and to all the other folks who have tested and reviewed this code
over the years.
2002-06-26 03:37:47 +00:00
Ian Dowse
6395da5437 Complete the initial set of VM changes required to support full
64-bit file sizes. This step simply addresses the remaining overflows,
and does not attempt to optimise performance. The details are:

 o Use a 64-bit type for the vm_object `size' and the size argument
   to vm_object_allocate().
 o Use the correct type for index variables in dev_pager_getpages(),
   vm_object_page_clean() and vm_object_page_remove().
 o Avoid an overflow in the i386 pmap_object_init_pt().
2002-06-25 22:14:06 +00:00
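
The allocator prototype implied by the first item, with the 64-bit type
assumed from context:

    /* vm_pindex_t is a 64-bit page index, so object sizes no longer overflow. */
    vm_object_t vm_object_allocate(objtype_t type, vm_pindex_t size);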
Alan Cox
c5aaa06ded o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it
to vm_object_split().  Its interface should still be changed
   to resemble vm_object_shadow().
2002-06-02 23:54:09 +00:00
Alan Cox
e86256c1f4 o Move vm_freeze_copyopts() from vm_map.{c.h} to vm_object.{c,h}. It's plainly
an operation on a vm_object and belongs in the latter place.
2002-05-06 00:12:47 +00:00
Alan Cox
79660d837c o Make _vm_object_allocate() and vm_object_allocate() callable
without holding Giant.
 o Begin documenting the trivial cases of the locking protocol
   on vm_object.
2002-05-04 20:23:48 +00:00
Alan Cox
a569838764 Reintroduce locking on accesses to vm_object_list. 2002-04-20 07:23:22 +00:00
Matthew Dillon
245df27cee Implement kern.maxvnodes. Adjusting kern.maxvnodes now actually has a
real effect.

Optimize vfs_msync().  Avoid having to continually drop and re-obtain
mutexes when scanning the vnode list.  Improves looping case by 500%.

Optimize ffs_sync().  Avoid having to continually drop and re-obtain
mutexes when scanning the vnode list.  This makes a couple of assumptions,
which I believe are ok, in regards to vnode stability when the mount list
mutex is held.  Improves looping case by 500%.

(more optimization work is needed on top of these fixes)

MFC after:	1 week
2001-10-26 00:08:05 +00:00
Jake Burkholder
3a9b5daf48 Oops. Last commit to vm_object.c should have got these files too.
Remove the use of atomic ops to manipulate vm_object and vm_page flags.
Giant is required here, so they are superfluous.

Discussed with:	dillon
2001-07-31 04:09:52 +00:00
Matthew Dillon
1b40f8c036 Change inlines back into mainline code in preparation for mutexing. Also,
most of these inlines had been bloated in -current far beyond their
original intent.  Normalize prototypes and function declarations to be ANSI
only (half already were).  And do some general cleanup.

(kernel size also reduced by 50-100K, but that isn't the prime intent)
2001-07-04 20:15:18 +00:00
Matthew Dillon
0cddd8f023 With Alfred's permission, remove vm_mtx in favor of a fine-grained approach
(this commit is just the first stage).  Also add various GIANT_ macros to
formalize the removal of Giant, making it easy to test in a more piecemeal
fashion. These macros will allow us to test fine-grained locks to a degree
before removing Giant, and also after, and to remove Giant in a piecemeal
fashion via sysctl's on those subsystems which the authors believe can
operate without Giant.
2001-07-04 16:20:28 +00:00
Alfred Perlstein
2395531439 Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level
vm operations.

faults can not be taken without holding Giant.

Memory subsystems can now call the base page allocators safely.

Almost all atomic ops were removed as they are covered under the
vm mutex.

Alpha and ia64 now need to catch up to i386's trap handlers.

FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).

Reviewed (partially) by: jake, jhb
2001-05-19 01:28:09 +00:00
Alfred Perlstein
cc64b484dd use TAILQ_FOREACH, fix a comment's location 2001-04-15 10:22:04 +00:00
David E. O'Brien
ee2fd03885 Make the arguments match the functionality of the functions. 2000-08-26 04:51:39 +00:00
Peter Wemm
bb663856f8 Minor cleanups:
- remove unused variables (fix warnings)
 - use a more consistent ANSI style rather than a mixture
 - remove dead #if 0 code and declarations
2000-07-28 22:03:08 +00:00
Matthew Dillon
8b03c8ed5e This is a cleanup patch to Peter's new OBJT_PHYS VM object type
and sysv shared memory support for it.  It implements a new
    PG_UNMANAGED flag that has slightly different characteristics
    from PG_FICTITIOUS.

    A new sysctl, kern.ipc.shm_use_phys has been added to enable the
    use of physically-backed sysv shared memory rather than swap-backed.
    Physically backed shm segments are not tracked with PV entries,
    allowing programs which use a large shm segment as a rendezvous
    point to operate without eating an insane amount of KVM in the
    PV entry management.  Read: Oracle.

    Peter's OBJT_PHYS object will also allow us to eventually implement
    page-table sharing and/or 4MB physical page support for such segments.
    We're half way there.
2000-05-29 22:40:54 +00:00
Jake Burkholder
e39756439c Back out the previous change to the queue(3) interface.
It was not discussed and should probably not happen.

Requested by:		msmith and others
2000-05-26 02:09:24 +00:00
Jake Burkholder
740a1973a6 Change the way that the queue(3) structures are declared; don't assume that
the type argument to *_HEAD and *_ENTRY is a struct.

Suggested by:	phk
Reviewed by:	phk
Approved by:	mdodd
2000-05-23 20:41:01 +00:00
Peter Wemm
249645144d Checkpoint of a new physical memory backed object type, that does not
have pv_entries.  This is intended for very special circumstances,
eg: a certain database that has a 1GB shm segment mapped into 300
processes.  That would consume 2GB of kvm just to hold the pv_entries
alone.  This would not be used on systems unless the physical ram was
available, as it's not pageable.

This is a work-in-progress, but is a useful and functional checkpoint.
Matt has got some more fixes for it that will be committed soon.

Reviewed by:	dillon
2000-05-21 13:41:29 +00:00
Peter Wemm
c447342094 Change #ifdef KERNEL to #ifdef _KERNEL in the public headers. "KERNEL"
is an application space macro and the applications are supposed to be free
to use it as they please (but cannot).  This is consistent with the other
BSDs, which made this change quite some time ago.  More commits to come.
1999-12-29 05:07:58 +00:00