Commit Graph

206 Commits

John Baldwin
b9d3b80521 Convert a remaining !fs.map->system_map test to a
fs.first_object->flags & OBJ_NEEDGIANT test; this instance was missed in an
earlier revision.  This fixes mutex assertion failures in the
debug.mpsafevm=0 case.

Reported by:	ps
MFC after:	3 days
2005-07-14 21:18:07 +00:00
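
A minimal sketch of the conversion this describes, assuming vm_fault()'s
faultstate naming (fs) from the surrounding code:

    /* Before: any non-system map pessimistically took Giant. */
    if (!fs.map->system_map)
        mtx_lock(&Giant);

    /* After: Giant is taken only if the top-level object needs it. */
    if (fs.first_object->flags & OBJ_NEEDGIANT)
        mtx_lock(&Giant);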
Peter Grehan
10b00dd4f3 The final test in unlock_and_deallocate() to determine if GIANT needs to be
unlocked wasn't updated to check for OBJ_NEEDGIANT. This caused a WITNESS
panic when debug_mpsafevm was set to 0.

Approved by:	jeffr
2005-05-12 04:09:41 +00:00
Jeff Roberson
ed4fe4f4f5 - Add a new object flag "OBJ_NEEDSGIANT". We set this flag if the
underlying vnode requires Giant.
 - In vm_fault only acquire Giant if the underlying object has NEEDSGIANT
   set.
 - In vm_object_shadow inherit the NEEDSGIANT flag from the backing object.
2005-05-03 11:11:26 +00:00
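
A hedged sketch of setting and inheriting the flag (spelled OBJ_NEEDGIANT
in the later entries above; exact call sites assumed):

    /* Tag a vnode-backed object whose vnode still needs Giant. */
    vm_object_set_flag(object, OBJ_NEEDGIANT);

    /* In vm_object_shadow(): the shadow inherits the backing
     * object's requirement. */
    if (source != NULL && (source->flags & OBJ_NEEDGIANT) != 0)
        vm_object_set_flag(result, OBJ_NEEDGIANT);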
Jeff Roberson
ae51ff1127 - Remove GIANT_REQUIRED where Giant is no longer required.
- Use VFS_LOCK_GIANT() rather than directly acquiring Giant in places
   where Giant is only held because the VFS requires it.

Sponsored By:   Isilon Systems, Inc.
2005-01-24 10:48:29 +00:00
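
The VFS_LOCK_GIANT() idiom looks roughly like this (vp is a hypothetical
vnode reference held by the caller):

    int vfslocked;

    vfslocked = VFS_LOCK_GIANT(vp->v_mount); /* Giant only if this FS needs it */
    vput(vp);
    VFS_UNLOCK_GIANT(vfslocked);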
Warner Losh
60727d8b86 /* -> /*- for license, minor formatting changes 2005-01-07 02:29:27 +00:00
Alan Cox
a51b084059 Continue the transition from synchronizing access to the page's PG_BUSY
flag and busy field with the global page queues lock to synchronizing their
access with the containing object's lock.  Specifically, acquire the
containing object's lock before reading the page's PG_BUSY flag and busy
field in vm_fault().

Reviewed by: tegge@
2004-12-24 19:31:54 +00:00
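
A sketch of the new rule in vm_fault(), with fs.m standing for the page
under examination:

    VM_OBJECT_LOCK(fs.object);
    /* PG_BUSY and the busy field are read under the containing
     * object's lock instead of the global page queues lock. */
    if ((fs.m->flags & PG_BUSY) != 0 || fs.m->busy != 0) {
        /* Someone else holds the page; wait and retry the fault. */
    }
    VM_OBJECT_UNLOCK(fs.object);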
Alan Cox
1f70d62298 Modify pmap_enter_quick() so that it expects the page queues to be locked
on entry and it assumes the responsibility for releasing the page queues
lock if it must sleep.

Remove a bogus comment from pmap_enter_quick().

Using the first change, modify vm_map_pmap_enter() so that the page queues
lock is acquired and released once, rather than each time that a page
is mapped.
2004-12-23 20:16:11 +00:00
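
A sketch of the reworked vm_map_pmap_enter() loop under these assumptions
(period signature of pmap_enter_quick() with an mpte hint; per the message,
pmap_enter_quick() itself handles the lock around any sleep):

    vm_page_t p, mpte = NULL;

    vm_page_lock_queues();              /* taken once, not per page */
    TAILQ_FOREACH(p, &object->memq, listq) {
        if (p->pindex >= pindex && p->pindex < pindex + psize)
            mpte = pmap_enter_quick(map->pmap,
                start + ptoa(p->pindex - pindex), p, mpte);
    }
    vm_page_unlock_queues();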
Alan Cox
85f5b24573 In the common case, pmap_enter_quick() completes without sleeping.
In such cases, the busying of the page and the unlocking of the
containing object by vm_map_pmap_enter() and vm_fault_prefault() is
unnecessary overhead.  To eliminate this overhead, this change
modifies pmap_enter_quick() so that it expects the object to be locked
on entry and it assumes the responsibility for busying the page and
unlocking the object if it must sleep.  Note: alpha, amd64, i386 and
ia64 are the only implementations optimized by this change; arm,
powerpc, and sparc64 still conservatively busy the page and unlock the
object within every pmap_enter_quick() call.

Additionally, this change is the first case where we synchronize
access to the page's PG_BUSY flag and busy field using the containing
object's lock rather than the global page queues lock.  (Modifications
to the page's PG_BUSY flag and busy field have asserted both locks for
several weeks, enabling an incremental transition.)
2004-12-15 19:55:05 +00:00
Alan Cox
950d5f7a99 Remove unnecessary check for curthread == NULL. 2004-10-17 20:29:28 +00:00
Alan Cox
5e4bdb57cb System maps are prohibited from mapping vnode-backed objects. Take
advantage of this restriction to avoid acquiring and releasing Giant when
wiring pages within a system map.

In collaboration with: tegge@
2004-09-11 18:49:59 +00:00
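
A sketch of the resulting wiring path, under the stated invariant
(VM_FAULT_CHANGE_WIRING as in vm_fault()'s wiring interface):

    /* System maps never map vnode-backed objects, so wiring their
     * pages can never reach a vnode pager: Giant is unnecessary. */
    if (!map->system_map)
        mtx_lock(&Giant);
    rv = vm_fault(map, va, VM_PROT_NONE, VM_FAULT_CHANGE_WIRING);
    if (!map->system_map)
        mtx_unlock(&Giant);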
Alan Cox
94ddc7076d Push Giant deep into vm_forkproc(), acquiring it only if the process has
mapped System V shared memory segments (see shmfork_myhook()) or requires
the allocation of an ldt (see vm_fault_wire()).
2004-09-03 05:11:32 +00:00
Alan Cox
1a95d74419 In vm_fault_unwire() eliminate the acquisition and release of Giant in the
case of non-kernel pmaps.
2004-09-01 19:18:59 +00:00
Alan Cox
3268a1bf75 In the previous revision, I failed to condition an early release of Giant
in vm_fault() on debug_mpsafevm.  If debug_mpsafevm was not set, the result
was an assertion failure early in the boot process.

Reported by: green@
2004-08-22 00:08:43 +00:00
Alan Cox
b99e61353f Further reduce the use of Giant by vm_fault(): Giant is held only when
manipulating a vnode, e.g., calling vput().  This reduces contention for
Giant during many copy-on-write faults, resulting in some additional
speedup on SMPs.

Note: debug_mpsafevm must be enabled for this optimization to take effect.
2004-08-21 19:20:21 +00:00
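
The effect, sketched: Giant now brackets only the vnode manipulation
itself (fs.vp holding any vnode reference taken during the fault):

    if (fs.vp != NULL) {
        mtx_lock(&Giant);
        vput(fs.vp);        /* vnode operations still require Giant */
        mtx_unlock(&Giant);
        fs.vp = NULL;
    }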
Alan Cox
c1fbc251cd - Introduce and use a new tunable "debug.mpsafevm". At present, setting
   "debug.mpsafevm" results in (almost) Giant-free execution of zero-fill
   page faults.  (Giant is held only briefly, just long enough to determine
   if there is a vnode backing the faulting address.)

   Also, condition the acquisition and release of Giant around calls to
   pmap_remove() on "debug.mpsafevm".

   The effect on performance is significant.  On my dual Opteron, I see a
   3.6% reduction in "buildworld" time.

 - Use atomic operations to update several counters in vm_fault().
2004-08-16 06:16:12 +00:00
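
A sketch of the two pieces, assuming the conventional tunable/sysctl
boilerplate and the global vmmeter cnt:

    static int debug_mpsafevm;
    TUNABLE_INT("debug.mpsafevm", &debug_mpsafevm);
    SYSCTL_INT(_debug, OID_AUTO, mpsafevm, CTLFLAG_RD, &debug_mpsafevm, 0,
        "Enable/disable MPSAFE virtual memory support");

    /* Counters no longer rely on Giant for consistency: */
    atomic_add_int(&cnt.v_zfod, 1);     /* e.g., one more zero-fill fault */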
Tor Egge
19dc560756 The vm map lock is needed in vm_fault() after the page has been found,
to avoid later changes before pmap_enter() and vm_fault_prefault()
have completed.

Simplify deadlock avoidance by not blocking on vm map relookup.

In collaboration with: alc
2004-08-12 20:14:49 +00:00
Alan Cox
8673599662 Make two changes to vm_fault().
1. Move a comment to its proper place, updating it.  (Except for
   white-space, this comment had been unchanged since revision 1.1!)
2. Remove spl calls.
2004-08-09 18:46:39 +00:00
Alan Cox
eebf3286a6 Make two changes to vm_fault().
1. Retain the map lock until after the calls to pmap_enter() and
   vm_fault_prefault().
2. Remove a stale comment.

Submitted by:	tegge@
2004-08-09 06:01:46 +00:00
Alan Cox
4be14af9cf To date, unwiring a fictitious page has produced a panic.  The reason
is that PHYS_TO_VM_PAGE() returns the wrong vm_page for fictitious
pages, but unwiring uses PHYS_TO_VM_PAGE().  The resulting panic
reported an unexpected wired count.  Rather than attempting to fix
PHYS_TO_VM_PAGE(), this fix takes advantage of the properties of
fictitious pages.  Specifically, fictitious pages will never be
completely unwired.  Therefore, we can keep a fictitious page's wired
count forever set to one and thereby avoid the use of
PHYS_TO_VM_PAGE() when we know that we're working with a fictitious
page, just not which one.

In collaboration with: green@, tegge@
PR: kern/29915
2004-05-22 04:53:51 +00:00
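
A sketch of the unwiring path after this change (fictitious is a flag the
caller derives from the mapping; names assumed):

    vm_paddr_t pa;
    vm_page_t m;

    pa = pmap_extract(pmap, va);
    if (pa != 0 && !fictitious) {
        m = PHYS_TO_VM_PAGE(pa);    /* only trustworthy for real pages */
        vm_page_lock_queues();
        vm_page_unwire(m, 1);
        vm_page_unlock_queues();
    }
    /* Fictitious pages are skipped: their wire count stays at one. */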
Alan Cox
5a32489377 Make vm_page's PG_ZERO flag immutable between the time of the page's
allocation and deallocation.  This flag's principal use is shortly after
allocation.  For such cases, clearing the flag is pointless.  The only
unusual use of PG_ZERO is in vfs_bio_clrbuf().  However, allocbuf() never
requests a prezeroed page.  So, vfs_bio_clrbuf() never sees a prezeroed
page.

Reviewed by:	tegge@
2004-05-06 05:03:23 +00:00
Alan Cox
5d328ed44b - Make the acquisition of Giant in vm_fault_unwire() conditional on the
pmap.  For the kernel pmap, Giant is not required.  In general, for
   other pmaps, Giant is required by i386's pmap_pte() implementation.
   Specifically, the use of PMAP2/PADDR2 is synchronized by Giant.
   Note: In principle, updates to the kernel pmap's wired count could be
   lost without Giant.  However, in practice, we never use the kernel
   pmap's wired count.  This will be resolved when pmap locking appears.
 - With the above change, cpu_thread_clean() and uma_large_free() need
   not acquire Giant.  (The first case is simply the revival of
   i386/i386/vm_machdep.c's revision 1.226 by peter.)
2004-03-10 04:44:43 +00:00
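
A sketch of the conditional acquisition (pmap_change_wiring() standing in
for the wired-count update):

    if (pmap != kernel_pmap)
        mtx_lock(&Giant);       /* i386 pmap_pte()'s PMAP2/PADDR2 use is
                                   synchronized by Giant */
    pmap_change_wiring(pmap, va, FALSE);
    if (pmap != kernel_pmap)
        mtx_unlock(&Giant);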
Alan Cox
c6d9ef2e1f Correct a long-standing race condition in vm_fault() that could result in a
panic "vm_page_cache: caching a dirty page, ...": Access to the page must
be restricted or removed before calling vm_page_cache().  This race
condition is identical in nature to that which was addressed by
vm_pageout.c's revision 1.251 and vm_page.c's revision 1.275.

Reviewed by:	tegge
MFC after:	7 days
2004-02-15 00:42:26 +00:00
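
The required ordering, sketched:

    /* Revoke all access first; otherwise another CPU can dirty the
     * page between the cleanliness check and the caching. */
    pmap_remove_all(m);
    vm_page_cache(m);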
Alan Cox
bfee999d6a - Locking for the per-process resource limits structure has eliminated
the need for Giant in vm_map_growstack().
 - Use the proc * that is passed to vm_map_growstack() rather than
   curthread->td_proc.
2004-02-05 06:33:18 +00:00
Alan Cox
a976eb5e46 - Reduce Giant's scope in vm_fault().
- Use vm_object_reference_locked() instead of vm_object_reference()
   in vm_fault().
2003-12-26 23:33:37 +00:00
Jonathan Mini
8f101a2f31 NFC: Update stale comments.
Reviewed by:	alc
2003-11-10 00:44:00 +00:00
Alan Cox
c5b65a6723 - vm_fault_copy_entry() should not assume that the source object contains
every page.  If the source entry was read-only, one or more wired pages
   could be in backing objects.
 - vm_fault_copy_entry() should not set the PG_WRITEABLE flag on the page
   unless the destination entry is, in fact, writeable.
2003-10-15 08:00:45 +00:00
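
A hedged sketch of the second point (prot derived from the destination
entry; variable names assumed):

    prot = dst_entry->protection;
    pmap_enter(dst_map->pmap, vaddr, dst_m, prot, FALSE);
    vm_page_lock_queues();
    /* Only a writeable destination may leave mappings that can
     * dirty the page: */
    if ((prot & VM_PROT_WRITE) != 0)
        vm_page_flag_set(dst_m, PG_WRITEABLE);
    vm_page_unlock_queues();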
Alan Cox
8afcf0cc36 Lock the destination object in vm_fault_copy_entry(). 2003-10-08 07:11:19 +00:00
Alan Cox
669890eaeb Retire vm_page_copy(). Its reason for being ended when peter@ modified
pmap_copy_page() et al. to accept a vm_page_t rather than a physical
address.  Also, this change will facilitate locking access to the vm page's
valid field.
2003-10-08 05:35:12 +00:00
Alan Cox
cbfbaad8be Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 21:35:48 +00:00
Alan Cox
566526a957 Migrate pmap_prefault() into the machine-independent virtual memory layer.
A small helper function pmap_is_prefaultable() is added.  This function
encapsulates the few lines of pmap_prefault() that actually vary from
machine to machine.  Note: pmap_is_prefaultable() and pmap_mincore() have
much in common.  Going forward, it's worth considering their merger.
2003-10-03 22:46:53 +00:00
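
A sketch of the MI prefault loop using the new helper (the lobject/pindex
walk is elided; pmap_enter_quick() in its period form with an mpte hint):

    if (pmap_is_prefaultable(pmap, addr)) {
        m = vm_page_lookup(lobject, pindex);
        if (m != NULL && (m->flags & (PG_BUSY | PG_FICTITIOUS)) == 0)
            mpte = pmap_enter_quick(pmap, addr, m, mpte);
    }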
Alan Cox
417a26a154 Add vm object locking to vnode_pager_lock(). (This triggers the movement
of a VM_OBJECT_LOCK() in vm_fault().)
2003-09-18 02:26:03 +00:00
Alan Cox
8d8b9c6e70 To implement the sequential access optimization, vm_fault() may need to
reacquire the "first" object's lock while a backing object's lock is held.
Since this is a lock-order reversal, vm_fault() uses trylock to acquire
the first object's lock, skipping the sequential access optimization in
the unlikely event that the trylock fails.
2003-08-23 06:52:32 +00:00
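
Sketched, with the optimization body elided:

    /* We hold a backing object's lock; taking the first object's lock
     * here would reverse the usual order, so only try it. */
    if (fs.object != fs.first_object) {
        if (!VM_OBJECT_TRYLOCK(fs.first_object))
            goto no_readahead;      /* hypothetical label: skip the
                                       sequential-access optimization */
        /* ... consult/update the first object's access hints ... */
        VM_OBJECT_UNLOCK(fs.first_object);
    }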
Alan Cox
f29ba63ec9 Maintain a lock on the vm object of interest throughout vm_fault(),
releasing the lock only if we are about to sleep (e.g., vm_pager_get_pages()
or vm_pager_has_pages()).  If we sleep, we have marked the vm object with
the paging-in-progress flag.
2003-06-22 21:35:41 +00:00
Alan Cox
c8567c3a77 As vm_fault() descends the chain of backing objects, set paging-in-
progress on the next object before clearing it on the current object.
2003-06-22 05:36:53 +00:00
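
The handoff, sketched; it guarantees that some object in the chain is
always marked paging-in-progress:

    next_object = fs.object->backing_object;
    if (next_object != NULL) {
        VM_OBJECT_LOCK(next_object);
        vm_object_pip_add(next_object, 1);      /* set on the next ... */
        if (fs.object != fs.first_object)
            vm_object_pip_wakeup(fs.object);    /* ... before clearing here */
        VM_OBJECT_UNLOCK(fs.object);
        fs.object = next_object;
    }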
Alan Cox
d98ddc4615 Make some style and white-space changes to the copy-on-write path through
vm_fault(); remove a pointless assignment statement from that path.
2003-06-22 00:00:11 +00:00
Alan Cox
ebf7512532 Lock one of the vm objects involved in an optimized copy-on-write fault. 2003-06-21 06:31:42 +00:00
Alan Cox
e50346b5e0 The so-called "optimized copy-on-write fault" case should not require
the vm map lock.  What's really needed is vm object locking, which
is (for the moment) provided by Giant.

Reviewed by:	tegge
2003-06-20 04:20:36 +00:00
Alan Cox
d18e8afe99 Fix a vm object reference leak in the page-based copy-on-write mechanism
used by the zero-copy sockets implementation.

Reviewed by:	gallatin
2003-06-19 01:40:44 +00:00
David E. O'Brien
874651b13c Use __FBSDID(). 2003-06-11 23:50:51 +00:00
John Baldwin
eeec6bab2e Prefer the proc lock to sched_lock when testing PS_INMEM now that it is
safe to do so.
2003-04-22 20:01:56 +00:00
Alan Cox
b009d5a0af - Lock the vm_object when performing vm_object_pip_wakeup(). 2003-04-20 19:25:28 +00:00
Alan Cox
d22bc7101c - Lock the vm_object when performing vm_object_pip_add(). 2003-04-20 03:41:21 +00:00
Jake Burkholder
227f9a1c58 - Add vm_paddr_t, a physical address type. This is required for systems
where physical addresses are larger than virtual addresses, such as i386s
  with PAE.
- Use this to represent physical addresses in the MI vm system and in the
  i386 pmap code.  This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
  detection code, and to cope with kvtop returning vm_paddr_t instead of u_long.

Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.

Sponsored by:	DARPA, Network Associates Laboratories
Discussed with:	re, phk (cdevsw change)
2003-03-25 00:07:06 +00:00
Kenneth D. Merry
9b80d344ec Zero copy send and receive fixes:
- On receive, vm_map_lookup() needs to trigger the creation of a shadow
  object.  To make that happen, call vm_map_lookup() with PROT_WRITE
  instead of PROT_READ in vm_pgmoveco().

- On send, a shadow object will be created by the vm_map_lookup() in
  vm_fault(), but vm_page_cowfault() will delete the original page from
  the backing object rather than simply letting the legacy COW mechanism
  take over.  In other words, the new page should be added to the shadow
  object rather than replacing the old page in the backing object.  (i.e.
  vm_page_cowfault() should not be called in this case.)  We accomplish
  this by making sure fs.object == fs.first_object before calling
  vm_page_cowfault() in vm_fault().

Submitted by:	gallatin, alc
Tested by:	ken
2003-03-08 06:58:18 +00:00
Alan Cox
09c80124a3 Remove ENABLE_VFS_IOOPT. It is a long unfinished work-in-progress.
Discussed on:	arch@
2003-03-06 03:41:02 +00:00
Matthew Dillon
e3669cee72 Merge all the various copies of vm_fault_quick() into a single
portable copy.
2003-01-16 00:02:21 +00:00
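
The portable version is essentially a byte probe through fubyte()/subyte();
a sketch:

    int
    vm_fault_quick(caddr_t v, int prot)
    {
        int r;

        if (prot & VM_PROT_WRITE)
            r = subyte(v, fubyte(v));   /* force a write fault */
        else
            r = fubyte(v);              /* force a read fault */
        return (r);
    }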
Alan Cox
1761f1829d vm_fault_copy_entry() needn't clear PG_ZERO because it didn't pass
VM_ALLOC_ZERO to vm_page_alloc().
2003-01-12 07:33:16 +00:00
Alan Cox
a28cc55e5b Reduce the number of times that we acquire and release the page queues
lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
responsible for acquiring it.
2002-12-29 07:17:06 +00:00
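
The caller-side pattern after this change (m, dst_object, and dst_pindex
are illustrative):

    vm_page_lock_queues();      /* caller provides the lock, so a batch
                                   of renames pays for it only once */
    vm_page_rename(m, dst_object, dst_pindex);
    vm_page_unlock_queues();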
Alan Cox
82ea080d88 - Hold the page queues lock around calls to vm_page_flag_clear(). 2002-12-24 19:02:03 +00:00
Alan Cox
9a96b6382a - Hold the page queues lock when performing vm_page_busy() or
vm_page_flag_set().
 - Replace vm_page_sleep_busy() with proper page queues locking
   and vm_page_sleep_if_busy().
2002-12-19 01:20:24 +00:00
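
Sketches of both rules (m and the "vmpfw" wait message are illustrative):

    /* Busying a page or touching its flags now requires the queues lock: */
    vm_page_lock_queues();
    vm_page_busy(m);
    vm_page_flag_set(m, PG_REFERENCED);
    vm_page_unlock_queues();

    /* Waiting on a busy page uses the locking-aware variant: */
    if (vm_page_sleep_if_busy(m, TRUE, "vmpfw")) {
        /* The page was busy: we slept, locks were dropped, and the
         * fault must be retried from the top. */
    }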