Commit Graph

1154 Commits

Author SHA1 Message Date
Alan Cox
7788e21963 o Revert vm_fault1() to its original name vm_fault(), eliminating the wrapper
that took its place for the purposes of acquiring and releasing Giant.
2002-04-30 03:44:34 +00:00
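
The wrapper being removed followed the usual Giant-bracketing shape; a minimal sketch reconstructed from the log message (not the committed code):

int
vm_fault(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type, int fault_flags)
{
        int result;

        /* Take Giant, defer the real work to vm_fault1(), release. */
        mtx_lock(&Giant);
        result = vm_fault1(map, vaddr, fault_type, fault_flags);
        mtx_unlock(&Giant);
        return (result);
}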
Jeff Roberson
28bc44195c Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own
mutex class.  Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory.  This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.
2002-04-29 23:45:41 +00:00
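
A sketch of how the flag is passed at zone creation, assuming the uma_zcreate() argument list introduced with UMA (the exact flag combination here is illustrative):

kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry),
    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_MTXCLASS);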
Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
2002-04-29 07:43:16 +00:00
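
The kernel_pmap change replaces a run-time lookup with a link-time constant; a sketch of the pattern (names from the log):

/* Before: resolved at run time. */
/* pmap_t pmap_kernel(void); */

/* After: ld can resolve the address at compile/link time. */
extern struct pmap kernel_pmap_store;
#define kernel_pmap (&kernel_pmap_store)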
Alan Cox
532eadef77 Document three synchronization issues in vm_fault(). 2002-04-29 05:23:01 +00:00
Alan Cox
780b1c0997 Pass the caller's file name and line number to the vm_map locking functions. 2002-04-28 23:12:52 +00:00
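
One common shape for threading the caller's file and line through a locking API, sketched with a hypothetical _vm_map_lock() helper:

void _vm_map_lock(vm_map_t map, const char *file, int line);
#define vm_map_lock(map) _vm_map_lock((map), __FILE__, __LINE__)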
Alan Cox
d974f03c69 o Introduce and use vm_map_trylock() to replace several direct uses
of lockmgr().
 o Add missing synchronization to vmspace_swap_count(): Obtain a read lock
   on the vm_map before traversing it.
2002-04-28 06:07:54 +00:00
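
A sketch of a trylock wrapper over lockmgr(), assuming the lockmgr() interface of this era (an approximation, not the committed code):

int
vm_map_trylock(vm_map_t map)
{
        /* LK_NOWAIT makes lockmgr() fail instead of sleeping. */
        return (lockmgr(&map->lock, LK_EXCLUSIVE | LK_NOWAIT, NULL,
            curthread) == 0);
}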
Peter Wemm
44e74ba6c3 We do not necessarily need to map/unmap pages to zero parts of them.
On systems where physical memory is also direct mapped (alpha, sparc,
ia64, etc.) this is slightly harmful.
2002-04-28 00:15:48 +00:00
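
On a direct-mapped machine the page can be zeroed through the permanent mapping, skipping the transient map/unmap; a sketch using a hypothetical PHYS_TO_DIRECT() macro:

void
pmap_zero_page(vm_page_t m)
{
        /* No pmap_kenter()/pmap_kremove() round trip is needed. */
        bzero((void *)PHYS_TO_DIRECT(VM_PAGE_TO_PHYS(m)), PAGE_SIZE);
}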
Alan Cox
089b073345 o Begin documenting the (existing) locking protocol on the vm_map
in the same style as sys/proc.h.
 o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
   (Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
   revision 1.206, de-inlining these methods increased the kernel's size.)
2002-04-27 22:01:37 +00:00
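
The sys/proc.h style tags each structure field with the lock that protects it; a sketch of the convention (fields illustrative):

/*
 * Locking key:
 *      (c) const after init
 *      (v) protected by the map lock
 */
struct vm_map {
        pmap_t pmap;            /* (c) physical map */
        int nentries;           /* (v) number of entries */
};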
Alan Cox
cbd53e95fe o Control access to the vm_page_buckets with a mutex.
o Fix some style(9) bugs.
2002-04-26 22:44:15 +00:00
Andrew R. Reiter
d4d6aee5a0 - Fix a round-down bogon in uma_zone_set_max().
Submitted by: jeff@
2002-04-25 06:24:40 +00:00
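
The bug class here is truncating division where a round-up is wanted; a sketch of the fix pattern (illustrative, not the committed diff):

/* Wrong: items that do not fill a whole page are rounded away. */
pages = nitems / items_per_page;

/* Right: round up so the cap is never lower than requested. */
pages = (nitems + items_per_page - 1) / items_per_page;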
Alan Cox
a569838764 Reintroduce locking on accesses to vm_object_list. 2002-04-20 07:23:22 +00:00
Alan Cox
92de35b0ce o Move the acquisition of Giant from vm_fault() to the point
after initialization in vm_fault1().
 o Fix some style problems in vm_fault1().
2002-04-19 04:20:31 +00:00
Alan Cox
ff8f4ebe22 Add a comment documenting a race condition in vm_fault(): Specifically, a
modification is made to the vm_map while only a read lock is held.
2002-04-18 03:55:50 +00:00
Alan Cox
6139043b1f o Call vm_map_growstack() from vm_fault() if vm_map_lookup() has failed
due to conditions that suggest the possible need for stack growth.
   This has two beneficial effects: (1) we can
   now remove calls to vm_map_growstack() from the MD trap handlers and (2)
   simple page faults are faster because we no longer unnecessarily perform
   vm_map_growstack() on every page fault.
 o Remove vm_map_growstack() from the i386's trap_pfault().
 o Remove the acquisition and release of Giant from i386's trap_pfault().
   (vm_fault() still acquires it.)
2002-04-18 03:28:27 +00:00
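
A sketch of the retry shape this gives vm_fault() (labels and exact conditions illustrative):

RetryFault:
        result = vm_map_lookup(&map, vaddr, fault_type, &entry, ...);
        if (result != KERN_SUCCESS) {
                if (growstack && result == KERN_INVALID_ADDRESS) {
                        if (vm_map_growstack(curproc, vaddr) != KERN_SUCCESS)
                                return (KERN_FAILURE);
                        growstack = 0;
                        goto RetryFault;
                }
                return (result);
        }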
Peter Wemm
334f706177 Do not free the vmspace until p->p_vmspace is set to null. Otherwise
statclock can access it in the tail end of statclock_process() at an
unfortunate time.  This bit me several times on an SMP alpha (UP2000)
and the problem went away with this change.  I'm not sure why it doesn't
break x86 as well.  Maybe it's because the clocks are much faster
on alpha (HZ=1024 by default).
2002-04-17 05:26:42 +00:00
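
The fix is purely an ordering change: publish the NULL before reclaiming, so statclock_process() sees either a valid vmspace or none; a sketch:

        struct vmspace *vm = p->p_vmspace;

        p->p_vmspace = NULL;    /* statclock must not see a dying vmspace */
        vmspace_free(vm);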
Alan Cox
b208d0633f Remove an unused option, VM_FAULT_HOLD, to vm_fault(). 2002-04-17 02:23:57 +00:00
Peter Wemm
1a87a0da66 Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]()
and pmap_copy_page().  This gets rid of a couple more physical addresses
in upper layers, with the eventual aim of supporting PAE and dealing with
the physical addressing mostly within pmap.  (We will need either 64 bit
physical addresses or page indexes, possibly both depending on the
circumstances.  Leaving this to pmap itself gives more flexibility.)

Reviewed by:	jake
Tested on:	i386, ia64 and (I believe) sparc64. (my alpha was hosed)
2002-04-15 16:00:03 +00:00
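
The interface change, sketched as old versus new prototypes (per the log message):

/* Before: upper layers handed pmap raw physical addresses. */
void pmap_zero_page(vm_offset_t pa);
void pmap_copy_page(vm_offset_t src, vm_offset_t dst);

/* After: pmap derives the physical address itself. */
void pmap_zero_page(vm_page_t m);
void pmap_copy_page(vm_page_t src, vm_page_t dst);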
Jeff Roberson
5300d9dda2 Fix a witness warning when expanding a hash table. We were allocating the new
hash while holding the lock on a zone.  Fix this by doing the allocation
separately from the actual hash expansion.

The lock is dropped before the allocation and reacquired before the expansion.
The expansion code checks to see if we lost the race and frees the new hash
if we do.  In practice we will never lose this race because the hash expansion
is single-threaded via the timeout mechanism.
2002-04-14 13:47:10 +00:00
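
The drop-allocate-relock-recheck pattern described above, in sketch form (helper and field names are illustrative):

        ZONE_UNLOCK(zone);
        newhash = hash_alloc(newsize);  /* may sleep; zone lock not held */
        ZONE_LOCK(zone);
        if (zone->uz_hashsize >= newsize) {
                /* Lost the race: an expansion already happened. */
                ZONE_UNLOCK(zone);
                hash_free(newhash);
                ZONE_LOCK(zone);
        } else
                hash_expand(zone, newhash);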
Jeff Roberson
0da47b2fc6 Protect the initial list traversal in sysctl_vm_zone() with the uma_mtx. 2002-04-14 12:39:38 +00:00
Jeff Roberson
af7f9b97b6 Fix the calculation that determines uz_maxpages. It was off for large zones.
Fortunately we have no large zones with maximums specified yet, so it wasn't
breaking anything.

Implement blocking when a zone exceeds the maximum and M_WAITOK is specified.
Previously this just failed like the old zone allocator did.  The old zone
allocator didn't support WAITOK/NOWAIT, though, so we should do what we
advertise.

While I was in there I cleaned up some more zalloc logic to further simplify
that code path and reduce redundant code.  This was needed to make the blocking
work properly anyway.
2002-04-14 01:56:25 +00:00
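
A sketch of the M_WAITOK blocking described above (names from UMA; treat as an approximation):

        while (zone->uz_pages >= zone->uz_maxpages) {
                if ((flags & M_WAITOK) == 0)
                        return (NULL);  /* M_NOWAIT: fail as before */
                /* Sleep until a free makes room, then recheck the limit. */
                msleep(zone, &zone->uz_lock, PVM, "zonelimit", 0);
        }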
Jeff Roberson
bce9779110 Remember to unlock the zone if the fill count is too high.
Pointed out by:	pete, jake, jhb
2002-04-10 01:52:50 +00:00
Jeff Roberson
1d4cb54ba8 Quiet witness warnings about acquiring several zone locks. In the cases where
this happens it is OK.
2002-04-08 21:08:17 +00:00
Jeff Roberson
86bbae32f4 Add a mechanism to disable buckets when the v_free_count drops below
v_free_min.  This should help performance in memory starved situations.
2002-04-08 06:20:34 +00:00
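
In sketch form, the check bypasses the per-cpu buckets when free pages are scarce, so freed items go straight back to the VM (illustrative):

        if (cnt.v_free_count < cnt.v_free_min)
                use_buckets = 0;        /* memory starved: no caching */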
Jeff Roberson
605cbd6a08 Don't release the zone lock until after the dtor has been called. As far as I
can tell this could not have caused any problems yet because UMA is still
called with Giant held.

Pointy hat to:	jeff
Noticed by:	jake
2002-04-08 05:13:48 +00:00
Jeff Roberson
9c2cd7e5a9 Implement uma_zdestroy(). Its prototype changed slightly. I decided that I
didn't like the wait argument and that if you were removing a zone it had
better be empty.

Also, I broke out part of hash_expand and made a separate hash_free() for use
in uma_zdestroy.
2002-04-08 04:48:58 +00:00
Jeff Roberson
a553d4b8eb Rework most of the bucket allocation and free code so that per cpu locks are
never held across blocking operations.  Also, fix two other lock order
reversals that were exposed by jhb's witness change.

The free path previously had a bug that would cause it to skip the free bucket
list in some cases and go straight to allocating a new bucket.  This has been
fixed as well.

These changes made the bucket handling code much cleaner and removed quite a
few lock operations.  This should be marginally faster now.

It is now possible to call malloc w/o Giant and avoid any witness warnings.
This still isn't entirely safe though because malloc_type statistics are not
protected by any lock.
2002-04-08 02:42:55 +00:00
Jeff Roberson
c235bfa551 Spelling correction; s/seperate/separate/g
Submitted by:	eric
2002-04-07 22:56:48 +00:00
Jeff Roberson
fedfeee018 There should be no remaining references to these two files in the tree. If
there are, it is an error.  vm_zone has been superseded by uma.
2002-04-07 22:51:18 +00:00
Jeff Roberson
d0b06acbe1 This fixes a bug where isitem never got set to 1 if a certain chain of events
relating to extreme low-memory situations occurred.  This was only ever seen on
the port build cluster, so many thanks to kris for helping me debug this.

Tested by:	kris
2002-04-07 22:47:36 +00:00
Alan Cox
aa4d062142 o Eliminate the use of grow_stack() and useracc() from sendsig(), osendsig(),
and osf1_sendsig().
 o Eliminate the prototype for the MD grow_stack() now that it has been removed
   from all platforms.
2002-04-05 00:52:15 +00:00
Matthew Dillon
80f5c8bf42 Embed a struct vmmeter in the per-cpu structure and add a macro,
PCPU_LAZY_INC(), which increments elements in it for cases where we
can afford the occasional inaccuracy.  Use of per-cpu stats counters
avoids significant cache stalls in various critical paths that would
otherwise severely limit our cpu scalability.

Adjust all sysctl's accessing cnt.* elements to now use a procedure
which aggregates the requested field for all cpus and for the global
vmmeter.

The global vmmeter is retained, since some stats counters, like v_free_min,
cannot be made per-cpu.  Also, this allows us to convert counters from
the global vmmeter to the per-cpu vmmeter in a piecemeal fashion, so
have at it!
2002-04-04 21:38:47 +00:00
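
One plausible rendering of the macro and a call site (the real definition may differ in detail):

/* No atomic op: an occasional lost update is fine for statistics. */
#define PCPU_LAZY_INC(member)   (++*PCPU_PTR(member))

/* e.g. in the fault path: */
PCPU_LAZY_INC(cnt.v_vm_faults);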
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
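
How the calls change shape, sketched (the new third argument is the witness lock-type name):

/* Most locks have no shared type, so they pass NULL. */
mtx_init(&sc->sc_mtx, "mydev", NULL, MTX_DEF);

/* Network drivers share one class via MTX_NETWORK_LOCK. */
mtx_init(&sc->sc_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF);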
Jake Burkholder
48f9a59443 Fix a long-standing 32bit-ism. Don't assume that the size of a chunk of
memory in phys_avail will fit in 'int', use vm_size_t.  This fixes booting
on sparc64 machines with more than 2 gigs of ram.

Thanks to Jan Chrillesen for providing me with access to a 4 gig machine.
2002-04-03 06:57:52 +00:00
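
The 32bit-ism in sketch form: with more than 2 gigs in one chunk the difference no longer fits in a signed int (illustrative):

/* Before: overflows once a chunk exceeds 2^31 - 1 bytes. */
int size = phys_avail[i + 1] - phys_avail[i];

/* After: vm_size_t is wide enough for any physical chunk. */
vm_size_t size = phys_avail[i + 1] - phys_avail[i];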
Alfred Perlstein
157d7b3538 fix comment typo, s/neccisary/necessary/g 2002-04-02 21:25:12 +00:00
John Baldwin
44731cab3b Change the suser() API to take advantage of td_ucred as well as do a
general cleanup of the API.  The entire API now consists of two functions
similar to the pre-KSE API.  The suser() function takes a thread pointer
as its only argument.  The td_ucred member of this thread must be valid
so the only valid thread pointers are curthread and a few kernel threads
such as thread0.  The suser_cred() function takes a pointer to a struct
ucred as its first argument and an integer flag as its second argument.
The flag is currently only used for the PRISON_ROOT flag.

Discussed on:	smp@
2002-04-01 21:31:13 +00:00
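
The resulting two-function API, sketched from the log message:

int     suser(struct thread *td);               /* td_ucred must be valid */
int     suser_cred(struct ucred *cred, int flag); /* flag: e.g. PRISON_ROOT */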
Jeff Roberson
f22a4b62f5 Add a new mtx_init option "MTX_DUPOK" which allows duplicate acquires of locks
with this flag.  Remove the dup_list and dup_ok code from subr_witness.  Now
we just check for the flag instead of doing string compares.

Also, switch the process lock, process group lock, and uma per cpu locks over
to this interface.  The original mechanism did not work well for uma because
per cpu lock names are unique to each zone.

Approved by:	jhb
2002-03-27 09:23:41 +00:00
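
A usage sketch (variable names hypothetical): locks inited with the same type string share one witness class, and MTX_DUPOK lets two members of that class be held at once without a warning:

mtx_init(&pcpu_lock, zone->uz_name, "UMA pcpu", MTX_DEF | MTX_DUPOK);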
Alan Cox
433b72aa12 Remove an unused prototype. 2002-03-26 05:30:59 +00:00
Jeff Roberson
f4af24d55d Reset the cachefree statistics after draining the cache. This fixes a bug
where a sysctl within 20 seconds of a cache_drain could yield negative "USED"
counts.

Also, grab the uma_mtx while in the sysctl handler.  This hadn't caused
problems yet because Giant is held all the time.

Reported by:	kkenn
2002-03-24 10:56:11 +00:00
Jeff Roberson
736ee5907f Add uma_zone_set_max() to add enforced limits to non vm obj backed zones. 2002-03-20 05:28:34 +00:00
Jeff Roberson
670d17b5c0 Remove references to vm_zone.h and switch over to the new uma API. 2002-03-20 04:02:59 +00:00
Alfred Perlstein
11caded34f Remove __P. 2002-03-19 22:20:14 +00:00
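
__P() was the K&R-compatibility prototype wrapper; its removal is mechanical, e.g.:

/* Before: */
void    vm_object_init __P((void));
/* After: */
void    vm_object_init(void);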
Jeff Roberson
9eb6e51923 Quiet a warning introduced by UMA. This only occurs on machines where
vm_size_t != unsigned long.

Reviewed by:	phk
2002-03-19 11:49:10 +00:00
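
The usual portable fix for such a format-string warning is an explicit cast (illustrative; not necessarily the committed change):

printf("%ld bytes\n", (long)size);      /* size is a vm_size_t */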
Peter Wemm
30171114b3 Fix a gcc-3.1+ warning.
warning: deprecated use of label at end of compound statement

i.e., you cannot do this anymore:
switch(foo) {
....

default:
}
2002-03-19 11:02:06 +00:00
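
The accepted form adds a statement (e.g. break;) after the final label:

switch(foo) {
....

default:
        break;
}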
Jeff Roberson
8355f576a9 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
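
Basic usage of the new allocator, sketched against the UMA API (zone name and item type illustrative):

zone = uma_zcreate("myitems", sizeof(struct myitem),
    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
item = uma_zalloc(zone, M_WAITOK);
/* ... use item ... */
uma_zfree(zone, item);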
Brian Feldman
25adb370be Back out the modification of vm_map locks from lockmgr to sx locks. The
best path forward now is likely to change the lockmgr locks to simple
sleep mutexes, then see if any extra contention it generates is greater
than removed overhead of managing local locking state information,
cost of extra calls into lockmgr, etc.

Additionally, making the vm_map lock a mutex and respecting it properly
will put us much closer to not needing Giant magic in vm.
2002-03-18 15:08:09 +00:00
Alan Cox
9f0567f557 Remove vm_object_count: It's unused, incorrectly maintained, and duplicates
information maintained by the zone allocator.
2002-03-17 18:37:37 +00:00
Alan Cox
5ee9fe6ba1 Undo part of revision 1.57: Now that (o)sendsig() doesn't call useracc(),
the motivation for saving and restoring the map->hint in useracc() is gone.
(The same tests that motivated this change in revision 1.57 now show that
there is no performance loss from removing it.)  This was really a hack and
some day we would have had to add new synchronization here on map->hint
to maintain it.
2002-03-17 07:01:42 +00:00
Alan Cox
2f6c16e1e8 Acquire a read lock on the map inside of vm_map_check_protection() rather
than expecting the caller to do so.  This (1) eliminates duplicated code in
kernacc() and useracc() and (2) fixes missing synchronization in munmap().
2002-03-17 03:19:31 +00:00
Jake Burkholder
ac59490b5e Convert all pmap_kenter/pmap_kremove pairs in MI code to use pmap_qenter/
pmap_qremove.  pmap_kenter is not safe to use in MI code because it is not
guaranteed to flush the mapping from the tlb on all cpus.  If the process
in question is preempted and migrates cpus between the call to pmap_kenter
and pmap_kremove, the original cpu will be left with stale mappings in its
tlb.  This is currently not a problem for i386 because we do not use PG_G on
SMP, and thus all mappings are flushed from the tlb on context switches, not
just user mappings.  This is not the case on all architectures, and if PG_G
is to be used with SMP on i386 it will be a problem.  This was committed by
peter earlier as part of his fine grained tlb shootdown work for i386, which
was backed out for other reasons.

Reviewed by:	peter
2002-03-17 00:56:41 +00:00
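
The conversion pattern, sketched: one pmap_qenter() call over the page run (which handles cross-cpu invalidation) replaces the per-page pairs:

/* Before: per-page, tlb flush on other cpus not guaranteed. */
for (i = 0; i < count; i++)
        pmap_kenter(va + i * PAGE_SIZE, VM_PAGE_TO_PHYS(m[i]));

/* After: safe in MI code. */
pmap_qenter(va, m, count);
/* ... */
pmap_qremove(va, count);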
Kirk McKusick
0d2af52141 Introduce the new 64-bit disk block address type, daddr64_t. Change
the bio and buffer structures to have daddr64_t bio_pblkno,
b_blkno, and b_lblkno fields which allows access to disks
larger than a Terabyte in size. This change also requires
that the VOP_BMAP vnode operation accept and return daddr64_t
blocks. This delta should not affect system operation in
any way. It merely sets up the necessary interfaces to allow
the development of disk drivers that work with these larger
disk block addresses. It also allows for the development of
UFS2 which will use 64-bit block addresses.
2002-03-15 18:49:47 +00:00
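
In sketch form, the new type and the widened fields (per the log; the typedef is presumably in sys/types.h):

typedef int64_t daddr64_t;              /* 64-bit disk block address */

struct buf {
        /* ... */
        daddr64_t b_blkno;              /* underlying physical block */
        daddr64_t b_lblkno;             /* logical block */
        /* ... */
};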