Commit Graph

1546 Commits

Author SHA1 Message Date
Poul-Henning Kamp
3ccbf2d533 Use correct _VM_SWAP_PAGER_H_ to check for multiple inclusion. 2003-01-03 14:22:52 +00:00
Poul-Henning Kamp
69fd75d094 Retire sys/dmap.h by including the two lines of it which matter
directly in vm/vm_swap.c.
2003-01-03 09:55:05 +00:00
Alan Cox
a6864937e2 Lock the vm object when performing vm_object_clear_flag(). 2003-01-03 09:15:43 +00:00
Poul-Henning Kamp
862702306b Convert calls to BUF_STRATEGY to VOP_STRATEGY calls. This is a no-op since
all BUF_STRATEGY did in the first place was call VOP_STRATEGY.
2003-01-03 06:32:15 +00:00
Alan Cox
49247edca6 Add vm map and vm object locking to vmtotal(). 2003-01-03 05:52:02 +00:00
Alan Cox
81e4e48d24 Lock the vm object when performing vm_object_clear_flag(). 2003-01-02 09:09:27 +00:00
Alan Cox
d61e1287a4 Update the assertions in vm_page_insert() and vm_page_lookup() to reflect
locking of the kmem_object.
2003-01-01 19:45:36 +00:00
Jens Schweikhardt
9d5abbddbf Correct typos, mostly s/ a / an / where appropriate. Some whitespace cleanup,
especially in troff files.
2003-01-01 18:49:04 +00:00
Alan Cox
ea0081b61e Add a needed #include.
Reported by:	ia64 tinderbox
2003-01-01 00:13:01 +00:00
Alan Cox
36daaecd04 Implement a variant locking scheme for vm maps: Access to system maps
is now synchronized by a mutex, whereas access to user maps is still
synchronized by a lockmgr()-based lock.  Why?  No single type of lock,
including sx locks, meets the requirements of both types of vm map.
Sometimes we sleep while holding the lock on a user map.  Thus, a
mutex isn't appropriate.  On the other hand, both lockmgr()-based
and sx locks release Giant when a thread/process blocks during
contention for a lock.  This could lead to a race condition in a legacy
driver (that relies on Giant for synchronization) if it attempts to
kmem_malloc() and fails to immediately obtain the lock.  Fortunately,
we never sleep while holding a system map lock.
2002-12-31 19:38:04 +00:00
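
A condensed sketch of the resulting lock routine (simplified from the
vm_map.c of this change; details trimmed):

    void
    _vm_map_lock(vm_map_t map)
    {
            if (map->system_map)
                    /* System maps never sleep while held, and blocking
                     * on a mutex does not release Giant. */
                    mtx_lock(&map->system_mtx);
            else
                    /* User maps may sleep while held; keep the
                     * sleepable lockmgr() lock. */
                    lockmgr(&map->lock, LK_EXCLUSIVE, NULL, curthread);
            map->timestamp++;
    }
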
Alan Cox
c9267356b7 - Mark the kernel_map as a system map immediately after its creation.
- Correct a cast.
2002-12-30 05:55:41 +00:00
Alan Cox
3a92e5d5e9 - Increment the vm_map's timestamp if _vm_map_trylock() succeeds.
- Introduce map_sleep_mtx and use it to replace Giant in
   vm_map_unlock_and_wait() and vm_map_wakeup().  (Original
   version by: tegge.)
2002-12-30 00:41:33 +00:00
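
Roughly, the sleep/wakeup pair now interlocks on the new mutex instead
of Giant (sketch; wait-message string illustrative):

    static struct mtx map_sleep_mtx;

    void
    vm_map_unlock_and_wait(vm_map_t map)
    {
            mtx_lock(&map_sleep_mtx);
            vm_map_unlock(map);
            /* PDROP releases map_sleep_mtx for the duration of the sleep. */
            msleep(map, &map_sleep_mtx, PDROP | PVM, "vmmaps", 0);
    }

    void
    vm_map_wakeup(vm_map_t map)
    {
            /* Pass through the mutex so a wakeup cannot slip in between
             * vm_map_unlock() and msleep() above. */
            mtx_lock(&map_sleep_mtx);
            mtx_unlock(&map_sleep_mtx);
            wakeup(map);
    }
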
Alan Cox
e3a9e1b2a8 - Remove vm_object_init2(). It is unused.
- Add a mtx_destroy() to vm_object_collapse().  (This allows a bzero()
   to migrate from _vm_object_allocate() to vm_object_zinit(), where it
   will be performed less often.)
2002-12-29 21:01:14 +00:00
Alan Cox
a28cc55e5b Reduce the number of times that we acquire and release the page queues
lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
responsible for acquiring it.
2002-12-29 07:17:06 +00:00
Alan Cox
2ee5fea7d3 Assert that the page queues lock rather than Giant is held in
vm_page_flag_clear().
2002-12-28 22:49:37 +00:00
Matthew Dillon
40bb4f4bcf vm_pager_put_pages() takes VM_PAGER_* flags, not OBJPC_* flags. It just
so happens that OBJPC_SYNC has the same value as VM_PAGER_PUT_SYNC so no
harm done.  But fix it :-)

No operational changes.

MFC after:	1 day
2002-12-28 21:15:39 +00:00
Matthew Dillon
43b7990e30 Allow the VM object flushing code to cluster. When the filesystem syncer
comes along and flushes a file which has been mmap()'d SHARED/RW, with
dirty pages, it was flushing the underlying VM object asynchronously,
resulting in thousands of 8K writes.  With this change the VM Object flushing
code will cluster dirty pages in 64K blocks.

Note that until the low memory deadlock issue is reviewed, it is not safe
to allow the pageout daemon to use this feature.  Forced pageouts still
use fs block size'd ops for the moment.

MFC after:	3 days
2002-12-28 21:03:42 +00:00
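
The effect, in a hedged sketch (structure simplified; not the committed
code):

    /* p: first dirty page of the run; object: its locked VM object. */
    vm_page_t ma[64 * 1024 / PAGE_SIZE], q;
    int n = 0;

    /* Gather a run of contiguous dirty pages, up to 64K worth. */
    while (n < 64 * 1024 / PAGE_SIZE &&
        (q = vm_page_lookup(object, p->pindex + n)) != NULL &&
        q->dirty != 0)
            ma[n++] = q;
    /* One clustered write instead of n single-page writes. */
    vm_pageout_flush(ma, n, 0);
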
Alan Cox
a623fedef7 Two changes to kmem_malloc():
- Use VM_ALLOC_WIRED.
 - Perform vm_page_wakeup() after pmap_enter(), like we do everywhere else.
2002-12-28 19:03:54 +00:00
Alan Cox
35c016315f - Change vm_object_page_collect_flush() to assert rather than
acquire the page queues lock.
 - Acquire the page queues lock in vm_object_page_clean().
2002-12-27 20:16:13 +00:00
Alan Cox
969da54c3a Increase the scope of the page queues lock in phys_pager_getpages(). 2002-12-27 06:09:56 +00:00
Alan Cox
82ea080d88 - Hold the page queues lock around calls to vm_page_flag_clear(). 2002-12-24 19:02:03 +00:00
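
The pattern applied by this and the many similar page-queue commits in
this range is uniform; sketched once (flag name illustrative):

    vm_page_lock_queues();          /* acquires vm_page_queue_mtx */
    vm_page_flag_clear(m, PG_REFERENCED);
    vm_page_unlock_queues();
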
Alan Cox
dc907f6632 - Hold the page queues lock around vm_page_wakeup(). 2002-12-24 04:24:58 +00:00
Alan Cox
6e14fce9d9 - Hold the kernel_object's lock around vm_page_insert(..., kernel_object,
...).
2002-12-23 20:39:15 +00:00
Alan Cox
7af7dd3c6f Eliminate some dead code. (Any possible use for this code died with
vm/vm_page.c revision 1.220.)

Submitted by:	bde
2002-12-23 04:35:38 +00:00
Matthew Dillon
9991ea7178 The UP -current was not properly counting the per-cpu VM stats in the
sysctl code.  This makes 'systat -vm 1's syscall count work again.

Submitted by:	Michal Mertl <mime@traveller.cz>
Note:		also slated for 5.0
2002-12-22 05:04:30 +00:00
Alan Cox
671e427ce9 Increase the scope of the kmem_object locking in kmem_malloc(). 2002-12-20 18:59:23 +00:00
Alan Cox
4b420d501f Add a mutex to struct vm_object. Initialize and destroy that mutex
at appropriate times.  For the moment, the mutex is only used on
the kmem_object.
2002-12-20 05:10:32 +00:00
Alan Cox
cf3e6e4837 Remove the hash_rand field from struct vm_object. As of revision 1.215 of
vm/vm_page.c, it is unused.
2002-12-19 20:01:22 +00:00
Alan Cox
24c9ad6bed - Remove vm_page_sleep_busy(). The transition to vm_page_sleep_if_busy(),
which incorporates page queue and field locking, is complete.
 - Assert that the page queue lock rather than Giant is held in
   vm_page_flag_set().
2002-12-19 07:23:46 +00:00
Alan Cox
9a96b6382a - Hold the page queues lock when performing vm_page_busy() or
vm_page_flag_set().
 - Replace vm_page_sleep_busy() with proper page queues locking
   and vm_page_sleep_if_busy().
2002-12-19 01:20:24 +00:00
Alan Cox
bd82dc7460 - Hold the page queues lock when performing vm_page_busy().
- Replace vm_page_sleep_busy() with proper page queues locking
   and vm_page_sleep_if_busy().
2002-12-18 04:39:15 +00:00
Alan Cox
b365ea9e30 Hold the page queues lock when performing vm_page_flag_set(). 2002-12-18 04:02:02 +00:00
Alan Cox
d8e7c54e1e Hold the page queues lock when performing vm_page_flag_set(). 2002-12-17 19:55:28 +00:00
Matthew Dillon
fa7dd9c5bc Change the way ELF coredumps are handled. Instead of unconditionally
skipping read-only pages, which can result in valuable non-text-related
data not getting dumped, the ELF loader and the dynamic loader now mark
read-only text pages NOCORE and the coredump code only checks (primarily) for
complete inaccessibility of the page or NOCORE being set.

Certain applications which map large amounts of read-only data will
produce much larger cores.  A new sysctl has been added,
debug.elf_legacy_coredump, which will revert to the old behavior.

This commit represents collaborative work by all parties involved.
The PR contains a program demonstrating the problem.

PR:		kern/45994
Submitted by:	"Peter Edwards" <pmedwards@eircom.net>, Archie Cobbs <archie@dellroad.org>
Reviewed by:	jdp, dillon
MFC after:	7 days
2002-12-16 19:24:43 +00:00
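
The resulting dump decision looks roughly like this (sketch; flag and
variable names as added around this change):

    /* Skip a map entry only if it is wholly unreadable or was marked
     * no-core by the loaders (or by madvise(MADV_NOCORE)). */
    if ((entry->protection & VM_PROT_READ) == 0 ||
        (entry->eflags & MAP_ENTRY_NOCOREDUMP) != 0)
            continue;
    /* debug.elf_legacy_coredump=1 restores the old skip-read-only rule. */
    if (elf_legacy_coredump && (entry->protection & VM_PROT_WRITE) == 0)
            continue;
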
Alan Cox
4b36fe0cbd Perform vm_object_lock() and vm_object_unlock() on kmem_object
around vm_page_lookup() and vm_page_free().
2002-12-15 21:09:09 +00:00
Matthew Dillon
92da00bb24 This is David Schultz's swapoff code which I am finally able to commit.
This should be considered highly experimental for the moment.

Submitted by:	David Schultz <dschultz@uclink.Berkeley.EDU>
MFC after:	3 weeks
2002-12-15 19:17:57 +00:00
Matthew Dillon
389d2b6e21 Fix a refcount race with the vmspace structure. In order to prevent
resource starvation, we clean up as much of the vmspace structure as we
can when the last process using it exits.  The rest of the structure
is cleaned up when it is reaped.  But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.

This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'.  The normal reference count is decremented on exit
and vm_exitingcnt is incremented.  vm_exitingcnt is decremented when the
process is reaped.  When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.

MFC after:	3 weeks
2002-12-15 18:50:04 +00:00
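
The two-counter scheme, sketched (helper names hypothetical; the real
code inlines this logic in the exit and reap paths):

    /* exit1(): trade the normal reference for an 'exiting' reference. */
    vm->vm_exitingcnt++;
    if (--vm->vm_refcnt == 0)
            vmspace_partial_cleanup(vm);    /* teardown, but keep the struct */

    /* reaper: free for real only once both counts have reached zero. */
    if (--vm->vm_exitingcnt == 0 && vm->vm_refcnt == 0)
            vmspace_final_free(vm);         /* hypothetical final-free helper */
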
Alan Cox
2840cabe6a As per the comments, vm_object_page_remove() now expects its caller to lock
the object (i.e., acquire Giant).
2002-12-15 07:30:51 +00:00
Alan Cox
5e83956af5 Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 07:16:51 +00:00
Alan Cox
475e8011ab Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 05:41:56 +00:00
Alan Cox
495bedfbd0 Assert that the page queues lock is held in vm_page_unhold(),
vm_page_remove(), and vm_page_free_toq().
2002-12-15 00:06:02 +00:00
Alan Cox
bc105a6797 Hold the page queues lock when calling pmap_protect(); it updates fields
of the vm_page structure.  Make the style of the pmap_protect() calls
consistent.

Approved by:	re (blanket)
2002-12-01 18:57:56 +00:00
Alan Cox
38857e7f73 Hold the page queues lock when calling pmap_protect(); it updates fields
of the vm_page structure.  Nearby, remove an unnecessary semicolon and
return statement.

Approved by:	re (blanket)
2002-12-01 05:40:18 +00:00
Alan Cox
78f7187d01 Increase the scope of the page queue lock in vm_pageout_scan().
Approved by:	re (blanket)
2002-12-01 00:02:39 +00:00
Alan Cox
e80b7b691e Lock page field accesses in mincore().
Approved by:	re (blanket)
2002-11-28 08:01:39 +00:00
Alan Cox
85e0124324 Hold the page queues lock when performing pmap_clear_modify().
Approved by:	re (blanket)
2002-11-27 19:51:48 +00:00
Alan Cox
3a199de3d9 Hold the page queues lock while performing pmap_page_protect().
Approved by:	re (blanket)
2002-11-27 08:03:24 +00:00
Alan Cox
85e03a7e1e Acquire and release the page queues lock around calls to pmap_protect()
because it updates flags within the vm page.

Approved by:	re (blanket)
2002-11-25 22:00:31 +00:00
Alan Cox
13dc71ed40 Extend the scope of the page queues/fields locking in vm_freeze_copyopts()
to cover pmap_remove_all().

Approved by:	re
2002-11-24 06:13:38 +00:00
Alan Cox
178949e021 Hold the page queues/flags lock when calling vm_page_set_validclean().
Approved by:	re
2002-11-23 19:10:31 +00:00
Alan Cox
ba0208b945 Assert that the page queues lock rather than Giant is held in
vm_pageout_page_free().

Approved by:	re
2002-11-23 08:08:54 +00:00
Alan Cox
e8a27959f6 Add page queue and flag locking in vnode_pager_setsize().
Approved by:	re
2002-11-23 03:58:35 +00:00
Jeff Roberson
855a310fcb - Add an event that is triggered when the system is low on memory. This is
intended to be used by significant memory consumers so that they may drain
   some of their caches.

Inspired by:	phk
Approved by:	re
Tested on:	x86, alpha
2002-11-21 09:17:56 +00:00
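
A consumer hooks the event through the stock eventhandler mechanism; a
minimal sketch (handler and init function names hypothetical):

    #include <sys/eventhandler.h>

    static void
    mycache_drain(void *arg, int flags)
    {
            /* The system is low on memory: release whatever we can spare. */
    }

    static void
    mycache_init(void)
    {
            /* Run from this subsystem's own initialization path. */
            EVENTHANDLER_REGISTER(vm_lowmem, mycache_drain, NULL,
                EVENTHANDLER_PRI_ANY);
    }
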
Jeff Roberson
74c924b553 - Wakeup the correct address when a zone is no longer full.
Spotted by:	jake
2002-11-18 08:27:14 +00:00
Alan Cox
a12cc0e489 Remove vm_page_protect(). Instead, use pmap_page_protect() directly. 2002-11-18 04:05:22 +00:00
Jeff Roberson
f3da1873bc - Don't forget the flags value when using boot pages.
Reported by:	grehan
2002-11-16 20:57:41 +00:00
Alan Cox
4fec79bef8 Now that pmap_remove_all() is exported by our pmap implementations
use it directly.
2002-11-16 07:44:25 +00:00
Alan Cox
81b9ee99e7 Remove dead code that hasn't been needed since the demise of share maps
in various revisions of vm/vm_map.c between 1.148 and 1.153.
2002-11-13 19:50:06 +00:00
Alan Cox
eea85e9bb6 Move pmap_collect() out of the machine-dependent code, rename it
to reflect its new location, and add page queue and flag locking.

Notes: (1) alpha, i386, and ia64 had identical implementations
of pmap_collect() in terms of machine-independent interfaces;
(2) sparc64 doesn't require it; (3) powerpc had it as a TODO.
2002-11-13 05:39:58 +00:00
Olivier Houchard
f64e99baa2 Remove an extra #include <sys/vmmeter.h>. 2002-11-11 13:57:50 +00:00
Matt Jacob
81f71edaec atomic_set_8 isn't MI. Instead, follow Jake's suggestions about
ZONE_LOCK.
2002-11-11 11:50:03 +00:00
Alan Cox
6372d61e3e - Clear the page's PG_WRITEABLE flag in the i386's pmap_changebit()
if we're removing write access from the page's PTEs.
 - Export pmap_remove_all() on alpha, i386, and ia64.  (It's already
   exported on sparc64.)
2002-11-11 05:17:34 +00:00
Matt Jacob
7ca05a39c7 Use atomic_set_8 on the us_freelist maps as they are not otherwise
protected. Furthermore, in some RISC architectures with no normal
byte operations, the surrounding 3 bytes are also affected by the
read-modify-write that has to occur.
2002-11-10 16:16:44 +00:00
Alan Cox
d154fb4fe6 When prot is VM_PROT_NONE, call pmap_page_protect() directly rather than
indirectly through vm_page_protect().  The one remaining page flag that
is updated by vm_page_protect() is already being updated by our various
pmap implementations.

Note: A later commit will similarly change the VM_PROT_READ case and
eliminate vm_page_protect().
2002-11-10 07:12:04 +00:00
Alan Cox
f6116791a2 Fix an error case in vm_map_wire(): unwiring of an entry during cleanup
after a user wire error fails when the entry is already system wired.

Reported by:	tegge
2002-11-09 21:26:49 +00:00
Alan Cox
1f7c5f98d7 In vm_page_remove(), avoid calling vm_page_splay() if the object's memq
is empty.
2002-11-09 08:27:42 +00:00
Thomas Moestl
0fca57b8b8 Move the definitions of the hw.physmem, hw.usermem and hw.availpages
sysctls to MI code; this reduces code duplication and makes all of them
available on sparc64, and the latter two on powerpc.
The semantics of hw.availpages on i386 and pc98 are slightly changed:
previously, holes between ranges of available pages would be included,
while they are excluded now. The new behaviour should be more correct
and brings i386 in line with the other architectures.

Move physmem to vm/vm_init.c, where this variable is used in MI code.
2002-11-07 23:57:17 +00:00
Maxime Henrion
bf1001fa0f Better printf() formats. 2002-11-07 23:16:22 +00:00
Maxime Henrion
e47cd172e0 Some more printf() format fixes. 2002-11-07 23:03:04 +00:00
Maxime Henrion
cd034a5be9 Correctly print vm_offset_t types. 2002-11-07 22:49:07 +00:00
Alan Cox
ada2a050be Export the function vm_page_splay(). 2002-11-04 19:21:39 +00:00
Alan Cox
c71f01affe - Remove the memory allocation for the object/offset hash table
because it's no longer used.  (See revision 1.215.)
 - Fix a harmless bug: the number of vm_page structures allocated wasn't
   properly adjusted when uma_bootstrap() was introduced.  Consequently,
   we were allocating 30 unused vm_page structures.
 - Wrap a long line.
2002-11-03 22:20:42 +00:00
Alan Cox
02af9de6fc Remove the vm page buckets mutex. As of revision 1.215 of vm/vm_page.c,
it is unused.
2002-11-02 22:39:30 +00:00
Jeff Roberson
48eea37508 - Add support for machine-dependent page allocation routines. MD code
may define UMA_MD_SMALL_ALLOC to make use of this feature.

Reviewed by:	peter, jake
2002-11-01 01:01:27 +00:00
Jeff Roberson
026aa839a4 - Add a new flag to vm_page_alloc, VM_ALLOC_NOOBJ. This tells
vm_page_alloc not to insert this page into an object.  The pindex is
   still used for colorization.
 - Rework vm_page_select_* to accept a color instead of an object and
   pindex to work with VM_PAGE_NOOBJ.
 - Document other VM_ALLOC_ flags.

Reviewed by:	peter, jake
2002-11-01 00:59:03 +00:00
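
Typical use, sketched: an object-less wired page, with the pindex
argument now only seeding the color calculation (color is the caller's
page-color hint):

    vm_page_t m;

    m = vm_page_alloc(NULL, color, VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);
    if (m == NULL)
            return (NULL);
    /* m->object stays NULL; the caller tracks the page itself. */
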
Robert Watson
03ce2c0c9b Merge from MAC tree: rename mac_check_vnode_swapon() to
mac_check_system_swapon(), to reflect the fact that the primary
object of this change is the running kernel as a whole, rather
than just the vnode.  We'll drop additional checks of this
class into the same check namespace, including reboot(),
sysctl(), et al.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2002-10-27 06:54:06 +00:00
Jeff Roberson
bbee39c629 - Now that uma_zalloc_internal is not the fast path, don't be so fussy about
extra function calls.  Refactor uma_zalloc_internal into separate functions
   for finding the most appropriate slab, filling buckets, allocating single
   items, and pulling items off of slabs.  This makes the code significantly
   cleaner.
 - This also fixes the "Returning an empty bucket." panic that a few people
   have seen.

Tested On:	alpha, x86
2002-10-24 07:59:03 +00:00
Jeff Roberson
bba739abf9 - Move the destructor calls so that they are not called with the zone lock
held.  This avoids a lock order reversal when destroying zones.
   Unfortunately, this also means that the free checks are not done before
   the destructor is called.

Reported by:	phk
2002-10-24 06:17:30 +00:00
Robert Watson
3e732e7d7d Invoke mac_check_vnode_mmap() during mmap operations on vnodes,
permitting policies to restrict access to memory mapping based on
the credential requesting the mapping, the target vnode, the
requested rights, or other policy considerations.

Approved by:	re
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2002-10-22 15:56:44 +00:00
Robert Watson
1cbfd977fd Introduce MAC_CHECK_VNODE_SWAPON, which permits MAC policies to
perform authorization checks during swapon() events; policies
might choose to enforce protections based on the credential
requesting the swap configuration, the target of the swap operation,
or other factors such as internal policy state.

Approved by:	re
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2002-10-22 15:53:43 +00:00
John Baldwin
1c865ac70e - Check that a process isn't a new process (p_state == PRS_NEW) before
trying to acquire its proc lock since the proc lock may not have been
  constructed yet.
- Split up the one big comment at the top of the loop and put the pieces
  in the right order above the various checks.

Reported by:	kris (1)
2002-10-22 14:31:32 +00:00
Sheldon Hearn
29b4d52653 Fix typo in comments (misspelled "necessary"). 2002-10-22 12:10:27 +00:00
Alan Cox
f3b676f0ad o Reinline vm_page_undirty(), reducing the kernel size. (This reverts
a part of vm_page.h revision 1.87 and vm_page.c revision 1.167.)
2002-10-20 19:57:55 +00:00
Alan Cox
f4ecdf056e Complete the page queues locking needed for the page-based copy-
on-write (COW) mechanism.  (This mechanism is used by the zero-copy
TCP/IP implementation.)
 - Extend the scope of the page queues lock in vm_fault()
   to cover vm_page_cowfault().
 - Modify vm_page_cowfault() to release the page queues lock
   if it sleeps.
2002-10-19 18:34:39 +00:00
Matthew Dillon
b86ec922be Replace the vm_page hash table with a per-vmobject splay tree. There should
be no major change in performance from this change at this time but this
will allow other work to progress:  Giant lock removal around VM system
in favor of per-object mutexes, ranged fsyncs, more optimal COMMIT rpc's for
NFS, partial filesystem syncs by the syncer, more optimal object flushing,
etc.  Note that the buffer cache is already using a similar splay tree
mechanism.

Note that a good chunk of the old hash table code is still in the tree.
Alan or I will remove it prior to the release if the new code does not
introduce unsolvable bugs, else we can revert more easily.

Submitted by:	alc	(this is Alan's code)
Approved by:	re
2002-10-18 17:24:30 +00:00
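
Lookup against the per-object splay tree, closely following the new
vm_page.c (sketch):

    vm_page_t
    vm_page_lookup(vm_object_t object, vm_pindex_t pindex)
    {
            vm_page_t m;

            /* Splay the sought pindex to the root; a hit is then O(1). */
            if ((m = object->root) != NULL && m->pindex != pindex) {
                    m = vm_page_splay(pindex, m);
                    if ((object->root = m)->pindex != pindex)
                            m = NULL;
            }
            return (m);
    }
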
Poul-Henning Kamp
af045176d1 Properly put macro args in ().
Spotted by:	FlexeLint.
2002-10-16 10:52:15 +00:00
Julian Elischer
d524d69b16 Remove old useless debugging code 2002-10-14 20:31:54 +00:00
Jeff Roberson
b43179fbe8 - Create a new scheduler api that is defined in sys/sched.h
- Begin moving scheduler specific functionality into sched_4bsd.c
 - Replace direct manipulation of scheduler data with hooks provided by the
   new api.
 - Remove KSE specific state modifications and single runq assumptions from
   kern_switch.c

Reviewed by:	-arch
2002-10-12 05:32:24 +00:00
John Baldwin
551cf4e150 Rename the mutex thread and process states to use a more generic 'LOCK'
name instead.  (e.g., SLOCK instead of SMTX, TD_ON_LOCK() instead of
TD_ON_MUTEX())  Eventually a turnstile abstraction will be added that
will be shared with mutexes and other types of locks.  SLOCK/TDI_LOCK will
be used internally by the turnstile code and will not be specific to
mutexes.  Making the change now ensures that turnstiles can be dropped
in at a later date without affecting the ABI of userland applications.
2002-10-02 20:31:47 +00:00
Scott Long
316ec49abd Some kernel threads try to do significant work, and the default KSTACK_PAGES
doesn't give them enough stack to do much before blowing away the pcb.
This adds MI and MD code to allow the allocation of an alternate kstack
whose size can be specified when calling kthread_create.  Passing the
value 0 prevents the alternate kstack from being created.  Note that the
ia64 MD code is missing for now, and PowerPC was only partially written
because its pmap.c is incomplete.
Though this patch does not modify anything to make use of the alternate
kstack, acpi and usb are good candidates.

Reviewed by:	jake, peter, jhb
2002-10-02 07:44:29 +00:00
Poul-Henning Kamp
37c841831f Be consistent about "static" functions: if the function is marked
static in its prototype, mark it static at the definition too.

Inspired by:    FlexeLint warning #512
2002-09-28 17:15:38 +00:00
Jeff Roberson
3ef3e7c42b - Get rid of the unused LK_NOOBJ. 2002-09-25 01:24:58 +00:00
Jeff Roberson
6a2eac8acc - Lock access to numoutput on the swap devices. 2002-09-25 01:24:17 +00:00
Jeff Roberson
63e7e60dba - Add an ASSERT_VOP_LOCKED in vnode_pager_alloc.
- Lock access to v_iflags.
2002-09-25 01:23:43 +00:00
Matthew N. Dodd
4a2eca23ca Modify vm_map_clean() (and thus the msync(2) system call) to support
invalidation of cached pages for objects of type OBJT_DEVICE.

Submitted by:	Christian Zander <zander@minion.de>
Approved by:	alc
2002-09-22 08:22:32 +00:00
Alan Cox
e94ce82689 o Update some comments. 2002-09-22 04:33:43 +00:00
Jake Burkholder
05ba50f522 Use the fields in the sysentvec and in the vm map header in place of the
constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and PS_STRINGS.
This is mainly so that they can be variable even for the native abi, based
on different machine types.  Get stack protections from the sysentvec too.
This makes it trivial to map the stack non-executable for certain abis, on
machines that support it.
2002-09-21 22:07:17 +00:00
Alan Cox
8aadcc5368 Reduce namespace pollution.
Submitted by:	bde
2002-09-21 07:51:44 +00:00
Jeff Roberson
f461cf2297 - Use my freebsd email alias in the copyright.
- Remove redundant instances of my email alias in the file summary.
2002-09-19 06:05:32 +00:00
Jeff Roberson
99571dc345 - Split UMA_ZFLAG_OFFPAGE into UMA_ZFLAG_OFFPAGE and UMA_ZFLAG_HASH.
- Remove all instances of the mallochash.
 - Stash the slab pointer in the vm page's object pointer when allocating from
   the kmem_obj.
 - Use the overloaded object pointer to find slabs for malloced memory.
2002-09-18 08:26:30 +00:00
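
Roughly how the overloading works (sketch; helper names illustrative;
pages backing malloc'ed memory have no real VM object, so the field is
free to reuse):

    static __inline void
    vsetslab(vm_offset_t va, void *slab)
    {
            vm_page_t p;

            p = PHYS_TO_VM_PAGE(pmap_kextract(va));
            p->object = (vm_object_t)slab;  /* stash the slab pointer */
    }

    static __inline void *
    vtoslab(vm_offset_t va)
    {
            vm_page_t p;

            p = PHYS_TO_VM_PAGE(pmap_kextract(va));
            return ((void *)p->object);     /* recover it on free() */
    }
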
Nate Lawson
06be2aaa83 Remove all use of vnode->v_tag, replacing with appropriate substitutes.
v_tag is now const char * and should only be used for debugging.

Additionally:
1. All users of VT_NTS now check vfsconf->vf_type VFCF_NETWORK
2. The user of VT_PROCFS now checks for the new flag VV_PROCDEP, which
is propagated by pseudofs to all child vnodes if the fs sets PFS_PROCDEP.

Suggested by:   phk
Reviewed by:    bde, rwatson (earlier version)
2002-09-14 09:02:28 +00:00
Julian Elischer
71fad9fdee Completely redo thread states.
Reviewed by:	davidxu@freebsd.org
2002-09-11 08:13:56 +00:00
Seigo Tanimura
b1f99ebe2b - Do not swap out a process while it is being created. The process may have no
address space yet.

- Check whether a process is a system process prior to dereferencing
  its p_vmspace.  Aio assumes that only the curthread switches the address
  space of a system process.
2002-09-09 09:05:06 +00:00
Julian Elischer
1faf202ea9 Use UMA as a complex object allocator.
The process allocator now caches and hands out complete process structures
*including substructures*.

I.e., it gets the process structure with the first thread (and soon KSE)
already allocated and attached, all in one hit.

For the average non-threaded program (non-KSE, that is), the allocated
thread and its stack remain attached to the process, even when the
process is unused and in the process cache.  This saves having to
allocate and attach it
later, effectively bringing us (hopefully) close to the efficiency
of pre-KSE systems where these were a single structure.

Reviewed by:	davidxu@freebsd.org, peter@freebsd.org
2002-09-06 07:00:37 +00:00
Bruce Evans
6af7f1e511 Use `struct uma_zone *' instead of uma_zone_t, so that <sys/uma.h> isn't
a prerequisite.
2002-09-05 14:04:34 +00:00
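
The point: a pointer to an incomplete struct type needs no header,
while the typedef does. A minimal illustration (hypothetical consumer):

    /* some_subsystem.h -- no #include <sys/uma.h> required: */
    struct uma_zone;                        /* forward declaration */
    extern struct uma_zone *some_zone;      /* pointer to incomplete type: OK */
    /* 'extern uma_zone_t some_zone;' would need the typedef from uma.h. */
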
David Xu
1279572a92 s/SGNL/SIG/
s/SNGL/SINGLE/
s/SNGLE/SINGLE/

Fix abbreviations for the P_STOPPED_* etc. flags; in the original code they
were inconsistent and difficult to distinguish.

Approved by: julian (mentor)
2002-09-05 07:30:18 +00:00
Alan Cox
8a59b15cd4 o Synchronize updates to struct vm_page::cow with the page queues lock. 2002-09-02 04:04:12 +00:00
Matthew Dillon
ec61f55d42 Reduce the maximum KVA reserved for swap meta structures from 70 to 32 MB.
Reduce the swap meta calculation by a factor of 2; it's still massive overkill.

X-MFC after: immediately
2002-08-31 21:15:29 +00:00
Peter Wemm
447b3772dc Change hw.physmem and hw.usermem to unsigned long like they used to be
in the original hardwired sysctl implementation.

The buf size calculator still overflows an integer on machines with large
KVA (eg: ia64) where the number of pages does not fit into an int.  Use
'long' there.

Change Maxmem and physmem and related variables to 'long', mostly for
completeness.  Machines are not likely to overflow 'int' pages in the
near term, but then again, 640K ought to be enough for anybody.  This
comes for free on 32 bit machines, so why not?
2002-08-30 04:04:37 +00:00
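
The overflow being worked around is the classic widening mistake;
schematically:

    int npages = 4 * 1024 * 1024;           /* ~4M pages: plausible on ia64 */
    int bad = npages * PAGE_SIZE;           /* overflows int with 8K pages:
                                               2^22 * 2^13 = 2^35 */
    long ok = (long)npages * PAGE_SIZE;     /* widen before multiplying */
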
Alan Cox
6508a194aa o Retire pmap_pageable(). It's an advisory routine that none
of our platforms implements.
2002-08-25 04:20:05 +00:00
Alan Cox
fff6062ab6 o Retire vm_page_zero_fill() and vm_page_zero_fill_area(). Ever since
pmap_zero_page() and pmap_zero_page_area() were modified to accept
   a struct vm_page * instead of a physical address, vm_page_zero_fill()
   and vm_page_zero_fill_area() have served no purpose.
2002-08-25 00:22:31 +00:00
Alan Cox
15c176c119 o Use vm_object_lock() in place of directly locking Giant.
Reviewed by:	md5
2002-08-24 18:44:52 +00:00
Alan Cox
4eaa117956 o Use vm_object_lock() in place of Giant when manipulating a vm object
in vm_map_insert().
2002-08-24 17:52:08 +00:00
Alan Cox
d52bc3438c o Resurrect vm_object_lock() and vm_object_unlock() from revision 1.19.
(For now, they simply acquire and release Giant.)
2002-08-24 07:15:14 +00:00
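
For the moment the resurrected functions are just Giant wrappers, i.e.
roughly:

    void
    vm_object_lock(vm_object_t object)
    {
            mtx_lock(&Giant);       /* placeholder until per-object mutexes */
    }

    void
    vm_object_unlock(vm_object_t object)
    {
            mtx_unlock(&Giant);
    }
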
Archie Cobbs
55f7c614fd Don't use "NULL" when "0" is really meant. 2002-08-21 23:39:52 +00:00
Alan Cox
60582cbe6d o Assert that the page queues lock is held in vm_page_activate(). 2002-08-11 00:21:40 +00:00
Alan Cox
99cb3c4c0f o Lock page queue accesses by vm_page_activate(). 2002-08-11 00:14:10 +00:00
Alan Cox
67ef391e00 o Lock page queue accesses by vm_page_activate(). 2002-08-10 23:53:59 +00:00
Alan Cox
a9911f9a0f o Move a call to vm_page_wakeup() inside the scope of the page queues lock. 2002-08-10 23:27:06 +00:00
Alan Cox
38f612e053 o Remove the setting and clearing of the PG_MAPPED flag from the alpha and
ia64 pmap.
 o Remove the PG_MAPPED flag's declaration.
2002-08-10 18:01:39 +00:00
Alan Cox
db44450b11 o Remove the setting and clearing of the PG_MAPPED flag. (This flag is
obsolete.)
2002-08-10 07:11:16 +00:00
Alan Cox
06ec58b740 o Use pmap_page_is_mapped() in vm_page_protect() rather than the PG_MAPPED
flag.  (This is the only place in the entire kernel where the PG_MAPPED
   flag is tested.  It will be removed soon.)
2002-08-08 19:12:36 +00:00
Alan Cox
24c28f1ad6 o Acquire the page queues lock before checking the page's busy status
in vm_page_grab().  Also, replace the nearby tsleep() with an msleep()
   on the page queues lock.
2002-08-04 19:05:20 +00:00
Jeff Roberson
e6e370a7fe - Replace v_flag with v_iflag and v_vflag
- v_vflag is protected by the vnode lock and is used when synchronization
   with VOP calls is needed.
 - v_iflag is protected by interlock and is used for dealing with vnode
   management issues.  These flags include X/O LOCK, FREE, DOOMED, etc.
 - All accesses to v_iflag and v_vflag have either been locked or marked with
   mp_fixme's.
 - Many ASSERT_VOP_LOCKED calls have been added where the locking was not
   clear.
 - Many functions in vfs_subr.c were restructured to provide for stronger
   locking.

Idea stolen from:	BSD/OS
2002-08-04 10:29:36 +00:00
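
The resulting locking discipline, sketched:

    /* v_iflag: covered by the vnode interlock. */
    VI_LOCK(vp);
    vp->v_iflag |= VI_DOOMED;
    VI_UNLOCK(vp);

    /* v_vflag: covered by the vnode lock itself. */
    vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, td);
    vp->v_vflag |= VV_PROCDEP;
    VOP_UNLOCK(vp, 0, td);
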
Alan Cox
7f0bf36a2e o Extend the scope of the page queues lock in contigmalloc1().
o Replace vm_page_sleep_busy() with vm_page_sleep_if_busy()
   in vm_contig_launder().
2002-08-04 07:07:34 +00:00
Alan Cox
aa9b1d9412 o Remove the setting of PG_MAPPED from vm_page_wire() and
vm_page_alloc(VM_ALLOC_WIRED).
2002-08-03 01:29:52 +00:00
Alan Cox
00f9e8b421 o Convert two instances of vm_page_sleep_busy() into vm_page_sleep_if_busy()
with appropriate page queue locking.
2002-08-02 18:55:29 +00:00
Alan Cox
1e7ce68ff4 o Lock page queue accesses in nwfs and smbfs.
o Assert that the page queues lock is held in vm_page_deactivate().
2002-08-02 05:23:58 +00:00
Alan Cox
91bb74a88c o Lock page queue accesses by vm_page_deactivate(). 2002-08-02 03:56:31 +00:00
Alan Cox
46086ddf91 o Acquire the page queues lock before calling vm_page_io_finish().
o Assert that the page queues lock is held in vm_page_io_finish().
2002-08-01 17:57:42 +00:00
Alan Cox
239b5b9707 o Setting PG_MAPPED and PG_WRITEABLE on pages that are mapped and unmapped
by pmap_qenter() and pmap_qremove() is pointless.  In fact, it probably
   leads to unnecessary pmap_page_protect() calls if one of these pages is
   paged out after unwiring.

Note: setting PG_MAPPED asserts that the page's pv list may be
non-empty.  Since checking the status of the page's pv list isn't any
harder than checking this flag, the flag should probably be eliminated.
Alternatively, PG_MAPPED could be set by pmap_enter() exclusively
rather than various places throughout the kernel.
2002-07-31 18:46:47 +00:00
Alan Cox
67c1fae92e o Lock page accesses by vm_page_io_start() with the page queues lock.
o Assert that the page queues lock is held in vm_page_io_start().
2002-07-31 07:27:08 +00:00
Alan Cox
32585dd617 o In vm_object_madvise() and vm_object_page_remove() replace
vm_page_sleep_busy() with vm_page_sleep_if_busy().  At the same time,
   increase the scope of the page queues lock.  (This should significantly
   reduce the locking overhead in vm_object_page_remove().)
 o Apply some style fixes.
2002-07-30 07:23:04 +00:00
Seigo Tanimura
9eb881f804 - Optimize wakeup() and its friends; if a thread woken up is being
swapped in, we do not have to ask for the scheduler thread to do
  that.

- Assert that a process is not swapped out in runq functions and
  swapout().

- Introduce thread_safetoswapout() for readability.

- In swapout_procs(), perform a test that may block (check of a
  thread working on its vm map) first.  This lets us call swapout()
  with the sched_lock held, providing better atomicity.
2002-07-30 06:54:05 +00:00
Alan Cox
e5f8bd9418 o Introduce vm_page_sleep_if_busy() as an eventual replacement for
vm_page_sleep_busy().  vm_page_sleep_if_busy() uses the page
   queues lock.
2002-07-29 19:41:22 +00:00
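
Caller-side pattern, sketched (the function msleep()s on, and drops,
the page queues lock when it sleeps; wait message illustrative):

    retry:
            vm_page_lock_queues();
            if (vm_page_sleep_if_busy(m, TRUE, "pgwait"))
                    goto retry;     /* slept; the queues lock was dropped */
            vm_page_busy(m);
            vm_page_unlock_queues();
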
Julian Elischer
b7f2cf173e Remove a XXXKSE comment. the code is no longer a problem.. 2002-07-29 18:47:19 +00:00
Julian Elischer
1d7b9ed2e6 Create a new thread state to describe threads that would be ready to run
except for the fact that they are presently swapped out. Also add a process
flag to indicate that the process has started the struggle to swap
back in. This will be needed for the case where multiple threads
start the swapin action and collide. Also add code to stop
a process from being swapped out if one of the threads in this
process is actually off running on another CPU... that might hurt.

Submitted by:	Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
2002-07-29 18:33:32 +00:00
Alan Cox
14f8ceaa07 o Pass VM_ALLOC_WIRED to vm_page_grab() rather than calling vm_page_wire()
in pmap_new_thread(), pmap_pinit(), and vm_proc_new().
 o Lock page queue accesses by vm_page_free() in pmap_object_init_pt().
2002-07-29 05:42:44 +00:00
Alan Cox
2c071f61f0 o Modify vm_page_grab() to accept VM_ALLOC_WIRED. 2002-07-28 23:46:19 +00:00
Alan Cox
e43c2eab07 o Lock page queue accesses by vm_page_free().
o Apply some style fixes.
2002-07-28 20:13:48 +00:00
Alan Cox
6a684ecf05 o Lock page queue accesses by vm_page_free(). 2002-07-28 19:01:38 +00:00
Alan Cox
299018d3b6 o Lock page queue accesses by vm_page_free().
o Increment cnt.v_dfree inside vm_pageout_page_free() rather than
   at each call.
2002-07-28 05:46:47 +00:00
Alan Cox
57123de641 o Lock page queue accesses by vm_page_free(). 2002-07-28 04:23:03 +00:00
Alan Cox
55df3298c6 o Require that the page queues lock is held on entry to vm_pageout_clean()
and vm_pageout_flush().
 o Acquire the page queues lock before calling vm_pageout_clean()
   or vm_pageout_flush().
2002-07-27 23:20:32 +00:00
Alan Cox
4abd55b296 o Lock page queue accesses by vm_page_activate(). 2002-07-27 07:20:27 +00:00
Alan Cox
ce18aebde4 o Lock page queue accesses by vm_page_activate() and vm_page_deactivate()
in vm_pageout_object_deactivate_pages().
 o Apply some style fixes to vm_pageout_object_deactivate_pages().
2002-07-27 06:41:03 +00:00
Alan Cox
9d52288860 o Lock page queue accesses by vm_page_activate() and vm_page_deactivate(). 2002-07-27 04:30:46 +00:00
Alan Cox
f4f5cb1ffb o Remove a vm_page_deactivate() that is immediately followed by a
vm_page_rename() from vm_object_backing_scan().  vm_page_rename()
   also performs vm_page_deactivate() on pages in the cache queues,
   making the removed vm_page_deactivate() redundant.
2002-07-25 19:09:07 +00:00
Alan Cox
ef594d3186 o Merge vm_fault_wire() and vm_fault_user_wire() by adding a new parameter,
user_wire.
2002-07-24 19:47:56 +00:00
Alan Cox
2999e9faca o Lock page queue accesses by vm_page_dontneed().
o Assert that the page queue lock is held in vm_page_dontneed().
2002-07-23 04:39:48 +00:00