Commit Graph

1315 Commits

Author SHA1 Message Date
alc
b53d53a590 o Lock page accesses by vm_page_io_start() with the page queues lock.
o Assert that the page queues lock is held in vm_page_io_start().
2002-07-31 07:27:08 +00:00
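A minimal sketch of the pattern this pair of changes establishes (FreeBSD 5.x-era names; the page pointer "m" is assumed to be held by the caller):

    vm_page_lock_queues();      /* callers now take the page queues lock */
    vm_page_io_start(m);        /* which now asserts that lock is held */
    vm_page_unlock_queues();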
alc
23be708506 o In vm_object_madvise() and vm_object_page_remove() replace
vm_page_sleep_busy() with vm_page_sleep_if_busy().  At the same time,
   increase the scope of the page queues lock.  (This should significantly
   reduce the locking overhead in vm_object_page_remove().)
 o Apply some style fixes.
2002-07-30 07:23:04 +00:00
tanimura
8eb7238cad - Optimize wakeup() and its friends; if a thread woken up is being
swapped in, we do not have to ask the scheduler thread to do
  that.

- Assert that a process is not swapped out in runq functions and
  swapout().

- Introduce thread_safetoswapout() for readability.

- In swapout_procs(), perform a test that may block (check of a
  thread working on its vm map) first.  This lets us call swapout()
  with the sched_lock held, providing better atomicity.
2002-07-30 06:54:05 +00:00
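A plausible shape for the thread_safetoswapout() helper named above; the state actually tested here is an assumption for illustration, not the committed definition:

    /*
     * Hypothetical sketch: a thread is safe to swap out only while it
     * is sleeping, i.e. neither on a run queue nor running on a CPU.
     */
    #define thread_safetoswapout(td)    ((td)->td_state == TDS_SLP)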
alc
478c0ba990 o Introduce vm_page_sleep_if_busy() as an eventual replacement for
vm_page_sleep_busy().  vm_page_sleep_if_busy() uses the page
   queues lock.
2002-07-29 19:41:22 +00:00
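A sketch of typical use of the new helper (the "vmopar" wait message and retry label are illustrative): vm_page_sleep_if_busy() returns TRUE if it had to sleep on a busy page, so callers loop and rescan.

    again:
        m = vm_page_lookup(object, pindex);     /* refind the page */
        if (m != NULL && vm_page_sleep_if_busy(m, TRUE, "vmopar"))
            goto again;         /* slept; the page state may have changed */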
julian
deb5624210 Remove an XXXKSE comment; the code is no longer a problem. 2002-07-29 18:47:19 +00:00
julian
6216c4b163 Create a new thread state to describe threads that would be ready to run
except for the fact that they are presently swapped out. Also add a process
flag to indicate that the process has started the struggle to swap
back in. This will be needed for the case where multiple threads
start the swapin action, to avoid a collision. Also add code to stop
a process from being swapped out if one of the threads in this
process is actually off running on another CPU.. that might hurt...

Submitted by:	Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
2002-07-29 18:33:32 +00:00
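A sketch of the other-CPU guard described above (field and state names assumed from the KSE-era proc/thread structures):

    struct thread *td;

    FOREACH_THREAD_IN_PROC(p, td) {
        if (td->td_state == TDS_RUNNING)        /* live on another CPU */
            return;                             /* don't swap this process out */
    }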
alc
ca32cecfcf o Pass VM_ALLOC_WIRED to vm_page_grab() rather than calling vm_page_wire()
in pmap_new_thread(), pmap_pinit(), and vm_proc_new().
 o Lock page queue accesses by vm_page_free() in pmap_object_init_pt().
2002-07-29 05:42:44 +00:00
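The before/after shape of this change, as a sketch (the object/pindex pair and the VM_ALLOC_RETRY usage are assumptions):

    /* Before: grab the page, then wire it under the page queues lock. */
    m = vm_page_grab(object, pindex, VM_ALLOC_NORMAL | VM_ALLOC_RETRY);
    vm_page_lock_queues();
    vm_page_wire(m);
    vm_page_unlock_queues();

    /* After: have vm_page_grab() hand back an already-wired page. */
    m = vm_page_grab(object, pindex,
        VM_ALLOC_NORMAL | VM_ALLOC_RETRY | VM_ALLOC_WIRED);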
alc
d0cb048dcb o Modify vm_page_grab() to accept VM_ALLOC_WIRED. 2002-07-28 23:46:19 +00:00
alc
0cc69019ed o Lock page queue accesses by vm_page_free().
o Apply some style fixes.
2002-07-28 20:13:48 +00:00
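The recurring pattern in this series of commits, sketched once: every call to vm_page_free() becomes bracketed by the page queues lock.

    vm_page_lock_queues();
    vm_page_free(m);
    vm_page_unlock_queues();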
alc
d1b2ba5197 o Lock page queue accesses by vm_page_free(). 2002-07-28 19:01:38 +00:00
alc
e3d769e493 o Lock page queue accesses by vm_page_free().
o Increment cnt.v_dfree inside vm_pageout_page_free() rather than
   at each call.
2002-07-28 05:46:47 +00:00
alc
a5823502e9 o Lock page queue accesses by vm_page_free(). 2002-07-28 04:23:03 +00:00
alc
d048dca979 o Require that the page queues lock is held on entry to vm_pageout_clean()
and vm_pageout_flush().
 o Acquire the page queues lock before calling vm_pageout_clean()
   or vm_pageout_flush().
2002-07-27 23:20:32 +00:00
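A sketch of the new caller contract (exact assertion form assumed): the lock is taken before entry, and the pageout routines assert it.

    vm_page_lock_queues();
    laundered = vm_pageout_clean(m);    /* asserts the page queues lock */
    vm_page_unlock_queues();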
alc
d40c5cf505 o Lock page queue accesses by vm_page_activate(). 2002-07-27 07:20:27 +00:00
alc
c9b1c04c48 o Lock page queue accesses by vm_page_activate() and vm_page_deactivate()
in vm_pageout_object_deactivate_pages().
 o Apply some style fixes to vm_pageout_object_deactivate_pages().
2002-07-27 06:41:03 +00:00
alc
abf1927809 o Lock page queue accesses by vm_page_activate() and vm_page_deactivate(). 2002-07-27 04:30:46 +00:00
alc
08c7935a56 o Remove a vm_page_deactivate() that is immediately followed by a
vm_page_rename() from vm_object_backing_scan().  vm_page_rename()
   also performs vm_page_deactivate() on pages in the cache queues,
   making the removed vm_page_deactivate() redundant.
2002-07-25 19:09:07 +00:00
alc
17db4f92a1 o Merge vm_fault_wire() and vm_fault_user_wire() by adding a new parameter,
user_wire.
2002-07-24 19:47:56 +00:00
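A sketch of the merged interface (parameter order assumed); the old vm_fault_user_wire() behavior is selected by the new flag:

    int vm_fault_wire(vm_map_t map, vm_offset_t start, vm_offset_t end,
        boolean_t user_wire);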
alc
b5578712f7 o Lock page queue accesses by vm_page_dontneed().
o Assert that the page queue lock is held in vm_page_dontneed().
2002-07-23 04:39:48 +00:00
alc
cba25d9833 o Extend the scope of the page queues lock in vm_pageout_scan()
to cover the traversal of the cache queue.
2002-07-23 02:42:25 +00:00
alfred
c683b04586 Change struct vmspace->vm_shm from void * to struct shmmap_state *, this
removes the need for casts in several cases.
2002-07-22 16:22:27 +00:00
alfred
2d7017c840 Remove caddr_t. 2002-07-22 16:12:55 +00:00
alc
28f5e60e77 o Lock page queue accesses by vm_page_free() and vm_page_deactivate(). 2002-07-21 21:20:57 +00:00
alc
b9718e8d79 o Lock page queue accesses by vm_page_free(). 2002-07-21 20:38:45 +00:00
tanimura
6c7dddef24 Do not pass a thread with the state TDS_RUNQ to setrunqueue(); otherwise
the assertion in setrunqueue() fails.
2002-07-21 10:55:57 +00:00
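The guard this implies at call sites, sketched (state name per the KSE-era thread states):

    if (td->td_state != TDS_RUNQ)       /* already queued? don't re-queue */
        setrunqueue(td);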
alc
d9f77c5b73 o Lock page queue accesses by vm_page_try_to_cache(). (The accesses
in kern/vfs_bio.c are already locked.)
 o Assert that the page queues lock is held in vm_page_try_to_cache().
2002-07-20 20:58:46 +00:00
alc
4de6bb3a4d o Assert that the page queues lock is held in vm_page_try_to_free(). 2002-07-20 20:12:57 +00:00
alc
a01b1feba6 o Lock page queue accesses by vm_page_cache() in vm_fault() and
vm_pageout_scan().  (The others are already locked.)
 o Assert that the page queues lock is held in vm_page_cache().
2002-07-20 19:34:21 +00:00
alc
0ee9c9a99d o Lock accesses to the active page queue in vm_pageout_scan() and
vm_pageout_page_stats().
2002-07-20 18:45:25 +00:00
alc
478ee061bb o Lock page queue accesses by vm_page_cache() in vm_contig_launder().
o Micro-optimize the control flow in vm_contig_launder().
2002-07-20 06:11:16 +00:00
alc
9b15115842 o Remove dead and/or unused code. 2002-07-20 05:06:20 +00:00
peter
cc7b2e4248 Infrastructure tweaks to allow having both an Elf32 and an Elf64 executable
handler in the kernel at the same time.  Also, allow for the
exec_new_vmspace() code to build a different sized vmspace depending on
the executable environment.  This is a big help for execing i386 binaries
on ia64.   The ELF exec code grows the ability to map partial pages when
there is a page size difference, eg: emulating 4K pages on 8K or 16K
hardware pages.

Flesh out the i386 emulation support for ia64.  At this point, the only
binary that I know of that fails is cvsup, because the cvsup runtime
tries to execute code in pages not marked executable.

Obtained from:  dfr (mostly, many tweaks from me).
2002-07-20 02:56:12 +00:00
peter
febe98ab98 Set P_NOLOAD on the pagezero kthread so that it doesn't artificially skew
the loadav.  This is not real load.  If you have a nice process running in
the background, pagezero may sit in the run queue for ages, add one to
the loadav, and thereby affect other scheduling decisions.
2002-07-19 21:06:01 +00:00
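A sketch of the flagging (locking discipline assumed): mark the kthread so the load-average sampler skips it.

    PROC_LOCK(p);
    p->p_flag |= P_NOLOAD;      /* idle-priority zeroing is not real load */
    PROC_UNLOCK(p);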
alc
083a6fe2b0 o Duplicate an odd side-effect of vm_page_wire() in vm_page_allocate()
when VM_ALLOC_WIRED is specified: set the PG_MAPPED bit in flags.
 o In both vm_page_wire() and vm_page_allocate() add a comment saying
   that setting PG_MAPPED does not belong there.
2002-07-19 03:33:04 +00:00
alc
1ba951badc o Remove the acquisition and release of Giant from the idle priority thread
that pre-zeroes free pages.
 o Remove GIANT_REQUIRED from some low-level page queue functions.  (Instead
   assertions on the page queue lock are being added to the higher-level
   functions, like vm_page_wire(), etc.)

In collaboration with:	peter
2002-07-18 17:40:07 +00:00
markm
8f4d7f20c8 Void functions cannot return values. 2002-07-18 15:53:11 +00:00
peter
5d00cd5ad7 (VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE may not fit in an integer.
Use lmin(long, long), not min(u_int, u_int).  This is a problem here on
ia64 which has *way* more than 2^32 pages of KVA.  281474976710655 pages
to be precise.
2002-07-18 10:28:00 +00:00
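The arithmetic: 281474976710655 is 2^48 - 1, and min() takes u_int arguments, so the page count would be silently truncated to 32 bits before the comparison; lmin() compares longs, which are 64-bit on ia64. A sketch (the second operand is a stand-in):

    long npages = lmin((VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE,
        other_limit);           /* "other_limit" is hypothetical */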
alc
bf14f2641b o Introduce an argument, VM_ALLOC_WIRED, that requests vm_page_alloc()
to return a wired page.
 o Use VM_ALLOC_WIRED within Alpha's pmap_growkernel().  Also, because
   Alpha's pmap_growkernel() calls vm_page_alloc() from within a critical
   section, specify VM_ALLOC_INTERRUPT instead of VM_ALLOC_SYSTEM.  (Only
   VM_ALLOC_INTERRUPT is implemented entirely with a spin mutex.)
 o Assert that the page queues mutex is held in vm_page_wire()
   on Alpha, just like the other platforms.
2002-07-18 04:08:10 +00:00
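A sketch of the Alpha pmap_growkernel() allocation after this change (object and index are placeholders): VM_ALLOC_INTERRUPT is the only request class safe inside a critical section, since it alone runs entirely under a spin mutex.

    m = vm_page_alloc(kernel_object, pindex,
        VM_ALLOC_INTERRUPT | VM_ALLOC_WIRED);
    if (m == NULL)
        panic("pmap_growkernel: out of pages");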
alc
fe71e4fa20 o Use vm_pageq_remove_nowakeup() and vm_pageq_enqueue() in
vm_page_zero_idle() instead of partially duplicated implementations.
   In particular, this change guarantees that the number of free pages
   in the free queue(s) matches the global free page count when Giant
   is released.

Submitted by:	peter (via his p4 "pmap" branch)
2002-07-16 19:39:40 +00:00
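The shape of the change in vm_page_zero_idle(), sketched (queue arithmetic per the vm_pageq API of the time):

    vm_pageq_remove_nowakeup(m);            /* dequeue without a wakeup */
    /* ... zero the page ... */
    vm_pageq_enqueue(PQ_FREE + m->pc, m);   /* requeue; free count stays consistent */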
alc
dd3f128981 o Create vm_contig_launder() to replace code that appears twice
in contigmalloc1().
2002-07-15 06:33:31 +00:00
alc
ec8a106e8a o Lock page queue accesses by vm_page_wire() that aren't
within a critical section.
 o Assert that the page queues lock is held in vm_page_wire()
   except on Alpha.
2002-07-14 23:51:55 +00:00
alc
13868892db o Lock page queue accesses by vm_page_wire(). 2002-07-14 19:36:15 +00:00
alc
5258ed77bb o Lock page queue accesses by vm_page_unmanage().
o Assert that the page queues lock is held in vm_page_unmanage().
2002-07-13 23:55:30 +00:00
alc
828e129a10 o Complete the locking of page queue accesses by vm_page_unwire().
o Assert that the page queues lock is held in vm_page_unwire().
 o Make vm_page_lock_queues() and vm_page_unlock_queues() visible
   to kernel loadable modules.
2002-07-13 20:55:21 +00:00
alc
ee4b41f6c5 o Lock some page queue accesses, in particular, those by vm_page_unwire(). 2002-07-13 19:24:04 +00:00
alc
80b0a79553 o Assert GIANT_REQUIRED on system maps in _vm_map_lock(),
_vm_map_lock_read(), and _vm_map_trylock().  Submitted by: tegge
 o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
   (This clears the way for exec_map accesses to move outside of Giant.
   The exec_map is not a system map.)
 o Remove some premature MPSAFE comments.

Reviewed by:	tegge
2002-07-12 23:20:06 +00:00
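A sketch of the assertion added to the map-locking entry points (exact form assumed); only system maps still require Giant, which is what frees exec_map accesses from it:

    if (map->system_map)        /* kernel submaps only */
        GIANT_REQUIRED;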
dillon
dc5d856e71 Re-enable the idle page-zeroing code. Remove all IPIs from the idle
page-zeroing code as well as from the general page-zeroing code and use a
lazy tlb page invalidation scheme based on a callback made at the end
of mi_switch.

A number of people came up with this idea at the same time so credit
belongs to Peter, John, and Jake as well.

Two-way SMP buildworld -j 5 tests (second run, after stabilization)
    2282.76 real  2515.17 user  704.22 sys	before peter's IPI commit
    2266.69 real  2467.50 user  633.77 sys	after peter's commit
    2232.80 real  2468.99 user  615.89 sys	after this commit

Reviewed by:	peter, jhb
Approved by:	peter
2002-07-12 20:17:06 +00:00
peter
5f510a2bac Avoid a vm_page_lookup() - that uses a spinlock-protected hash. We can
just use the object's memq for our nefarious purposes.
2002-07-12 04:38:51 +00:00
alc
f5dfaef158 o Lock some (unfortunately, not yet all) accesses to the page queues. 2002-07-12 03:17:22 +00:00
alc
41d34057a5 o Lock accesses to the page queues. 2002-07-12 02:55:55 +00:00