Commit Graph

388 Commits

Author SHA1 Message Date
Bruce Evans
322dfc2bd5 Fixed undeclared variables for the !(PQ_L2_SIZE > 1) case.
Removed redundant #include.
1996-09-28 17:53:18 +00:00
John Dyson
a2f4a84696 Reviewed by:
Submitted by:
Obtained from:
1996-09-28 03:33:40 +00:00
David Greenman
cd6eea255f Fixed bug with reversed trunc/round_page() in madvise...start must be
trunced, end must be rounded.
1996-09-19 10:12:41 +00:00
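A minimal userland sketch of the rounding rule the commit above restores: the start of the range is truncated down to a page boundary and the end is rounded up. trunc_page()/round_page() are redefined locally with an assumed 4K page size so the example stands alone; it is illustrative, not the madvise code itself.

```c
#include <stdio.h>

#define PAGE_SIZE   4096UL
#define PAGE_MASK   (PAGE_SIZE - 1)
#define trunc_page(x)  ((x) & ~PAGE_MASK)            /* round down to page */
#define round_page(x)  (((x) + PAGE_MASK) & ~PAGE_MASK) /* round up to page */

int
main(void)
{
        unsigned long addr = 0x12345, len = 0x1000;
        unsigned long start = trunc_page(addr);        /* 0x12000 */
        unsigned long end   = round_page(addr + len);  /* 0x14000 */

        printf("madvise range: %#lx - %#lx\n", start, end);
        return (0);
}
```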
Bruce Evans
dd106ca742 Removed iprintf(). It was copied to db_iprintf() in ddb. 1996-09-15 11:24:21 +00:00
Bruce Evans
c7c34a24a3 Attached vm ddb commands `show map', `show vmochk', `show object',
`show vmopag', `show page' and `show pageq'.  Moved all vm ddb stuff
to the ends of the vm source files.

Changed printf() to db_printf(), `indent' to db_indent, and iprintf()
to db_iprintf() in ddb commands.  Moved db_indent and db_iprintf()
from vm to ddb.

vm_page.c:
Don't use __pure.  Staticized.

db_output.c:
Reduced page width from 80 to 79 to inhibit double spacing for long
lines (there are still some problems if words are printed across
column 79).
1996-09-14 11:54:59 +00:00
John Dyson
9fea9a6f5b The whole issue of not supporting VOP_LOCK for VBLK devices should be
rethought.  This fixes YET another problem with unmounting filesystems.
The root cause is not fixed here, but at least the problem has gone
away.
1996-09-10 05:28:23 +00:00
John Dyson
4334b0d815 Fixed the use of the wrong variable in vm_map_madvise. 1996-09-08 23:49:47 +00:00
John Dyson
5070c7f8c5 Addition of page coloring support. Various levels of coloring are afforded.
The default level works with minimal overhead, but one can also enable
full, efficient use of a 512K cache.  (Parameters can be generated
to support arbitrary cache sizes also.)
1996-09-08 20:44:49 +00:00
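A hedged sketch of the idea behind the page coloring introduced above: physical pages are spread across a set of "colors" (separate free/cache queues) so that pages which would collide in a direct-mapped cache land in different buckets. The 512K figure comes from the commit message; the 4K page size, the helper name, and the modulo formula are illustrative assumptions, not the commit's exact code.

```c
#include <stdio.h>

#define PAGE_SIZE   4096
#define CACHE_SIZE  (512 * 1024)              /* fully colored 512K cache */
#define PQ_L2_SIZE  (CACHE_SIZE / PAGE_SIZE)  /* number of colors: 128 */

/* queue (color) index for a physical page, by its page frame number */
static unsigned
page_color(unsigned long phys_page_index)
{
        return (phys_page_index % PQ_L2_SIZE);
}

int
main(void)
{
        /* pages 0 and 128 share a color and would collide in the cache */
        printf("color(0)=%u color(128)=%u color(129)=%u\n",
            page_color(0), page_color(128), page_color(129));
        return (0);
}
```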
John Dyson
b8e251a56d Improve the scalability of certain pmap operations. 1996-09-08 16:57:53 +00:00
John Dyson
6476c0d204 Even though this looks like it, this is not a complex code change.
The interface into the "VMIO" system has changed to be more consistent
and robust.  Essentially, it is now no longer necessary to call vn_open
to get merged VM/Buffer cache operation, and exceptional conditions
such as merged operation of VBLK devices are handled more simply and correctly.

This code corrects a potentially large set of problems including the
problems with ktrace output and loaded systems, file create/deletes,
etc.

Most of the changes to NFS are cosmetic and name changes, eliminating
a layer of subroutine calls.  The direct calls to vput/vrele have
been re-instituted for better cross platform compatibility.

Reviewed by: davidg
1996-08-21 21:56:23 +00:00
John Dyson
67bf686897 Backed out the recent changes/enhancements to the VM code. The
problem with the 'shell scripts' was found, but a 'strange' problem with
a 486 laptop could not be tracked down.  This commit reverts the code to
25-jul; the changes will be re-entered after the snapshot in smaller
(more easily tested) chunks.
1996-07-30 03:08:57 +00:00
David Greenman
0f281c28fa Slight performance tweak for previous commit. 1996-07-28 02:54:09 +00:00
John Dyson
f230c45cbe Undo part of the scalability commit. Many of the changes
in vm_fault had some performance enhancements not ready
for prime time.  This commit backs out some of the changes.
1996-07-28 01:14:01 +00:00
John Dyson
bf6dfc7b35 Allow sequentially created mmap'ed anonymous regions to coalesce. There
is little or no reason to create a swap pager for small mmap's.  The
vm_map_insert code will automatically create a swap pager if the object
becomes too large.  This fix, per a request from phk.
1996-07-27 17:21:41 +00:00
John Dyson
3b297e93b8 Clean up some lint. 1996-07-27 04:22:12 +00:00
John Dyson
feb32a8fa9 Remove experimental header file. My test-build must have picked it
up in an unexpected place.
Submitted by:	jkh
1996-07-27 04:06:11 +00:00
John Dyson
819c1c6f43 Missing (prototype) change from the previous commit. 1996-07-27 03:47:35 +00:00
John Dyson
4f4d35edf0 This commit is meant to solve a couple of VM system problems or
performance issues.

	1) The pmap module has had too many inlines, and so the
	   object file is simply bigger than it needs to be.
	   Some common code is also merged into subroutines.
	2) Removal of some *evil* PHYS_TO_VM_PAGE macro calls.
	   Unfortunately, a few have needed to be added also.
	   The removal caused the need for more vm_page_lookups.
	   I added lookup hints to minimize the need for the
	   page table lookup operations.
	3) Removal of some bogus performance improvements that
	   mostly made the code more complex (tracking individual
	   page table page updates unnecessarily).  Those improvements
	   actually hurt 386 processors' perf (not that people who
	   worry about perf use 386 processors anymore :-)).
	4) Changed pv queue manipulations/structures to be TAILQ's.
	5) The pv queue code has had some performance problems since
	   day one.  Some significant scalability issues are resolved
	   by threading the pv entries from the pmap AND the physical
	   address instead of just the physical address.  This makes
	   certain pmap operations run much faster.  This does
	   not affect most micro-benchmarks, but should help loaded system
	   performance *significantly*.  DG helped and came up with most
	   of the solution for this one.
	6) Most if not all pmap bit operations follow the pattern:
		pmap_test_bit();
		pmap_clear_bit();
	   That made for twice the necessary pv list traversal.   The
	   pmap interface now supports only pmap_tc_bit type operations:
	   pmap_[test/clear]_modified, pmap_[test/clear]_referenced.
	   Additionally, the modified routine now takes a vm_page_t arg
	   instead of a phys address.  This eliminates a PHYS_TO_VM_PAGE
	   operation.
	7) Several rewrites of routines that contain redundant code to
	   use common routines, so that there is a greater likelihood of
	   keeping the cache footprint smaller.
1996-07-27 03:24:10 +00:00
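A compilable sketch of item 5 in the commit above: each pv entry carries two TAILQ links, so the same entry is reachable both from its pmap and from the physical page it maps, and either traversal finds it without scanning the other list. Structure and field names here are illustrative stand-ins, not the actual pmap code.

```c
#include <sys/queue.h>
#include <stdio.h>

struct pv_entry {
        TAILQ_ENTRY(pv_entry) pv_plist;   /* link on the pmap's list */
        TAILQ_ENTRY(pv_entry) pv_list;    /* link on the vm_page's list */
        unsigned long pv_va;              /* virtual address of the mapping */
};

TAILQ_HEAD(pv_head, pv_entry);

int
main(void)
{
        struct pv_head pmap_pvlist, page_pvlist;
        struct pv_entry pv = { .pv_va = 0x1000 };
        struct pv_entry *p;

        TAILQ_INIT(&pmap_pvlist);
        TAILQ_INIT(&page_pvlist);
        /* the same entry is threaded onto both lists at once */
        TAILQ_INSERT_TAIL(&pmap_pvlist, &pv, pv_plist);
        TAILQ_INSERT_TAIL(&page_pvlist, &pv, pv_list);

        TAILQ_FOREACH(p, &pmap_pvlist, pv_plist)
                printf("via pmap: va=%#lx\n", p->pv_va);
        TAILQ_FOREACH(p, &page_pvlist, pv_list)
                printf("via page: va=%#lx\n", p->pv_va);
        return (0);
}
```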
Bruce Evans
6ab46d52a5 Don't use NULL in non-pointer contexts. 1996-07-12 04:12:25 +00:00
John Dyson
502ba6e4a8 Back-off on the previous commit, specifically remove the look-ahead
optimization on the active queue scan.  I will do this correctly later.
1996-07-08 03:22:55 +00:00
John Dyson
c8c4b40cca Fix a problem with the pageout daemon RSS limiting, where it degrades
performance to LRU or worse when RSS limiting takes effect.  Also,
make an end condition in the active queue scan more efficient in the
case where pages are removed from the active queue as a side effect
of a pmap operation.
1996-07-08 02:25:53 +00:00
David Greenman
9579ee641a In all special cases for spl or page_alloc where kmem_map is checked for,
mb_map (a submap of kmem_map) must also be checked.
Thanks to wcarchive (err...sort of) for demonstrating this bug.
1996-07-07 03:27:41 +00:00
John Dyson
a6e6bcc5f4 Properly set the PG_MAPPED and PG_WRITEABLE flags. This fixes some potential
problems with vm_map_remove/vm_map_delete.
1996-07-02 02:08:02 +00:00
John Dyson
877329e059 Make -current consistent with -stable regarding the time that a process
sleeps before being swapped out.  The time is increased from 4 secs to
10 secs.  Originally I had decreased it from 20 to 4, but that is a bit
severe.  20 is too long though.
1996-06-30 21:16:18 +00:00
David Greenman
01155bd720 Make sure we have an object in the map entry before trying to trim pages
from it.
1996-06-29 09:17:17 +00:00
John Dyson
38efa82b23 This commit does a couple of things:
Re-enables the RSS limiting, and the routine is now tail-recursive,
	making it much safer (eliminates the possibility of kernel stack
	overflow).  Also, the RSS limiting is a little more intelligent about
	finding the likely objects that are pushing the process over the limit.

	Added some sysctls that help with VM system tuning.

New sysctl features:
	1)	Enable/disable lru pageout algorithm.
		vm.pageout_algorithm = 0, default algorithm that works
			well, especially using X windows and heavy
			memory loading.  Can have adverse effects,
			sometimes slowing down program loading.

		vm.pageout_algorithm = 1, close to true LRU.  Works much
			better than clock, etc.  Does not work as well as
			the default algorithm in general.  Certain memory
			"malloc" type benchmarks work a little better with
			this setting.

		Please give me feedback on the performance results
		associated with these.

	2)	Enable/disable swapping.
		vm.swapping_enabled = 1, default.

		vm.swapping_enabled = 0, useful for cases where swapping
			degrades performance.

		The config option "NO_SWAPPING" is still operative, and
		takes precedence over the sysctl.  If "NO_SWAPPING" is
		specified, the sysctl still exists, but "vm.swapping_enabled"
		is hard-wired to "0".

Each of these can be changed "on the fly."
1996-06-26 05:39:27 +00:00
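A small userland sketch (FreeBSD) of inspecting the two knobs described in the commit above with sysctlbyname(3). It only reads the current values; the sysctl names are taken verbatim from the commit message, and on a system where one is absent the call simply fails and nothing is printed.

```c
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
        int algo, swapping;
        size_t len;

        len = sizeof(algo);
        if (sysctlbyname("vm.pageout_algorithm", &algo, &len, NULL, 0) == 0)
                printf("vm.pageout_algorithm = %d\n", algo);   /* 0 = default, 1 = near-LRU */

        len = sizeof(swapping);
        if (sysctlbyname("vm.swapping_enabled", &swapping, &len, NULL, 0) == 0)
                printf("vm.swapping_enabled = %d\n", swapping); /* hard-wired to 0 with NO_SWAPPING */
        return (0);
}
```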
John Dyson
f0e2953e5e Fix some serious problems with limits checking in the sbrk(2)/brk(2)
code.
Reviewed by:	bde
1996-06-25 00:36:46 +00:00
John Dyson
a001376dc3 Remove RSS limiting until I rewrite the code to be non-recursive. The
code can overrun the kernel stack under very stressful conditions.
1996-06-24 04:30:24 +00:00
John Dyson
2a4eb04bfd Improve algorithm for page hash queue. It was previously about
as bad as it could be.  This algorithm appears to improve fork
performance (barely) measurably.
1996-06-21 05:39:22 +00:00
John Dyson
ef743ce6ed Several bugfixes/improvements:
1) Make it much less likely to miss a wakeup in vm_page_free_wakeup
	2) Create a new entry point into pmap: pmap_ts_referenced, eliminates
	   the need to scan the pv lists twice in many cases.  Perhaps there
	   is a lot more to do here to work on minimizing pv list manipulation.
	3) Minor improvements to vm_pageout including the use of pmap_ts_ref.
	4) Major changes and code improvement to pmap.  This code has had
	   several serious bugs in page table page manipulation.  In order
	   to simplify the problem, and hopefully solve it for once and all,
	   page table pages are no longer "managed" with the pv list stuff.
	   Page table pages are only (mapped and held/wired) or
	   (free and unused) now.  Page table pages are never inactive,
	   active or cached.  These changes have probably fixed the
	   hold count problems, but if they haven't, then the code is
	   simpler anyway for future bugfixing.
	5) The pmap code has been sorely in need of re-organization, and I
	   have taken a first (of probably many) steps.  Please tell me
	   if you have any ideas.
1996-06-17 03:35:40 +00:00
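An illustrative, self-contained sketch of the point of item 2 in the commit above: a combined test-and-clear pass touches each mapping once, where separate "test" and "clear" entry points would traverse the same pv list twice. The types below are stand-ins, not the kernel's.

```c
#include <stdbool.h>
#include <stdio.h>

struct mapping {
        bool referenced;        /* stand-in for the PTE's referenced bit */
};

/* one traversal: count and clear the referenced bits in a single pass */
static int
ts_referenced(struct mapping *maps, int n)
{
        int count = 0;

        for (int i = 0; i < n; i++) {
                if (maps[i].referenced) {
                        maps[i].referenced = false;
                        count++;
                }
        }
        return (count);
}

int
main(void)
{
        struct mapping maps[3] = { { true }, { false }, { true } };

        printf("referenced mappings: %d\n", ts_referenced(maps, 3)); /* 2 */
        printf("second pass: %d\n", ts_referenced(maps, 3));         /* 0 */
        return (0);
}
```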
John Dyson
b5b40fa62b Various bugfixes/cleanups from me and others:
1) Remove potential race conditions on waking up in vm_page_free_wakeup
   by making sure that it is at splvm().
2) Fix another bug in vm_map_simplify_entry.
3) Be more complete about converting from default to swap pager
   when an object grows to be large enough that there can be
   a problem with data structure allocation under low memory
   conditions.
4) Make some madvise code more efficient.
5) Added some comments.
1996-06-16 20:37:31 +00:00
David Greenman
664275648a Move a case of PG_MAPPED being set before a pmap_enter(). This will likely
make no difference, but it will make it consistent with other uses of
PG_MAPPED.
1996-06-14 23:26:40 +00:00
John Dyson
419702a468 Fix a very significant cnt.v_wire_count leak in vm_page.c, and some
minor leaks in pmap.c.  Bruce Evans made me aware of this problem.
1996-06-12 06:52:12 +00:00
John Dyson
5fcf66debe Fix some serious errors in vm_map_simplify_entries. 1996-06-12 04:03:21 +00:00
John Dyson
3091ee0955 Mostly superficial code improvements, add a diagnostic. The
code improvements include significant simplification of the reservation
of the swap pager control blocks for reads.  Add a panic for an inconsistent
swap pager control block count.
1996-06-10 04:58:48 +00:00
John Dyson
c82b01813e Keep the vm_fault/vm_pageout from getting into an "infinite paging loop", by
reserving "cached" pages before waking up the pageout daemon.  This will reserve
the faulted page, and keep the system from thrashing itself to death given
this condition.
1996-06-10 00:25:40 +00:00
John Dyson
886d3e1150 Adjust the threshold for blocking on movement of pages from the cache
queue in vm_fault.

Move the PG_BUSY in vm_fault to the correct place.

Remove redundant/unnecessary code in pmap.c.

Properly block on rundown of page table pages, if they are busy.

I think that the VM system is in pretty good shape now, and the following
individuals (among others, in no particular order) have helped with this
recent bunch of bugs, thanks!  If I left anyone out, I apologize!

Stephen McKay, Stephen Hocking, Eric J. Chet, Dan O'Brien, James Raynard,
Marc Fournier.
1996-06-08 06:48:35 +00:00
John Dyson
6b6f000870 Keep page-table pages from ever being sensed as dirty. This should fix
some problems with the page-table page management code, since it can't
deal with the notion of page-table pages being paged out or in transit.
Also, clean up some stylistic issues per some suggestions from
Stephen McKay.
1996-06-05 03:31:49 +00:00
John Dyson
ff97964a2e Disable madvise optimizations for device pager objects (some of the
operations don't work with FICTITIOUS pages.)  Also, close a window
between PG_MANAGED and pmap_enter that can mess up the accounting of
the managed flag.  This problem could likely cause a hold_count error
for page table pages.
1996-06-01 20:50:57 +00:00
John Dyson
f35329ac0f This commit is dual-purpose, to fix more of the pageout daemon
queue corruption problems, and to apply Gary Palmer's code cleanups.
David Greenman helped with these problems also.  There is still
a hang problem using X in small memory machines.
1996-05-31 00:38:04 +00:00
John Dyson
545901f794 Correct some unfortunately chosen constants, otherwise, not enough
pages are calculated for deferred allocation of swap pager data structures.
This is a follow-on to the previous commit to this file.
1996-05-29 06:33:30 +00:00
John Dyson
b182ec9eb4 After careful review by David Greenman and myself, David had found a
case where blocking can occur, thereby giving other processes a chance
to modify the queue where a page resides.  This could cause numerous
process and system failures.
1996-05-29 05:15:33 +00:00
John Dyson
a5b6fd29a3 Make sure that pageout deadlocks cannot occur. There is a problem
that the data structures needed to support the swap pager can take
enough space to fully deplete system memory, and cause a deadlock.
This change keeps large objects from being filled with dirty pages
without the appropriate swap pager data structures.  Right now,
default objects greater than 1/4 the size of available system memory
are converted to swap objects, thereby eliminating the risk of deadlock.
1996-05-29 05:12:23 +00:00
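A hedged sketch of the policy described above, reduced to its arithmetic: a default (anonymous) object larger than a quarter of system memory is switched to a swap-backed object before its dirty pages can pin the swap pager into a deadlock. The helper and the page counts are illustrative, not the commit's code.

```c
#include <stdbool.h>
#include <stdio.h>

/* convert once the object exceeds 1/4 of the pages in the system */
static bool
should_convert_to_swap(unsigned long object_pages, unsigned long total_pages)
{
        return (object_pages > total_pages / 4);
}

int
main(void)
{
        /* e.g. a 16MB machine with 4K pages has 4096 pages */
        printf("%d\n", should_convert_to_swap(1500, 4096)); /* 1: convert */
        printf("%d\n", should_convert_to_swap(512, 4096));  /* 0: keep default */
        return (0);
}
```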
John Dyson
85a376eb93 Fix a couple of problems in the pageout_scan routine. First, there is
a condition when blocking can occur, and the daemon did not check properly
for a page remaining on the expected queue.  Additionally, the inactive
target was being set much too large for small memory machines.  It is now
being calculated based upon the amount of user memory available on every
pageout daemon run.  Another problem was that if memory was very low, the
pageout daemon could fail repeatedly to traverse the inactive queue.
1996-05-26 07:52:09 +00:00
John Dyson
0ed4376231 I think this covers (fixes) the last batch of freeing active/held/busy page
problems.  BY MISTAKE, the vm_page_unqueue (or equiv) was removed from the
vm_fault code.  Really bad things appear to happen if a page is on a queue
while it is being faulted.
1996-05-26 05:30:33 +00:00
John Dyson
f777ab7b8b Add an assert to vm_page_cache. We should never cache a dirty page. 1996-05-24 05:20:15 +00:00
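A tiny sketch of the consistency check this commit describes, with assert(3) standing in for the kernel's panic; the structure is a stand-in, not the real vm_page.

```c
#include <assert.h>

struct vm_page_stub {
        int dirty;              /* non-zero: page has modified contents */
};

static void
cache_page(struct vm_page_stub *m)
{
        /* we should never cache a dirty page */
        assert(m->dirty == 0 && "vm_page_cache: caching a dirty page");
        /* ... move the page onto the cache queue ... */
}

int
main(void)
{
        struct vm_page_stub m = { 0 };

        cache_page(&m);         /* fine: page is clean */
        return (0);
}
```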
John Dyson
1eeaa1e31f Add apparently needed splvm protection to the active queue, and eliminate
an unnecessary test for dirty pages when the page is already known to be dirty.
1996-05-24 05:19:15 +00:00
John Dyson
3077a9c2f4 Eliminate inefficient check for dirty pages for pages in the PQ_CACHE
queue.  Also, modify the MADV_FREE policy (it probably still isn't the final
version.)
1996-05-24 05:17:21 +00:00
John Dyson
a9d4727439 Make the conversion from the default pager to swap pager more robust
in the face of low memory conditions.
1996-05-24 05:14:44 +00:00
John Dyson
99ea1af0a6 Eliminate a vm_page_free, busy panic, in kern_malloc. 1996-05-23 02:24:55 +00:00