This will make a number of things easier in the future, as well as (finally!)
avoiding the Id-smashing problem which has plagued developers for so long.
Boy, I'm glad we're not using sup anymore. This update would have been
insane otherwise.
decrease the size of buffer_map to approximately 2/3 of what it used to be
(buffer_map can be smaller now.) The original commit of these changes
increased the size of buffer_map to the point where the system would
not boot on large systems -- now large systems with large caches will
have even fewer problems than before.
scheme. Additionally, add the capability to check for unexpected
kernel page faults. The maximum amount of kva space for buffers hasn't
been decreased from where it is, but it will now be possible to do so.
This scheme manages the kva space similarly to the buffers themselves. If
there isn't enough kva space because of usage or fragmentation, buffers
will be reclaimed until a buffer allocation is successful. This scheme
should be very resistant to fragmentation problems until/if the LFS code
is fixed and uses the bogus buffer locking scheme -- but a 'fixed' LFS
is not likely to use such a scheme.
Now there should be NO problem allocating buffers up to MAXPHYS.
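A minimal sketch of the allocate-or-reclaim idea, purely illustrative --
every name below (buf_kva_alloc, kva_try_alloc, reclaim_oldest_buffer) is a
hypothetical stand-in, not the actual kernel interface:

    /* Hypothetical helpers, standing in for the real map/queue code. */
    void *kva_try_alloc(unsigned long size);   /* NULL if no contiguous kva */
    int   reclaim_oldest_buffer(void);         /* frees one buffer's kva */

    void *
    buf_kva_alloc(unsigned long size)
    {
        void *kva;

        /*
         * If usage or fragmentation prevents a contiguous allocation,
         * reclaim buffers one at a time and retry until it succeeds.
         */
        while ((kva = kva_try_alloc(size)) == NULL) {
            if (reclaim_oldest_buffer() == 0)
                return (NULL);          /* nothing left to reclaim */
        }
        return (kva);
    }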
larger than the vfs layer can provide. We now automatically support
32K clusters if MSDOSFS is installed, and panic if a filesystem tries
to allocate a buffer larger than MAXBSIZE.
This commit is a result of some "prodding" by BDE.
substantially increasing buffer space. Specifically, we double
the number of buffers, but allocate only half the amount of memory
per buffer. Note that VDIR files aren't cached unless instantiated
in a buffer. This will significantly improve caching.
The heuristic for management of memory backing the buffer cache was
nice, but didn't work due to some architectural problems. Simplify
and improve the algorithm.
Major: When blocking occurs in allocbuf() for VMIO files,
excess wire counts could accumulate.
Major: Pages are incorrectly accumulated into the physical
buffer for clustered reads. This happens when the bogus
page is needed.
Minor: When reclaiming buffers, the async flag on the buffer
needs to be zero, or the reclaim is not optimal.
Minor: The age flag should be cleared if a buffer is wanted.
incorrect, and correct the support for B_ORDERED. The spl window
fix was from Peter Wemm, and his questions led me to find the problem with
the interrupt time page manipulation.
B_ASYNC flag broke things pretty badly (freeing a buffer already on
a queue, or other weird buffer queue errors.) The broken code is
left in, commented out, but this makes the problem go away for
now.
The default level works with minimal overhead, but one can also enable
full, efficient use of a 512K cache. (Parameters can be generated
to support arbitrary cache sizes also.)
Bowrite guarantees that buffers queued after a call to bowrite will
be written after the specified buffer (on a particular device).
Bowrite does this either by taking advantage of hardware ordering support
(e.g. tagged queueing on SCSI devices) or resorting to a synchronous write.
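A sketch of the two strategies (the capability test is a hypothetical
name, and the real routine's details may differ):

    /*
     * Illustrative only.  If the device can order writes itself, mark
     * the buffer ordered and let it go asynchronously; otherwise fall
     * back to a synchronous write so nothing later can pass it.
     */
    int
    bowrite_sketch(struct buf *bp)
    {
        if (dev_supports_ordered_tags(bp->b_dev)) {  /* hypothetical test */
            bp->b_flags |= B_ORDERED;
            bawrite(bp);            /* asynchronous, hardware-ordered */
            return (0);
        }
        return (bwrite(bp));        /* synchronous fallback */
    }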
The interface into the "VMIO" system has changed to be more consistent
and robust. Essentially, it is now no longer necessary to call vn_open
to get merged VM/Buffer cache operation, and exceptional conditions
such as merged operation of VBLK devices are handled more simply and correctly.
This code corrects a potentially large set of problems including the
problems with ktrace output and loaded systems, file create/deletes,
etc.
Most of the changes to NFS are cosmetic and naming changes, eliminating
a layer of subroutine calls. The direct calls to vput/vrele have
been reinstated for better cross-platform compatibility.
Reviewed by: davidg
The i386 pmap module uses a special area of kernel virtual memory for mapping
of page tables pages when it needs to modify another process's virtual
address space. It's called the 'alternate page table map'. There is only one
of them and it's expected that only one process will be using it at once and
that the operation is atomic.
When the merged VM/buffer cache was implemented over a year ago, it became
necessary to rundown VM pages at I/O completion. The unfortunate and
unforeseen side effect of this is that pmap functions are now called at bio
interrupt time. If there happened to be a process using the alternate page
table map when this I/O completion occurred, it was possible for a different
process's address space to be switched into the alternate page table map -
leaving the current pmap process with the wrong address space mapped when
the interrupt completed. This resulted in BAD things happening like pages
being mapped or removed from the wrong address space, etc. Since a very
common case of a process modifying another process's address space is during
fork when the kernel stack is inserted, one of the most common manifestations
of this bug was the kernel stack not being mapped properly, resulting in a
silent hang or reboot. This made it VERY difficult to troubleshoot this bug
(I've been trying to figure out the cause of this for >6 months). Fortunately,
the set of conditions that must be true before this problem occurs is
sufficiently rare that most people never saw the bug occur. As I/O
rates increase, however, so does the frequency of the crashes. This problem
used to kill wcarchive about every 10 days, but in more recent times, when
the traffic exceeded 100GB/day, the machine could barely manage 6 hours of
uptime.
The fix is to make certain that no process has the pages mapped that are
involved in the I/O, before the I/O is started. The pages are made busy, so
no process will be able to map them, either, until the I/O has finished.
This side-steps the issue by still allowing the pmap functions to be called
at interrupt time, but also assuring that the alternate page table map won't
be switched.
Unfortunately, this appears not to be the only cause of this problem. :-(
Reviewed by: dyson
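The shape of the fix described above, as a sketch (the loop is
illustrative; vm_page_protect and the PG_BUSY flag are the era's
interfaces, the rest is assumed):

    /*
     * Before starting the I/O: busy each page and remove all of its
     * mappings, so no pmap activity on these pages can happen at
     * interrupt time.
     */
    for (i = 0; i < bp->b_npages; i++) {
        vm_page_t m = bp->b_pages[i];

        m->flags |= PG_BUSY;               /* no new mappings allowed */
        vm_page_protect(m, VM_PROT_NONE);  /* tear down existing ones */
    }
    /* ... start the I/O; the pages stay busy until completion ... */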
All new code is "#ifdef PC98"ed so this should make no difference to
PC/AT (and its clones) users.
Ok'd by: core
Submitted by: FreeBSD(98) development team
contributions or ideas from Stephen McKay <syssgm@devetir.qld.gov.au>,
Alan Cox <alc@cs.rice.edu>, David Greenman <davidg@freebsd.org> and me:
More usage of the TAILQ macros. Additional minor fix to queue.h.
Performance enhancements to the pageout daemon.
Addition of a wait in the case that the pageout daemon
has to run immediately.
Slightly modify the pageout algorithm.
Significant revamp of the pmap/fork code:
1) PTE's and UPAGES's are NO LONGER in the process's map.
2) PTE's and UPAGES's reside in their own objects.
3) TOTAL elimination of recursive page table pagefaults.
4) The page directory now resides in the PTE object.
5) Implemented pmap_copy, thereby speeding up fork time.
6) Changed the pv entries so that the head is a pointer
and not an entire entry (see the sketch after this list).
7) Significant cleanup of pmap_protect, and pmap_remove.
8) Removed significant amounts of machine dependent
fork code from vm_glue. Pushed much of that code into
the machine dependent pmap module.
9) Support more completely the reuse of pages that are
already zeroed (page table pages and page directories),
avoiding redundant zeroing.
Performance and code cleanups in vm_map:
1) Improved and simplified allocation of map entries.
2) Improved vm_map_copy code.
3) Corrected some minor problems in the simplify code.
Implemented splvm (combo of splbio and splimp.) The VM code now
seldom uses splhigh.
Improved the speed of and simplified kmem_malloc.
Minor mod to vm_fault to avoid using pre-zeroed pages in the case
of objects with backing objects, along with the already
existent condition of having a vnode. (If there is a backing
object, there will likely be a COW... With a COW, it isn't
necessary to start with a pre-zeroed page.)
Minor reorg of source to perhaps improve locality of reference.
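The sketch for item 6 (member layout is illustrative):

    /* Member layout is illustrative. */
    struct pv_entry {
        struct pv_entry *pv_next;   /* next mapping of this page */
        /* pmap and virtual address of the mapping ... */
    };

    /* Before: the chain head was an entire entry, one per physical page:
     *     struct pv_entry pv_table[npages];
     * After: the head is just a pointer, a fraction of the size:
     *     struct pv_entry *pv_table[npages];
     */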
residing in a buffer that had been dirtied by a process was being
handled incorrectly. The pages were mistakenly placed into the
cache queue. This would likely have the effect of mmap'ed page modifications
being lost when I/O system calls were used simultaneously on the same
locations in a file.
Submitted by: davidg
queue type is not set to QUEUE_NONE. This appears to have
caused a hang bug that has been lurking.
2) Fix bugs where brelse'ing locked buffers does not "free" them, although
the code assumes that it does. This can cause hangs when LFS is used.
3) Use malloced memory for directories when applicable. The amount
of malloced memory is seriously limited, but should decrease the
amount of memory used by an average directory to 1/4 - 1/2 of its
previous size.
This capability is fully tunable. (Note that there is no config
parameter, and might never be.)
4) Bias slightly the buffer cache usage towards non-VMIO buffers. Since
the data in VMIO buffers is not lost when the buffer is reclaimed, this
will help performance. This is adjustable also.
Speed up for vfs_bio -- addition of a routine bqrelse to greatly diminish
overhead for the merged cache (sketched below, after this set of changes).
Efficiency improvement for vfs_cluster. It used to make a lot of redundant
calls to cluster_rbuild.
Correct the ordering for vrele of .text and release of credentials.
Use the selective tlb update for 486/586/P6.
Numerous fixes to the size of objects allocated for files. Additionally,
fixes in the various pagers.
Fixes for proper positioning of vnode_pager_setsize in msdosfs and ext2fs.
Fixes in the swap pager for exhausted resources. The pageout code
will not thrash as readily.
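A sketch of what such a quick release can skip (the queue name and
details are assumptions, not the actual routine):

    /*
     * Illustrative bqrelse(): the buffer's contents remain valid, so
     * just requeue it and wake any waiter -- none of the page
     * manipulation that a full brelse() must consider.
     */
    void
    bqrelse_sketch(struct buf *bp)
    {
        TAILQ_INSERT_TAIL(&bufqueues[QUEUE_LRU], bp, b_freelist);
        bp->b_qindex = QUEUE_LRU;       /* assumed queue choice */
        if (bp->b_flags & B_WANTED) {
            bp->b_flags &= ~B_WANTED;
            wakeup(bp);                 /* waiters sleeping in getblk() */
        }
        bp->b_flags &= ~B_BUSY;
    }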
Change the page queue flags (PG_ACTIVE, PG_INACTIVE, PG_FREE, PG_CACHE) into
page queue indices (PQ_ACTIVE, PQ_INACTIVE, PQ_FREE, PQ_CACHE),
thereby improving the efficiency of several routines (sketched below, after
these items).
Eliminate even more unnecessary vm_page_protect operations.
Significantly speed up process forks.
Make vm_object_page_clean more efficient, thereby eliminating the pause
that happens every 30 seconds.
Make sequential clustered writes B_ASYNC instead of B_DELWRI even in the
case of filesystems mounted async.
Fix a panic with busy pages when write clustering is done for non-VMIO
buffers.
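The sketch referred to above: flag tests become a table-driven index
(exact values and field names are illustrative):

    /* Before: queue membership was a set of flag bits.
     *     if (m->flags & PG_ACTIVE) ... else if (m->flags & PG_INACTIVE) ...
     * After: a small index selects the queue directly.
     */
    #define PQ_NONE     0
    #define PQ_FREE     1
    #define PQ_INACTIVE 2
    #define PQ_ACTIVE   3
    #define PQ_CACHE    4

    /* Moving a page between queues is now uniform array indexing: */
    TAILQ_REMOVE(&vm_page_queues[m->queue].pl, m, pageq);
    m->queue = PQ_ACTIVE;
    TAILQ_INSERT_TAIL(&vm_page_queues[m->queue].pl, m, pageq);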
metadata and VBLK type devices. The code is currently mostly disabled,
and a work-around has been added to disable attempted clustered writes
for VBLK type device buffers. Clustered write of metadata is currently
a work in progress.
"getblk" hang. The B_WANTED flag was being cleared gratuitously,
also the optimization of gbincore for ignoring the B_INVAL flag was
incorrect. There is no place in the code where buffers are on the
hash list that are B_INVAL and not B_BUSY.
Move a lot of variables home to their own code (In good time before xmas :-)
Introduce the string description of the format.
Add a couple more functions to poke into these marvels, while I try to
decide what the correct interface should look like.
Next is adding vars on the fly, and sysctl looking at them too.
Removed a tiny bit of defunct and #ifdefed unused code in swapgeneric.
1) Make cluster buffer list be a non-malloced chain. This eliminates
yet another 'evil' M_WAITOK and generally cleans up the code.
2) Fix write clustering for ext2fs. It was just broken. Also, ffs
clustering had an efficiency problem in that more bawrites were happening
than should have been.
3) Make changes to buf.h to support the above, plus remove b_pfcent
at the request of David Greenman.
Reviewed by: davidg (partially)
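A sketch of the general technique behind item 1 (names are
illustrative): instead of malloc'ing an array to hold the buffers of a
cluster, link them through a TAILQ entry embedded in the buffer itself,
so assembling a cluster allocates nothing:

    #include <sys/queue.h>

    struct cbuf {                          /* stand-in for struct buf */
        TAILQ_ENTRY(cbuf) b_cluster_entry; /* embedded chain link */
        /* ... the rest of the buffer ... */
    };
    TAILQ_HEAD(cluster_head, cbuf);

    /* Threading a buffer onto a cluster needs no M_WAITOK malloc: */
    void
    cluster_append(struct cluster_head *ch, struct cbuf *bp)
    {
        TAILQ_INSERT_TAIL(ch, bp, b_cluster_entry);
    }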
valid bytes, we must also clear the B_DONE flag. Some filesystems
(including NFS) depend on this, and it is probably the cause of the biodone
error and subsequent crash. Anyway, this change needs to be made.
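In sketch form (b_validoff/b_validend follow the buf fields of this
period; treat the exact shape as illustrative):

    /* Shrinking the valid range means the old "contents complete"
     * state no longer holds, so B_DONE must be cleared with it. */
    bp->b_validoff = bp->b_validend = 0;
    bp->b_flags &= ~B_DONE;    /* force a re-read before data is trusted */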
Prototypes are located in <sys/sysproto.h>.
Add appropriate #include <sys/sysproto.h> to files that needed
protos from systm.h.
Add structure definitions to appropriate files that relied on sys/systm.h,
right before system call definition, as in the rest of the kernel source.
In kern_prot.c, instead of using the dummy structure "args", create
individual dummy structures named <syscall>_args. This makes
life easier for prototype generation.
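For illustration, the pattern looks like this (setuid is chosen
arbitrarily; each struct's members mirror the corresponding syscall's
argument list):

    /* Before: several syscalls shared one anonymous dummy struct. */
    struct args {
        int dummy;      /* members illustrative */
    };

    /* After: one named struct per syscall, e.g. for setuid(2): */
    struct setuid_args {
        uid_t uid;
    };

    int setuid(struct proc *p, struct setuid_args *uap, int *retval);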
1) "obj" was't initialized properly, resulting in an important vm_page_lookup
always failing (resulting in a panic).
2) busy pages could be put on the cache queue or freed (resulting in a panic).
support for EXT2FS. Note that the Sig-11 problems appear to be caused by
this, but there is still probably an underlying VM problem that let this
clustering bug cause vnode objects to appear to be corrupted.
The direct manifestation of this bug would have been severely misread
files. It is possible that processes would Sig-11 on very damaged
input files, which might explain the mysterious differences in system
behaviour when phk's malloc is being used.