and will bypass transfers of more than 8k. Blocks are invalidated after
2 seconds, so removable media should not confuse the cache.
The 8k threshold is a compromise: all UFS transfers performed by
libstand are 8k or less, so large file reads (issued as a stream of 8k
or smaller transfers) still go through the cache and thrash it.
However, many filesystem metadata operations are also performed using
8k blocks, so a lower threshold would give poor performance.
Those of you with an eye for cache algorithms are welcome to tell me
how badly this one sucks; you can start with the 'bcachestats' command
which will print the contents of the cache and access statistics.
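For illustration, here is a minimal sketch of the threshold-and-timeout
policy described above; the names (bcache_blk, bcache_usable,
BCACHE_MAXXFER) are made up for the example and are not the libstand
bcache API.

    #include <sys/types.h>
    #include <time.h>

    #define BCACHE_MAXXFER  (8 * 1024)      /* larger transfers bypass the cache */
    #define BCACHE_TIMEOUT  2               /* seconds before a block goes stale */

    struct bcache_blk {
            int     bc_valid;
            daddr_t bc_blkno;               /* which block is cached */
            time_t  bc_stamp;               /* when it was read */
            char    bc_data[BCACHE_MAXXFER];
    };

    /*
     * Return 1 if the cached copy may be used, 0 if the caller must go
     * to the device (and, for transfers of 8k or less, refill the cache).
     */
    static int
    bcache_usable(struct bcache_blk *bc, daddr_t blkno, size_t size, time_t now)
    {
            if (size > BCACHE_MAXXFER)
                    return (0);             /* bypass: transfer too large */
            if (!bc->bc_valid || bc->bc_blkno != blkno)
                    return (0);             /* miss */
            if (now - bc->bc_stamp > BCACHE_TIMEOUT) {
                    bc->bc_valid = 0;       /* stale: media may have changed */
                    return (0);
            }
            return (1);
    }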
the top half to do it.
Put in a dubious check for subdisk integrity when trying to bring
up a plex where others are already up. This particular kludge is
crying out for a rewrite of the whole state code.
Add code to set_plex_state and set_volume_state to defer updates when
called from an interrupt context. This doesn't happen yet, but it
could.
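A minimal sketch of that deferral, assuming a simple flag that the top
half polls; the names (request_config_update, flush_deferred_config,
write_config) are illustrative, not vinum's actual routines.

    #include <stdio.h>

    static volatile int config_update_deferred;

    static void
    write_config(void)
    {
            /* stands in for the real on-disk config update, which may sleep */
            printf("config written\n");
    }

    static void
    request_config_update(int called_from_intr)
    {
            if (called_from_intr) {
                    /* Interrupt context must not sleep on the config write;
                     * note the request and let the top half flush it later. */
                    config_update_deferred = 1;
                    return;
            }
            write_config();
    }

    /* Called from the top half, where sleeping is safe. */
    static void
    flush_deferred_config(void)
    {
            if (config_update_deferred) {
                    config_update_deferred = 0;
                    write_config();
            }
    }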
the NFSv3 ACCESS RPC problems a little for busy clients that do a lot of
open/close. The nfs code could probably cache the results, but I'm not
sure whether this would be legal or useful. The problem is that with
a CPU farm, each open would trigger a lookup, a getattr, and an access RPC
before the read/write RPC activity even starts. Caching the access results
probably isn't going to help much if the clients access lots of files.
Having the nfs_access() routine interpret the getattr results is a bit of
a hack, but it's how NFSv2 does it, and it might be OK as a mount
attribute for v3.
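If the results were cached, a minimal sketch might look like the
following; the structure, the fields, and the 60-second lifetime are
assumptions for the example, not what the nfs code does.

    #include <sys/types.h>
    #include <time.h>

    /* One cached ACCESS answer per node/credential pair. */
    struct access_cache {
            time_t  ac_stamp;       /* when the server answered */
            uid_t   ac_uid;         /* credential the answer applies to */
            mode_t  ac_mode;        /* access bits the server granted */
    };

    #define ACCESS_TTL      60      /* assumed lifetime, in seconds */

    /*
     * Returns 1 (allowed) or 0 (denied) on a hit, or -1 on a miss, in
     * which case the caller falls back to a real ACCESS RPC.
     */
    static int
    access_cache_check(const struct access_cache *ac, uid_t uid,
        mode_t wanted, time_t now)
    {
            if (ac->ac_uid != uid || now - ac->ac_stamp > ACCESS_TTL)
                    return (-1);
            return ((ac->ac_mode & wanted) == wanted);
    }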
manipulation away from the length comparison. Measurements on beast.cdrom.com
show >3X improvement over the original code on large block sizes, putting the
performance on par with the optimized assembly code in libc.
build 2.2-stable worlds on 3.0-current systems again. objformat
calls getobjformat(), which doesn't exist in 2.2's libc.
Technically there should have been a version number bump when it was
added in -current. But it's used in so few places that it hardly
seems worth that. Besides, the objformat program is very heavily
used during a make world; it won't hurt to have it load a little
faster.
and increase the tx interrupt threshold to 4. This fixes performance
problems on slower systems.
Also fix a mind-o in the rx ring init routine: I used the TX
constant instead of the RX one. This isn't a problem as long as the
rings are the same size, but if they aren't, hijinx will ensue.
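The rx ring slip is the classic copy-and-paste bug; a hypothetical
illustration (the constants and the descriptor layout are not the
driver's real ones) shows why equal ring sizes hide it.

    #define RX_LIST_CNT     16
    #define TX_LIST_CNT     16      /* matching sizes are what hid the bug */

    struct ring_desc {
            struct ring_desc *rd_next;
    };

    static void
    rx_ring_init(struct ring_desc rx_ring[RX_LIST_CNT])
    {
            int i;

            /*
             * Bound the loop and the wrap-around by the RX count; using
             * TX_LIST_CNT here only works while the two happen to match.
             */
            for (i = 0; i < RX_LIST_CNT; i++)
                    rx_ring[i].rd_next = &rx_ring[(i + 1) % RX_LIST_CNT];
    }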
* Embed the stack into the bss section for loader and netboot. This
is required for netboot since otherwise the stack would be inside our
heap (see the sketch after this list).
* Install loader and netboot in /boot by default.
* Fix getbootfile so that it searches for a ',' instead of a ';'
when terminating the filename.
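A hedged sketch of the first item, with made-up symbol names and size;
the actual stack switch belongs in the assembly startup code.

    #define BOOT_STACK_SIZE (32 * 1024)

    char boot_stack[BOOT_STACK_SIZE];       /* uninitialized, so it lands in .bss */

    /*
     * The entry stub then points the stack at the top of the reserved
     * area before any C code runs, roughly:
     *
     *      movl    $boot_stack + BOOT_STACK_SIZE, %esp
     *      call    main
     *
     * Because the array is part of the program image, a heap carved out
     * beyond the end of the image can no longer end up under the stack.
     */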
sure that this needs to be a sync write here, since a VOP_FSYNC()
follows and will schedule, sort, and complete the writes that the
vm_object_page_clean() started (as I think I understand things).
- Use TAILQ_* macros extensively instead of internal names
- Use b_xflags instead of the NOLIST magic number hack in the next pointer
- Clean bufs are inserted at the tail rather than the head.
- Redo the dirty buffer insert so that metadata (negative lbn) goes to the
tail directly rather than to the head. This makes a difference when
inserting dirty data blocks in lbn-sorted order, since data block
insertion no longer has to bypass all the metadata cruft. Data is
lbn-sorted since that makes sense for clustering and writeback ordering,
while sorting metadata doesn't help much since its lbns are
meaningless when walking the list for writebacks. (A sketch of this
insert policy follows the list.)
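Here is that sketch; the struct, the queue, and the function name are
simplified stand-ins, not the kernel's buf code.

    #include <sys/queue.h>

    struct buf {
            TAILQ_ENTRY(buf) b_freelist;
            long             b_lblkno;      /* logical block; negative = metadata */
    };

    TAILQ_HEAD(bqueue, buf);

    static void
    dirtyq_insert(struct bqueue *dq, struct buf *bp)
    {
            struct buf *scan;

            if (bp->b_lblkno < 0) {
                    /* Metadata goes straight to the tail; sorting it by
                     * lbn buys nothing for writebacks. */
                    TAILQ_INSERT_TAIL(dq, bp, b_freelist);
                    return;
            }
            /*
             * Data blocks stay lbn-sorted ahead of the metadata section,
             * so insertion never has to walk past the metadata.
             */
            TAILQ_FOREACH(scan, dq, b_freelist) {
                    if (scan->b_lblkno < 0 || scan->b_lblkno > bp->b_lblkno) {
                            TAILQ_INSERT_BEFORE(scan, bp, b_freelist);
                            return;
                    }
            }
            TAILQ_INSERT_TAIL(dq, bp, b_freelist);
    }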
Small systems will not notice much (if any) benefit from this, but really
busy systems with large dirty block lists should get a lot more.
I've tested this with softdep, and it doesn't seem to mind the change
in metadata queueing.
Reviewed (in principle) by: dg
Obtained from: partly from John Dyson's work-in-progress patches in June.
Add a new flags field (we get this for free because of struct packing)
for cleaner management of tailq membership.
We had two spare b_flags slots, but they are a precious resource and may
be needed for other things that are related to other b_flags bits. The two
new flags are convenient to use in a separate location.
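As a sketch of the idea (the field placement and the flag names are
illustrative, not the committed layout):

    #include <sys/types.h>

    /*
     * A one-byte field slips into existing structure padding, so queue
     * membership does not consume the scarce b_flags bits.
     */
    struct buf_sketch {
            long    b_lblkno;       /* existing fields ... */
            u_char  b_qindex;       /* which free queue the buf is on */
            u_char  b_xflags;       /* new: fits in the padding that follows */
    };

    #define BX_VNDIRTY      0x01    /* on a vnode's dirty tailq */
    #define BX_VNCLEAN      0x02    /* on a vnode's clean tailq */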
Reviewed (in principle) by: dg
Obtained from: John Dyson's old work-in-progress
basic I/O functions; the bit-banging mechanism itself is implemented by
dev/iicbus/iicbb.c (see the sketch below).
immio.c: some bootverbose logs to watch the zip+ connect/disconnect process
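For context, bit-banging a bus of this kind means the hardware driver
only exposes raw line accessors and shared code toggles them to form
bus cycles; a generic illustration (not the actual iicbb interface)
follows.

    /* The bus driver supplies raw line accessors; common code toggles them. */
    struct bb_ops {
            void (*setsda)(void *sc, int val);
            void (*setscl)(void *sc, int val);
            int  (*getsda)(void *sc);
    };

    static void
    bb_send_bit(const struct bb_ops *ops, void *sc, int bit)
    {
            ops->setsda(sc, bit);   /* data may only change while SCL is low */
            ops->setscl(sc, 1);     /* the slave samples SDA on the rising clock */
            ops->setscl(sc, 0);
    }

    /*
     * Shift a byte out MSB first; real code also inserts delays and reads
     * the acknowledge bit back through getsda().
     */
    static void
    bb_send_byte(const struct bb_ops *ops, void *sc, unsigned char data)
    {
            int i;

            for (i = 7; i >= 0; i--)
                    bb_send_bit(ops, sc, (data >> i) & 1);
    }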