His description of the problem and solution follows. My own tests show
speedups of 5% to 12% on typical filesystem-intensive workloads, which
is very impressive considering the small amount of code change involved.
------
One day I noticed that some file operations run much faster on
small file systems than on big ones. I looked at the ffs
algorithms, thought about them, and redesigned the dirpref algorithm.
First I want to describe the results of my tests. These results are old
and I have improved the algorithm since these tests were done. Nevertheless
they show how big the performance speedup may be. I ran two file/directory
intensive tests on two OpenBSD systems with the old and new dirpref algorithms.
The first test is "tar -xzf ports.tar.gz", the second is "rm -rf ports".
The ports.tar.gz file is the ports collection from the OpenBSD 2.8 release.
It contains 6596 directories and 13868 files. The test systems are:
1. Celeron-450, 128MB RAM, two IDE drives; the system is on wd0 and the test
   file system on wd1. The test file system is 8 GB with 991 cylinder groups
   of 8 MB each, block size = 8k, fragment size = 1k. OpenBSD-current
   from Dec 2000 with BUFCACHEPERCENT=35.
2. PIII-600, 128MB RAM, two IBM DTLA-307045 IDE drives on an i815e; the system
   is on wd0 and the test file system on wd1. The test file system is 40 GB
   with 5324 cylinder groups of 8 MB each, block size = 8k, fragment size = 1k.
   OpenBSD-current from Dec 2000 with BUFCACHEPERCENT=50.
You can get more info about the test systems and methods at:
http://www.ptci.ru/gluk/dirpref/old/dirpref.html
Test Results

                tar -xzf ports.tar.gz              rm -rf ports
mode        old dirpref  new dirpref  speedup  old dirpref  new dirpref  speedup
First system
normal          667          472        1.41       477          331        1.44
async           285          144        1.98       130           14        9.29
sync            768          616        1.25       477          334        1.43
softdep         413          252        1.64       241           38        6.34
Second system
normal          329           81        4.06     263.5         93.5        2.81
async           302         25.7       11.75       112         2.26       49.56
sync            281         57.0        4.93       263         90.5        2.9
softdep         341         40.6        8.4        284         4.76       59.66
"old dirpref" and "new dirpref" columns give a test time in seconds.
speedup - speed increasement in times, ie. old dirpref / new dirpref.
------
Algorithm description
The old dirpref algorithm is described in comments:
/*
* Find a cylinder to place a directory.
*
* The policy implemented by this algorithm is to select from
* among those cylinder groups with above the average number of
* free inodes, the one with the smallest number of directories.
*/
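To make the effect of that policy concrete, here is a minimal standalone
sketch of how such a selection can be written (my own illustration using
plain arrays of per-cylinder-group summaries, not the actual kernel code):

#include <limits.h>

/*
 * Sketch of the old policy: among cylinder groups that have at least
 * the average number of free inodes, pick the one currently holding
 * the fewest directories.  The parent directory's location is ignored.
 */
static int
old_dirpref(const long *nifree, const long *ndir, int ncg)
{
	long avgifree = 0, minndir = LONG_MAX;
	int cg, mincg = 0;

	for (cg = 0; cg < ncg; cg++)
		avgifree += nifree[cg];
	avgifree /= ncg;

	for (cg = 0; cg < ncg; cg++)
		if (nifree[cg] >= avgifree && ndir[cg] < minndir) {
			mincg = cg;
			minndir = ndir[cg];
		}
	return (mincg);			/* allocate the new directory here */
}

Because the parent's cylinder group never enters the decision, consecutive
new directories tend to land in different cylinder groups.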
A new directory is allocated in a different cylinder group than its
parent directory, resulting in a directory tree that is spread across
all the cylinder groups. This spreading out results in non-optimal
access to the directories and files. On a small filesystem
this is not a problem, but on a big filesystem the performance
degradation becomes very apparent.
What do I mean by a big file system?
1. A big filesystem occupies 20-30 or more percent of total drive space,
   i.e. the first and last cylinders are physically located relatively far
   from each other.
2. It has a relatively large number of cylinder groups, for example
   more cylinder groups than 50% of the buffers in the buffer cache.
The first results in long access times, while the second results in
many buffers being used by metadata operations. Such operations use
cylinder group blocks and on-disk inode blocks. The cylinder group
block (fs->fs_cblkno) contains struct cg, inode and block bit maps.
It is 2k in size for the default filesystem parameters. If new and
parent directories are located in different cylinder groups then the
system performs more input/output operations and uses more buffers.
On filesystems with many cylinder groups, lots of cache buffers are
used for metadata operations.
My solution to this problem is very simple: I allocate many directories
in one cylinder group. I also take care that the new allocation
method does not cause excessive fragmentation and that directory inodes
do not end up located far from their files' inodes and data.
The algorithm is:
/*
* Find a cylinder group to place a directory.
*
* The policy implemented by this algorithm is to allocate a
* directory inode in the same cylinder group as its parent
* directory, but also to reserve space for its files inodes
* and data. Restrict the number of directories which may be
* allocated one after another in the same cylinder group
* without intervening allocation of files.
*
* If we allocate a first level directory then force allocation
* in another cylinder group.
*/
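Roughly, the selection can be pictured as in the sketch below (again my
own standalone illustration, not the committed code): start at the
parent's cylinder group and take the first group that still has a
reasonable reserve of free inodes and free blocks and whose run of
consecutive directory creations (the fs_contigdirs counter described
below) has not yet reached maxcontigdirs; otherwise fall back to an
old-style global choice. The special case that forces first level
directories into another cylinder group is omitted here.

/*
 * Sketch of the new policy.  nifree/nbfree/ndir/contigdirs are per-cg
 * summaries, minifree/minbfree are reserve thresholds, and
 * old_dirpref() is the sketch shown earlier.  All names are
 * illustrative, not the real identifiers.
 */
static int
new_dirpref(const long *nifree, const long *nbfree, const long *ndir,
    const unsigned char *contigdirs, int ncg, int parentcg,
    long minifree, long minbfree, int maxcontigdirs)
{
	int i, cg;

	/* Scan from the parent's cg forward, wrapping around once. */
	for (i = 0; i < ncg; i++) {
		cg = (parentcg + i) % ncg;
		if (nifree[cg] >= minifree &&
		    nbfree[cg] >= minbfree &&
		    contigdirs[cg] < maxcontigdirs)
			return (cg);
	}
	/* No group has enough reserve; fall back to the old-style rule. */
	return (old_dirpref(nifree, ndir, ncg));
}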
My early versions of dirpref gave me good results over a wide range of
file operations and filesystem capacities, except in one case:
applications that create their entire directory structure first
and only later fill this structure with files.
My solution for such and similar cases is to limit the number of
directories which may be created one after another in the same cylinder
group without intervening file creations. For this purpose, I allocate
an array of counters at mount time. This array is linked to the superblock
as fs->fs_contigdirs[cg]. Each time a directory is created the counter for
its cylinder group increases, and each time a file is created the counter
decreases. A 60GB filesystem with 8MB cylinder groups requires about 10KB
of memory for the counters array.
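The bookkeeping itself is tiny; a sketch of the idea, with one byte-sized
counter per cylinder group (names assumed, not the real identifiers):

/*
 * Creating a directory in a cylinder group bumps its counter; creating
 * a regular file there drains it, so a long run of directory creations
 * without intervening file creations shows up as a high count.
 */
static void
contigdirs_note(unsigned char *contigdirs, int cg, int is_directory)
{
	if (is_directory) {
		if (contigdirs[cg] < 255)
			contigdirs[cg]++;
	} else if (contigdirs[cg] > 0)
		contigdirs[cg]--;
}

One byte per cylinder group also matches the quoted figure: a 60GB
filesystem with 8MB cylinder groups has about 7700 groups, i.e. on the
order of 10KB of counters.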
maxcontigdirs is the maximum number of directories which may be created
without an intervening file creation. I found in my tests that the best
performance occurs when I restrict the number of directories in one cylinder
group such that all of their files may be located in the same cylinder group.
There may be some deterioration in performance if all the file inodes
are in the same cylinder group as their containing directory, but their
data partially resides in a different cylinder group. The maxcontigdirs
value is calculated to try to prevent this condition. Since there is
no way to know how many files and directories will be allocated later,
I added two optimization parameters to the superblock/tunefs. They are:
int32_t fs_avgfilesize; /* expected average file size */
int32_t fs_avgfpdir; /* expected # of files per directory */
These parameters have reasonable defaults but may be tweaked for special
uses of a filesystem. They are only necessary in rare cases, such as
tuning a filesystem used to store a squid cache.
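For illustration, one way maxcontigdirs can be derived from these tunables
(a sketch under the assumption that a directory's files should still fit in
the cylinder group; the exact expression in the committed code may differ):

/*
 * Estimate how many directories fit in one cylinder group before their
 * files would spill into another group.  cgsize is the data capacity
 * of a cylinder group in bytes.
 */
static int
maxcontigdirs_guess(long cgsize, long avgfilesize, long avgfpdir)
{
	long dirsize, n;

	dirsize = avgfilesize * avgfpdir;   /* space one directory's files need */
	if (dirsize <= 0)
		return (1);
	n = cgsize / dirsize;
	if (n < 1)
		n = 1;
	if (n > 255)                        /* the counters are one byte wide */
		n = 255;
	return ((int)n);
}

For example, with 8MB cylinder groups and an assumed 16KB average file and
64 files per directory, this would allow 8 directories in a group before
forcing the next one elsewhere.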
I have been using this algorithm for about 3 months. I have done
a lot of testing on filesystems with different capacities, average
file sizes, average numbers of files per directory, and so on. I think
this algorithm has no negative impact on filesystem performance. It
works better than the default one in all cases. The new dirpref
will greatly improve untarring/removing/copying of big directories,
decrease the load on cvs servers, and much more. The new dirpref doesn't
speed up the compilation process, but it doesn't slow it down either.
Obtained from: Grigoriy Orlov <gluk@ptci.ru>
[I first added this functionality, and thought to check prior art. Seeing
OpenBSD had already done this, I changed my addition to reduce the diffs
between the two and went with their option letter.]
Obtained from: OpenBSD
So bump the default from `16' to `22', which is the largest value allowed
with the current default block size. This change increases the
group size from 32MB/g to 44MB/g on a 4GB SCSI disk.
the mount is completely active, causing the next few commands attempting
to manipulate data on the mount to fail. mount_mfs's parent now tries
to wait for the mount point st_dev to change before returning, indicating
that the mount has gone active.
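The waiting described here amounts to polling stat(2) on the mount point
until its st_dev differs from the value recorded before the mount was
started; a rough sketch of such a loop (my illustration, not the actual
mount_mfs source):

#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Poll until the mount point's st_dev changes from what it was before
 * the mount was started, indicating that the MFS mount has gone active.
 * Gives up after a fixed number of tries.
 */
static int
wait_for_mount(const char *mntpoint, dev_t olddev)
{
	struct stat st;
	int tries;

	for (tries = 0; tries < 100; tries++) {
		if (stat(mntpoint, &st) == 0 && st.st_dev != olddev)
			return (0);     /* mount is active */
		usleep(100000);         /* 0.1 s between polls */
	}
	return (-1);                    /* timed out */
}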
and we don't use the frags info, so why bother? More to the point, it
seems to result in an EXDEV error when the label is written out and we
lose because of it (don't know why though). This is a work-around and
is marked as such.
Add -v flag to newfs:
-v Specify that the partition does not contain any slices, and that
newfs should treat the whole partition as the file system. This
option is useful for synthetic disks such as ccd and vinum.
in rev.1.9). fsck uses the per-partition ffs-related information
in the label to find alternate superblocks when the main superblock
is hosed. Rev.1.9 broke this by deleting the code that wrote the
label.
PR: 2537
xref: fsck/setup.c rev.1.8
This makes configuration of mfs /tmp on diskless clients more intuitive
for people like me who have used this feature on NetBSD and SunOS.
Using the -T option and /dev/null, while already supported,
is neither intuitive nor documented in the handbook.
Obtained from: NetBSD
the sd & od drivers. There is also slight changes to fdisk & newfs
in order to comply with different sectorsizes.
Currently sectors of size 512, 1024 & 2048 are supported, the only
restriction beeing in fdisk, which hunts for the sectorsize of
the device.
This is based on patches to od.c and the other system files by
John Gumb & Barry Scott, with minor changes and the sd.c patches by
me.
There also exist some patches for the msdos filesystem code, but I
haven't been able to test those (yet).
John Gumb (john@talisker.demon.co.uk)
Barry Scott (barry@scottb.demon.co.uk)
LKM loading if it was not configured into the system.
Note that the LKM for MFS is not enabled by default, but I got it working on
my machine.. I'll see what I did..
in a couple of cases, and it doesn't do much anyway. It used to save only
the newfs params (block/frag/cgroup...) and nothing more, something that
doesn't belong in a disklabel in the first place.
We pretend we have one head with two megabytes' worth of sectors per cylinder.
The code tries to access another head in what it believes to be the same
physical cylinder, because it believes that this would be faster than
waiting for the next free sector under this head to come around.
Most modern drives don't have a "classical" geometry, and thus
we end up fooling ourselves doing the above optimization. With this
change we will fill a cylinder sequentially if we can, and thus get
much more mileage from the track-buffer/cache built into the drives.
As a result a lot of seeks to the next or previous track should be
avoided.
(My disk is a lot less noisy actually...)
You can still get the old behaviour by specifying zero for the
numbers.
This will also solve the problem with newfs barfing at really big
drives.
Obtained from: adult advice from Kirk.
most common cd9660 and nfs options like God intended them. (It is now
possible to say
mount -o ro,soft,bg,intr there:/foo/bar /foo/bar
again.) This whole getmntopt() business is an incredible botch;
it never should have been anything more than a wrapper around
getsubopt(3). Because of the way the current hackaround is implemented,
options which take arguments (like the old `rsize' and `wsize') are still
unavailable, and must be accessed the new, broken way.
(It's unimaginable how Berkeley managed to screw up one of the few things
about NFS that Sun actually got right to begin with!)
being output if <= 1 rpos; there is a bug in the kernel which doesn't
quite get along with this. Changed default #rpos to 1, and fixed up
manual page. Converted nrpos to 1 if user specifies 0.
the use of the rotational position table.
2) Allow specification of 0 rotational positions (disables function).
3) Make rotdelay=0 and nrpos=0 by default.
The purpose of the above is to optimize for modern SCSI (and IDE) drives
that do read-ahead/write-behind.