Add an option (-n) to suppress the creation of a .snap directory on
the new filesystem. This is intended for memory and vnode filesystems
that will never be fsck'ed or dumped.
Obtained from: St. Bernard Software RAPID
MFC after: 2 weeks
Add a -l flag to newfs, which permits users of newfs to set the
multilabel flag on UFS1 and UFS2 file systems from inception without
using tunefs.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, McAfee Research
FreeBSD 5.1-RELEASE and later:
- newfs(8) will now create UFS2 file systems unless UFS1 is specifically
requested (-O1). To do this, I just twiddled the Oflag default.
- sysinstall(8) will now select UFS2 as the default layout for new
file systems unless specifically requested (use '1' and '2' to change
the file system layout in the disk labeler). To do this, I inverted
the ufs2 flag into a ufs1 flag, since ufs2 is now the default and
ufs1 is the edge case. There's a slight semantic change in the
key behavior: '2' no longer toggles, it changes the selection to UFS2.
This is very similar to a patch David O'Brien sent me at one point, and
that I couldn't find.
Approved by: re (telecon)
Reviewed by: mckusick, phk, bmah
version of such. Differences in the generated filesystems were found
to come from 1) sbwrite with the "all" parameter and 2) removal of the
write cache. The sbwrite call was made to perform as in the original
version, and the result was otherwise checked against a version of
newfs with the write cache removed.
that the kernel will refuse to mount. Specifically it now enforces
the MAXBSIZE blocksize limit. This update also fixes a problem where
newfs could suffer a segmentation fault if the selected fragment size
was too large.
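A minimal sketch of the sort of validation involved (the function and
variable names are illustrative, not the actual newfs source; MAXBSIZE
comes from <sys/param.h>):

    #include <sys/param.h>
    #include <err.h>

    static void
    validate_sizes(long bsize, long fsize)
    {
            /* The kernel refuses to mount filesystems with a block
             * size above MAXBSIZE, so reject them at creation time. */
            if (bsize > MAXBSIZE)
                    errx(1, "block size %ld exceeds MAXBSIZE (%d)",
                        bsize, MAXBSIZE);
            /* A fragment size larger than the block size is the kind
             * of bogus combination that could trigger the fault. */
            if (fsize > bsize)
                    errx(1, "fragment size %ld > block size %ld",
                        fsize, bsize);
    }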
PR: bin/30959
Submitted by: Ceri Davies <setantae@submonkey.net>
Sponsored by: DARPA & NAI Labs.
filesystem expands the inode to 256 bytes to make space for 64-bit
block pointers. It also adds a file-creation time field, an ability
to use jumbo blocks per inode to allow extent-like pointer density,
and space for extended attributes (up to twice the filesystem block
size worth of attributes, e.g., on a 16K filesystem, there is space
for 32K of attributes). UFS2 fully supports and runs existing UFS1
filesystems. New filesystems built using newfs can be built in either
UFS1 or UFS2 format using the -O option. In this commit UFS1 is
the default format, so if you want to build UFS2 format filesystems,
you must specify -O 2. This default will be changed to UFS2 when
UFS2 proves itself to be stable. In this commit the boot code for
reading UFS2 filesystems is not compiled (see /sys/boot/common/ufsread.c)
as there is insufficient space in the boot block. Once the size of the
boot block is increased, this code can be defined.
Things to note: the definition of SBSIZE has changed to SBLOCKSIZE.
The header file <ufs/ufs/dinode.h> must be included before
<ufs/ffs/fs.h> so as to get the definitions of ufs2_daddr_t and
ufs_lbn_t.
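For example, a consumer of the on-disk structures would now write
(the <sys/param.h> include is shown only for the usual prerequisite
types and is an assumption of this sketch):

    #include <sys/param.h>
    #include <ufs/ufs/dinode.h> /* defines ufs2_daddr_t and ufs_lbn_t */
    #include <ufs/ffs/fs.h>     /* uses those types, so must come second */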
Still TODO:
Verify that the first level bootstraps work for all the architectures.
Convert the utility ffsinfo to understand UFS2 and test growfs.
Add support for the extended attribute storage. Update soft updates
to ensure integrity of extended attribute storage. Switch the
current extended attribute interfaces to use the extended attribute
storage. Add the extent-like functionality (framework is there,
but is currently never used).
Sponsored by: DARPA & NAI Labs.
Reviewed by: Poul-Henning Kamp <phk@freebsd.org>
Use only one file descriptor. Open it read-only or read/write based on
the '-N' option. Make the file descriptor a global variable instead of
passing it around as semi-global variable(s).
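A rough sketch of the resulting pattern (the variable and function
names here are hypothetical, not the actual newfs code):

    #include <fcntl.h>
    #include <err.h>

    static int fso;             /* the single, global file descriptor */

    static void
    opendisk(const char *special, int Nflag)
    {
            /* With -N nothing is ever written, so read-only access
             * suffices; otherwise open read/write. */
            fso = open(special, Nflag ? O_RDONLY : O_RDWR);
            if (fso < 0)
                    err(1, "%s", special);
    }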
Remove the undocumented ability to specify type without '-T' option.
Replace fatal() with straight err(3)/errx(3), saving explicit calls to
strerror() where applicable. Lose the progname variable.
Get the sense of the cpgflag test correct so we only issue warnings if
people specify cpg and can't get that. It can be argued that this
should be an error.
Remove the check to see if the disk is mounted: Open for writing
would fail if it were mounted.
Attempt to get the sectorsize and mediasize with the generic disk
ioctls; fall back to disklabel and /etc/disktab as best we can.
Notice that on-disk labels still take precedence over /etc/disktab,
this is probably wrong, but not as wrong as the entire concept of
/etc/disktab is.
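The preferred path might look roughly like this (a sketch only; the
fallback logic and the surrounding newfs plumbing are omitted, and the
function name is hypothetical):

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/disk.h>

    static int
    getdisksizes(int fd, u_int *secsize, off_t *mediasize)
    {
            /* DIOCGSECTORSIZE and DIOCGMEDIASIZE are the generic disk
             * ioctls; if either fails, the caller falls back to the
             * disklabel and then /etc/disktab. */
            if (ioctl(fd, DIOCGSECTORSIZE, secsize) == -1 ||
                ioctl(fd, DIOCGMEDIASIZE, mediasize) == -1)
                    return (-1);
            return (0);
    }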
Sponsored by: DARPA & NAI Labs.
particular as there may not be one. Remove #if 0'ed code which might
mislead people to think otherwise.
unifdef -ULOSTDIR, fsck can make lost+found on the fly.
Sponsored by: DARPA & NAI Labs
diskdrives neither need nor want:
-O create a 4.3BSD format filesystem
-d rotational delay between contiguous blocks
-k sector 0 skew, per track
-l hardware sector interleave
-n number of distinguished rotational positions
-p spare sectors per track
-r revolutions/minute
-t tracks/cylinder
-x spare sectors per cylinder
No change in the produced filesystem image unless one or more of
these options were used.
Approved by: mckusick
Add a couple of simple regression tests accessible with "make test";
they depend on the md(4) driver.
FYI, I have also tried running the test against a week-old newfs and it
passed.
Bump the default filesystem block/fragment sizes up to 16384/2048.
Following recent discussions on the -arch mailing list, involving dillon
and mckusick, this change parallels the one made over a decade ago when
the default was bumped up from 4096/512.
This should provide significant performance improvements for most
folks, less significant performance losses for a few folks and
wasted space lost to large fragments for many folks.
For discussion, please see the following thread in the -arch archive:
Subject: Using a larger block size on large filesystems
The discussion ceases to be relevant when the issue of partitioning
schemes is raised.
His description of the problem and solution follows. My own tests show
speedups on typical filesystem-intensive workloads of 5% to 12%, which
is very impressive considering the small amount of code change involved.
------
One day I noticed that some file operations run much faster on
small file systems than on big ones. I've looked at the ffs
algorithms, thought about them, and redesigned the dirpref algorithm.
First I want to describe the results of my tests. These results are old
and I have improved the algorithm after these tests were done. Nevertheless
they show how big the performance speedup may be. I have done two
file/directory intensive tests on two OpenBSD systems with the old and
new dirpref algorithms.
The first test is "tar -xzf ports.tar.gz", the second is "rm -rf ports".
The ports.tar.gz file is the ports collection from the OpenBSD 2.8 release.
It contains 6596 directories and 13868 files. The test systems are:
1. Celeron-450, 128Mb, two IDE drives, the system at wd0, file system for
test is at wd1. Size of test file system is 8 Gb, number of cg=991,
size of cg is 8m, block size = 8k, fragment size = 1k OpenBSD-current
from Dec 2000 with BUFCACHEPERCENT=35
2. PIII-600, 128Mb, two IBM DTLA-307045 IDE drives at i815e, the system
at wd0, file system for test is at wd1. Size of test file system is 40 Gb,
number of cg=5324, size of cg is 8m, block size = 8k, fragment size = 1k
OpenBSD-current from Dec 2000 with BUFCACHEPERCENT=50
You can get more info about the test systems and methods at:
http://www.ptci.ru/gluk/dirpref/old/dirpref.html
Test Results
                 tar -xzf ports.tar.gz             rm -rf ports
mode        old dirpref  new dirpref  speedup  old dirpref  new dirpref  speedup
First system
normal          667          472        1.41       477          331        1.44
async           285          144        1.98       130           14        9.29
sync            768          616        1.25       477          334        1.43
softdep         413          252        1.64       241           38        6.34
Second system
normal          329           81        4.06      263.5         93.5       2.81
async           302           25.7     11.75      112            2.26     49.56
sync            281           57.0      4.93      263           90.5       2.9
softdep         341           40.6      8.4       284            4.76     59.66
"old dirpref" and "new dirpref" columns give a test time in seconds.
speedup - speed increasement in times, ie. old dirpref / new dirpref.
------
Algorithm description
The old dirpref algorithm is described in comments:
/*
* Find a cylinder to place a directory.
*
* The policy implemented by this algorithm is to select from
* among those cylinder groups with above the average number of
* free inodes, the one with the smallest number of directories.
*/
A new directory is allocated in a different cylinder group than its
parent directory, resulting in a directory tree that is spread across
all the cylinder groups. This spreading out results in non-optimal
access to the directories and files. When we have a small filesystem
it is not a problem, but when the filesystem is big the performance
degradation becomes very apparent.
What do I mean by a big file system?
1. A big filesystem is a filesystem which occupies 20-30 percent or more
of total drive space, i.e. first and last cylinder are physically
located relatively far from each other.
2. It has a relatively large number of cylinder groups, for example
more cylinder groups than 50% of the buffers in the buffer cache.
The first results in long access times, while the second results in
many buffers being used by metadata operations. Such operations use
cylinder group blocks and on-disk inode blocks. The cylinder group
block (fs->fs_cblkno) contains struct cg, inode and block bit maps.
It is 2k in size for the default filesystem parameters. If new and
parent directories are located in different cylinder groups then the
system performs more input/output operations and uses more buffers.
On filesystems with many cylinder groups, lots of cache buffers are
used for metadata operations.
My solution for this problem is very simple. I allocate many directories
in one cylinder group. I also take steps to ensure that the new allocation
method does not cause excessive fragmentation and that directory inodes
are not placed far from their files' inodes and data.
The algorithm is:
/*
* Find a cylinder group to place a directory.
*
* The policy implemented by this algorithm is to allocate a
* directory inode in the same cylinder group as its parent
* directory, but also to reserve space for its files inodes
* and data. Restrict the number of directories which may be
* allocated one after another in the same cylinder group
* without intervening allocation of files.
*
* If we allocate a first level directory then force allocation
* in another cylinder group.
*/
My early versions of dirpref gave me good results for a wide range of
file operations and different filesystem capacities except one case:
those applications that create their entire directory structure first
and only later fill this structure with files.
My solution for such and similar cases is to limit the number of
directories which may be created one after another in the same cylinder
group without intervening file creations. For this purpose, I allocate
an array of counters at mount time. This array is linked to the superblock
fs->fs_contigdirs[cg]. Each time a directory is created the counter
increases and each time a file is created the counter decreases. A 60Gb
filesystem with 8mb/cg requires 10kb of memory for the counters array.
maxcontigdirs is the maximum number of directories which may be created
without an intervening file creation. I found in my tests that the best
performance occurs when I restrict the number of directories in one cylinder
group such that all its files may be located in the same cylinder group.
There may be some deterioration in performance if all the file inodes
are in the same cylinder group as their containing directory, but their
data partially resides in a different cylinder group. The maxcontigdirs
value is calculated to try to prevent this condition. Since there is
no way to know how many files and directories will be allocated later
I added two optimization parameters in superblock/tunefs. They are:
int32_t fs_avgfilesize; /* expected average file size */
int32_t fs_avgfpdir; /* expected # of files per directory */
These parameters have reasonable defaults but may be tweaked for special
uses of a filesystem. They are only necessary in rare cases, such as
tuning a filesystem used to store a squid cache.
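A deliberately simplified sketch of the bookkeeping (struct fs_lite and
the function names are hypothetical stand-ins; the real logic lives in
ffs_dirpref() in sys/ufs/ffs/ffs_alloc.c):

    #include <sys/types.h>

    struct fs_lite {                    /* stand-in for struct fs */
            u_int8_t *fs_contigdirs;    /* one counter per cylinder group */
            int32_t   fs_avgfilesize;   /* expected average file size */
            int32_t   fs_avgfpdir;      /* expected # of files per directory */
    };

    /* Each directory created in cylinder group cg bumps its counter... */
    static void
    note_dir_created(struct fs_lite *fs, int cg)
    {
            if (fs->fs_contigdirs[cg] < 255)
                    fs->fs_contigdirs[cg]++;
    }

    /* ...and each file created there drains it again. */
    static void
    note_file_created(struct fs_lite *fs, int cg)
    {
            if (fs->fs_contigdirs[cg] > 0)
                    fs->fs_contigdirs[cg]--;
    }

    /* A cg stops accepting new directories once too many have been
     * created without intervening file creations. */
    static int
    cg_dir_quota_reached(struct fs_lite *fs, int cg, int maxcontigdirs)
    {
            return (fs->fs_contigdirs[cg] >= maxcontigdirs);
    }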
I have been using this algorithm for about 3 months. I have done
a lot of testing on filesystems with different capacities, average
filesize, average number of files per directory, and so on. I think
this algorithm has no negative impact on filesystem performance. It
works better than the default one in all cases. The new dirpref
will greatly improve untarring/removing/copying of big directories,
decrease the load on cvs servers, and much more. The new dirpref doesn't
speed up the compilation process, but it doesn't slow it down either.
Obtained from: Grigoriy Orlov <gluk@ptci.ru>
[I first added this functionality, and thought to check prior art. Seeing
OpenBSD had already done this, I changed my addition to reduce the diffs
between the two and went with their option letter.]
Obtained from: OpenBSD
So bump the default from `16' to `22', which is the largest value allowed
with the current default block size. This change increases the group size
from 32MB/g to 44MB/g (i.e. by a factor of 22/16) on a 4GB SCSI disk.