.hlm 0
.\" Copyright (c) 2001, Matthew Dillon. Terms and conditions are those of
.\" the BSD Copyright as specified in the file "/usr/src/COPYRIGHT" in
.\" the source tree.
.\"
.\" $FreeBSD$
.\"
.Dd May 25, 2001
.Dt TUNING 7
.Os
.Sh NAME
.Nm tuning
.Nd performance tuning under FreeBSD
.Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
When using
.Xr disklabel 8
to lay out your filesystems on a hard disk it is important to remember
that hard drives can transfer data much more quickly from outer tracks
than they can from inner tracks.
To take advantage of this you should
try to pack your smaller filesystems and swap closer to the outer tracks,
follow with the larger filesystems, and end with the largest filesystems.
It is also important to size system standard filesystems such that you
will not be forced to resize them later as you scale the machine up.
I usually create, in order, a 128M root, 1G swap, 128M
.Pa /var ,
128M
.Pa /var/tmp ,
3G
.Pa /usr ,
and use any remaining space for
.Pa /home .
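.Pp
As a rough sketch only (the partition letters and sizes are illustrative
assumptions, not
.Xr disklabel 8
syntax), such a layout looks like:
.Bd -literal -offset indent
a:  128M   /         (root)
b:  1G     swap
d:  128M   /var
e:  128M   /var/tmp
f:  3G     /usr
g:  rest   /home
.Ed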
.Pp
You should typically size your swap space to approximately 2x main memory.
If you do not have a lot of RAM, though, you will generally want a lot
more swap.
It is not recommended that you configure any less than
256M of swap on a system and you should keep in mind future memory
expansion when sizing the swap partition.
The kernel's VM paging algorithms are tuned to perform best when there is
at least 2x swap versus main memory.
Configuring too little swap can lead
to inefficiencies in the VM page scanning code as well as create issues
later on if you add more memory to your machine.
Finally, on larger systems
with multiple SCSI disks (or multiple IDE disks operating on different
controllers), we strongly recommend that you configure swap on each drive
(up to four drives).
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
Keeping
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Don't worry about overdoing it a
little; swap space is the saving grace of
.Ux
and even if you don't normally use much swap, it can give you more time to
recover from a runaway program before being forced to reboot.
.Pp
How you size your
.Pa /var
partition depends heavily on what you intend to use the machine for.
This
partition is primarily used to hold mailboxes, the print spool, and log
files.
Some people even make
.Pa /var/log
its own partition (but except for extreme cases it isn't worth the waste
of a partition ID).
If your machine is intended to act as a mail
or print server,
or you are running a heavily visited web server, you should consider
creating a much larger partition \(en perhaps a gig or more.
It is very easy
to underestimate log file storage requirements.
.Pp
Sizing
.Pa /var/tmp
depends on the kind of temporary file usage you think you will need.
128M is
the minimum we recommend.
Also note that sysinstall will create a
.Pa /tmp
directory, but it is usually a good idea to make
.Pa /tmp
a softlink to
.Pa /var/tmp
after the fact.
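For example (a sketch only; make sure nothing important is still living in
the old
.Pa /tmp
before moving it aside):
.Bd -literal -offset indent
# mv /tmp /tmp.old
# ln -s /var/tmp /tmp
.Ed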
Dedicating a partition for temporary file storage is important for
two reasons: first, it reduces the possibility of filesystem corruption
in a crash, and second it reduces the chance of a runaway process that
fills up
.Oo Pa /var Oc Ns Pa /tmp
from blowing up more critical subsystems (mail,
logging, etc).
Filling up
.Oo Pa /var Oc Ns Pa /tmp
is a very common problem to have.
.Pp
In the old days there were differences between
.Pa /tmp
and
.Pa /var/tmp ,
but the introduction of
.Pa /var
(and
.Pa /var/tmp )
led to massive confusion
by program writers, so today programs haphazardly use one or the
other and thus no real distinction can be made between the two.
So it makes sense to have just one temporary directory.
However you handle
.Pa /tmp ,
the one thing you do not want to do is leave it sitting
on the root partition where it might cause root to fill up or possibly
corrupt root in a crash/reboot situation.
.Pp
The
.Pa /usr
partition holds the bulk of the files required to support the system and
a subdirectory within it called
.Pa /usr/local
holds the bulk of the files installed from the
.Xr ports 7
hierarchy.
If you do not use ports all that much and do not intend to keep
system source
.Pq Pa /usr/src
on the machine, you can get away with
a 1 gigabyte
.Pa /usr
partition.
However, if you install a lot of ports
(especially window managers and linux-emulated binaries), we recommend
at least a 2 gigabyte
.Pa /usr
and if you also intend to keep system source
on the machine, we recommend a 3 gigabyte
.Pa /usr .
Do not underestimate the
amount of space you will need in this partition; it can creep up and
surprise you!
.Pp
The
.Pa /home
partition is typically used to hold user-specific data.
I usually size it to the remainder of the disk.
.Pp
Why partition at all?
Why not create one big
.Pa /
partition and be done with it?
Then I don't have to worry about undersizing things!
Well, there are several reasons this isn't a good idea.
First,
each partition has different operational characteristics and separating them
allows the filesystem to tune itself to those characteristics.
For example,
the root and
.Pa /usr
partitions are read-mostly, with very little writing, while
a lot of reading and writing could occur in
.Pa /var
and
.Pa /var/tmp .
By properly
partitioning your system, fragmentation introduced in the smaller, more
heavily write-loaded partitions will not bleed over into the mostly-read
partitions.
Additionally, keeping the write-loaded partitions closer to
the edge of the disk (i.e. before the really big partitions instead of after
in the partition table) will increase I/O performance in the partitions
where you need it the most.
Now it is true that you might also need I/O
performance in the larger partitions, but they are so large that shifting
them more towards the edge of the disk will not lead to a significant
performance improvement, whereas moving
.Pa /var
to the edge can have a huge impact.
Finally, there are safety concerns.
Having a small neat root partition that
is essentially read-only gives it a greater chance of surviving a bad crash
intact.
.Pp
Properly partitioning your system also allows you to tune
.Xr newfs 8
and
.Xr tunefs 8
parameters.
Tuning
.Xr newfs 8
requires more experience but can lead to significant improvements in
performance.
There are three parameters that are relatively safe to tune:
.Em blocksize , bytes/inode ,
and
.Em cylinders/group .
.Pp
.Fx
performs best when using 8K or 16K filesystem block sizes.
The default filesystem block size is 8K.
For larger partitions it is usually a good
idea to use a 16K block size.
This also requires you to specify a larger
fragment size.
We recommend always using a fragment size that is 1/8
the block size (less testing has been done on other fragment size factors).
The
.Xr newfs 8
options for this would be
.Dq Li "newfs -f 2048 -b 16384 ..." .
Using a larger block size can cause fragmentation of the buffer cache and
lead to lower performance.
.Pp
If a large partition is intended to be used to hold fewer, larger files, such
as database files, you can increase the
.Em bytes/inode
ratio, which reduces the number of inodes (maximum number of files and
directories that can be created) for that partition.
Decreasing the number
of inodes in a filesystem can greatly reduce
.Xr fsck 8
recovery times after a crash.
Do not use this option
unless you are actually storing large files on the partition, because if you
overcompensate you can wind up with a filesystem that has lots of free
space remaining but cannot accommodate any more files.
Using 32768, 65536, or 262144 bytes/inode is recommended.
You can go higher but
it will have only incremental effects on
.Xr fsck 8
recovery times.
For example,
.Dq Li "newfs -i 32768 ..." .
.Pp
Finally, increasing the
.Em cylinders/group
ratio has the effect of packing the inodes closer together.
This can increase directory performance and also decrease
.Xr fsck 8
times.
If you use this option at all, we recommend maxing it out.
Use
.Dq Li "newfs -c 999"
and
.Xr newfs 8
will error out and tell you what the maximum is, then use that.
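.Pp
Putting these options together, a hypothetical
.Xr newfs 8
invocation for a large partition holding big files might look like
(the device name is purely illustrative):
.Bd -literal -offset indent
# newfs -b 16384 -f 2048 -i 32768 -c 999 /dev/da0s1f
.Ed
.Pp
As noted above, the
.Fl c
value of 999 is intentionally too large;
.Xr newfs 8
will report the maximum it accepts and you then re-run the command with
that value.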
.Pp
.Xr tunefs 8
may be used to further tune a filesystem.
This command can be run in
single-user mode without having to reformat the filesystem.
However, this is possibly the most abused program in the system.
Many people attempt to
increase available filesystem space by setting the min-free percentage to 0.
This can lead to severe filesystem fragmentation and we do not recommend
that you do this.
Really the only
.Xr tunefs 8
option worthwhile here is turning on
.Em softupdates
with
.Dq Li "tunefs -n enable /filesystem" .
(Note: in
.Fx
5.x
softupdates can be turned on using the
.Fl U
option to
.Xr newfs 8 ) .
Softupdates drastically improves meta-data performance, mainly file
creation and deletion.
We recommend enabling softupdates on all of your
filesystems.
There are two downsides to softupdates that you should be
aware of.
First, softupdates guarantees filesystem consistency in the
case of a crash but could very easily be several seconds (even a minute!)
behind updating the physical disk.
If you crash you may lose more work
than otherwise.
Secondly, softupdates delays the freeing of filesystem
blocks.
If you have a filesystem (such as the root filesystem) which is
close to full, doing a major update of it, e.g.\&
.Dq Li "make installworld" ,
can run it out of space and cause the update to fail.
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Don't ever use it; it is far too dangerous.
A less dangerous and more
useful
.Xr mount 8
option is called
.Cm noatime .
.Ux
filesystems normally update the last-accessed time of a file or
directory whenever it is accessed.
This operation is handled in
.Fx
with a delayed write and normally does not create a burden on the system.
However, if your system is accessing a huge number of files on a continuing
basis the buffer cache can wind up getting polluted with atime updates,
creating a burden on the system.
For example, if you are running a heavily
loaded web site, or a news server with lots of readers, you might want to
consider turning off atime updates on your larger partitions with this
.Xr mount 8
option.
However, you should not gratuitously turn off atime
updates everywhere.
For example, the
.Pa /var
filesystem customarily
holds mailboxes, and atime (in combination with mtime) is used to
determine whether a mailbox has new mail.
You might as well leave
atime turned on for mostly read-only partitions such as
.Pa /
and
.Pa /usr
as well.
This is especially useful for
.Pa /
since some system utilities
use the atime field for reporting.
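.Pp
For example (an illustrative
.Xr fstab 5
line only; the device name is an assumption), atime updates can be
disabled on a large partition by adding
.Cm noatime
to its options field:
.Bd -literal -offset indent
/dev/da0s1g  /home  ufs  rw,noatime  2  2
.Ed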
.Sh STRIPING DISKS
In larger systems you can stripe partitions from several drives together
to create a much larger overall partition.
Striping can also improve
the performance of a filesystem by splitting I/O operations across two
or more disks.
The
.Xr vinum 8
and
.Xr ccdconfig 8
utilities may be used to create simple striped filesystems.
Generally
speaking, striping smaller partitions such as the root and
.Pa /var/tmp ,
or essentially read-only partitions such as
.Pa /usr
is a complete waste of time.
You should only stripe partitions that require serious I/O performance,
typically
.Pa /var , /home ,
or custom partitions used to hold databases and web pages.
Choosing the proper stripe size is also
important.
Filesystems tend to store meta-data on power-of-2 boundaries
and you usually want to reduce seeking rather than increase seeking.
This
means you want to use a large off-center stripe size such as 1152 sectors
so sequential I/O does not seek both disks and so meta-data is distributed
across both disks rather than concentrated on a single disk.
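For example (a sketch only; the device names and the exact
.Xr ccdconfig 8
arguments are assumptions to adapt to your system), a two-disk stripe with
a 1152 sector interleave might be created along these lines, after which
the new ccd device is labeled and a filesystem is created on it with
.Xr newfs 8 :
.Bd -literal -offset indent
# ccdconfig ccd0 1152 none /dev/da1s1e /dev/da2s1e
.Ed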
If
you really need to get sophisticated, we recommend using a real hardware
RAID controller from the list of
.Fx
supported controllers.
.Sh SYSCTL TUNING
There are several hundred
.Xr sysctl 8
variables in the system, including many that appear to be candidates for
tuning but actually aren't.
In this document we will only cover the ones
that have the greatest effect on the system.
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 0 (off) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
.Pp
The
.Va vfs.vmiodirenable
sysctl defaults to 0 (off) (though soon it will default to 1) and may be
set to 0 (off) or 1 (on).
This parameter controls how directories are cached
by the system.
Most directories are small and use but a single fragment
(typically 1K) in the filesystem and even less (typically 512 bytes) in
the buffer cache.
However, when operating in the default mode the buffer
cache will only cache a fixed number of directories even if you have a huge
amount of memory.
Turning on this sysctl allows the buffer cache to use
the VM Page Cache to cache the directories.
The advantage is that all of
memory is now available for caching directories.
The disadvantage is that
the minimum in-core memory used to cache a directory is the physical page
size (typically 4K) rather than 512 bytes.
We recommend turning this option
on if you are running any services which manipulate large numbers of files.
Such services can include web caches, large mail systems, and news systems.
Turning on this option will generally not reduce performance even with the
wasted memory but you should experiment to find out.
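.Pp
For example (a minimal sketch; whether these settings help depends on your
workload, as discussed above), both sysctls can be inspected and set at
runtime:
.Bd -literal -offset indent
# sysctl vfs.vmiodirenable
# sysctl -w vfs.vmiodirenable=1
# sysctl -w kern.ipc.shm_use_phys=1
.Ed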
.Pp
There are various buffer-cache and VM page cache related sysctls.
We do not recommend messing around with these at all.
As of
.Fx 4.3 ,
the VM system does an extremely good job tuning itself.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
These sysctls control the amount of send and receive buffer space
allowed for any given TCP connection.
The default is 16K.
You can often
improve bandwidth utilization by increasing the default at the cost of
eating up more kernel memory for each connection.
We do not recommend
increasing the defaults if you are serving hundreds or thousands of
simultaneous connections because it is possible to quickly run the system
out of memory due to stalled connections building up.
But if you need
high bandwidth over a smaller number of connections, especially if you have
gigabit ethernet, increasing these defaults can make a huge difference.
You can adjust the buffer size for incoming and outgoing data separately.
For example, if your machine is primarily doing web serving you may want
to decrease the recvspace in order to be able to increase the
sendspace without eating too much kernel memory.
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
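.Pp
As an illustrative sketch only (the rule number, pipe bandwidth, and the
choice of matching outbound traffic from port 80 are assumptions to adapt),
such a limit might be configured with:
.Bd -literal -offset indent
# ipfw pipe 1 config bw 1080Kbit/s
# ipfw add 1000 pipe 1 tcp from any 80 to any out
.Ed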
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will result
in a marginal performance improvement unless both hosts support the window
scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance out of
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
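For example (the buffer sizes shown are illustrative; pick values
appropriate to your link):
.Bd -literal -offset indent
# sysctl -w net.inet.tcp.rfc1323=1
# sysctl -w net.inet.tcp.sendspace=131072
# sysctl -w net.inet.tcp.recvspace=131072
.Ed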
.Pp
We recommend that you turn on (set to 1) and leave on the
.Va net.inet.tcp.always_keepalive
control.
The default is usually off.
This introduces a small amount of
additional network bandwidth but guarantees that dead TCP connections
will eventually be recognized and cleared.
Dead TCP connections are a
particular problem on systems accessed by users operating over dialups,
because users often disconnect their modems without properly closing active
connections.
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
The service daemon
may itself limit the listen queue size (e.g.\&
.Xr sendmail 8 ,
apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
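For example, using the values discussed above:
.Bd -literal -offset indent
# sysctl -w net.inet.tcp.always_keepalive=1
# sysctl -w kern.ipc.somaxconn=1024
.Ed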
.Pp
The
.Va kern.maxfiles
sysctl determines how many open files the system supports.
The default is
typically a few thousand but you may need to bump this up to ten or twenty
thousand if you are running databases or large descriptor-heavy daemons.
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
.Sh BOOT-TIME SYSCTL TUNING
Some sysctls may not be tunable at runtime because the memory allocations
they perform must occur early in the boot process.
To change these sysctls,
you must set their value in
.Xr loader.conf 5
and reboot the system.
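.Pp
For example (an illustrative
.Pa /boot/loader.conf
fragment; the variables and values shown are discussed below and should be
chosen to suit your machine):
.Bd -literal -offset indent
kern.maxusers="256"
kern.ipc.nmbclusters="32768"
.Ed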
.Pp
The
.Va kern.maxusers
sysctl defaults to an incredibly low value.
For most modern machines,
you probably want to increase this value to 64, 128, or 256.
We do not
recommend going above 256 unless you need a huge number of file descriptors.
Network buffers are also affected but can be controlled with a separate
kernel option.
Do not increase maxusers just to get more network mbufs.
Systems older than
.Fx 4.4
do not have this sysctl and require that
the kernel
.Xr config 8
option
.Cd maxusers
be set instead.
.Pp
.Va kern.ipc.nmbclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
You can do a simple calculation to figure out how many you need.
If you have a web server which maxes out at 1000 simultaneous connections,
and each connection eats a 16K receive and 16K send buffer, you need
approximately 32MB worth of network buffers to deal with it.
A good rule of
thumb is to multiply by 2, so 32MB x 2 = 64MB, and 64MB / 2K = 32768 clusters.
So for this case
you would want to set
.Va kern.ipc.nmbclusters
to 32768.
We recommend values between
1024 and 4096 for machines with moderate amounts of memory, and between 4096
and 32768 for machines with greater amounts of memory.
Under no circumstances
should you specify an arbitrarily high value for this parameter, as it could
lead to a boot-time crash.
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
Older versions of
.Fx
do not have this sysctl and require that the
kernel
.Xr config 8
option
.Dv NMBCLUSTERS
be set instead.
.Pp
More and more programs are using the
.Xr sendfile 2
system call to transmit files over the network.
The
.Va kern.ipc.nsfbufs
sysctl controls the number of filesystem buffers
.Xr sendfile 2
is allowed to use to perform its work.
This parameter nominally scales
with
.Va kern.maxusers
so you should not need to mess with this parameter except under extreme
circumstances.
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally the first thing you do when creating your own custom
kernel is to strip out all the drivers and services you don't use.
Removing things like
.Dv INET6
and drivers you don't have will reduce the size of your kernel, sometimes
by a megabyte or more, leaving more memory available for applications.
.Pp
.Dv SCSI_DELAY
and
.Dv IDE_DELAY
may be used to reduce system boot times.
The defaults are fairly high and
can be responsible for 15+ seconds of delay in the boot process.
Reducing
.Dv SCSI_DELAY
to 5 seconds usually works (especially with modern drives).
Reducing
.Dv IDE_DELAY
also works but you have to be a little more careful.
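.Pp
For example (a hypothetical kernel configuration line;
.Dv SCSI_DELAY
is specified in milliseconds, so 5000 corresponds to the 5 seconds
mentioned above):
.Bd -literal -offset indent
options         SCSI_DELAY=5000
.Ed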
.Pp
There are a number of
.Dv *_CPU
options that can be commented out.
If you only want the kernel to run
on a Pentium class CPU, you can easily remove
.Dv I386_CPU
and
.Dv I486_CPU ,
but only remove
.Dv I586_CPU
if you are sure your CPU is being recognized as a Pentium II or better.
Some clones may be recognized as a Pentium or even a 486 and not be able
to boot without those options.
If it works, great!
The operating system
will be able to better use higher-end CPU features for MMU, task switching,
timebase, and even device operations.
Additionally, higher-end CPUs support
4MB MMU pages, which the kernel uses to map the kernel itself into memory,
which increases its efficiency under heavy syscall loads.
.Sh IDE WRITE CACHING
.Fx 4.3
flirted with turning off IDE write caching.
This reduced write bandwidth
to IDE disks but was considered necessary due to serious data consistency
issues introduced by hard drive vendors.
Basically the problem is that
IDE drives lie about when a write completes.
With IDE write caching turned
on, IDE hard drives will not only write data to disk out of order, they
will sometimes delay some of the blocks indefinitely when under heavy disk
loads.
A crash or power failure can result in serious filesystem
corruption.
So our default was changed to be safe.
Unfortunately, the
result was such a huge loss in performance that we caved in and changed the
default back to on after the release.
You should check the default on
your system by observing the
.Va hw.ata.wc
sysctl variable.
If IDE write caching is turned off, you can turn it back
on by setting the
.Va hw.ata.wc
kernel variable back to 1.
This must be done from the boot
.Xr loader 8
at boot time.
Attempting to do it after the kernel boots will have no effect.
Please see
.Xr ata 4
and
.Xr loader 8 .
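.Pp
For example, the corresponding
.Xr loader.conf 5
entry looks like:
.Bd -literal -offset indent
hw.ata.wc="1"
.Ed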
.Pp
There is a new experimental feature for IDE hard drives called
.Va hw.ata.tags
(you also set this in the boot loader) which allows write caching to be safely
turned on.
This brings SCSI tagging features to IDE drives.
As of this
writing only IBM DPTA and DTLA drives support the feature.
Warning!
These
drives apparently have quality control problems and I do not recommend
purchasing them at this time.
If you need performance, go with SCSI.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU or moving to
an SMP motherboard (multiple CPUs), or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
If disk performance is an issue and you
are using IDE drives, switching to SCSI can help a great deal.
While modern
IDE drives compare with SCSI in raw sequential bandwidth, the moment you
start seeking around the disk SCSI drives usually win.
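.Pp
For example (illustrative invocations; the one-second interval is just a
convenient choice), disk and VM activity can be watched with
.Dq Li "systat -vmstat 1"
or
.Dq Li "iostat 1" .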
.Pp
Finally, you might run out of network suds.
The first line of defense for
improving network performance is to make sure you are using switches instead
of hubs, especially these days where switches are almost as cheap.
Hubs
have severe problems under heavy loads due to collision backoff and one bad
host can severely degrade the entire LAN.
Second, optimize the network path
as much as possible.
For example, in
.Xr firewall 7
we describe a firewall protecting internal hosts with a topology where
the externally visible hosts are not routed through it.
Use 100BaseT rather
than 10BaseT, or use 1000BaseT rather than 100BaseT, depending on your needs.
Most bottlenecks occur at the WAN link (e.g.\&
modem, T1, DSL, whatever).
If expanding the link is not an option, it may be possible to use the
.Xr dummynet 4
feature to implement peak shaving or other forms of traffic shaping to
prevent the overloaded service (such as web services) from affecting other
services (such as email), or vice versa.
In home installations this could
be used to give interactive traffic (your browser,
.Xr ssh 1
logins) priority
over services you export from your box (web services, email).
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr systat 1 ,
.Xr ata 4 ,
.Xr dummynet 4 ,
.Xr login.conf 5 ,
.Xr firewall 7 ,
.Xr hier 7 ,
.Xr ports 7 ,
.Xr boot 8 ,
.Xr ccdconfig 8 ,
.Xr config 8 ,
.Xr disklabel 8 ,
.Xr fsck 8 ,
.Xr ifconfig 8 ,
.Xr ipfw 8 ,
.Xr loader 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr route 8 ,
.Xr sysctl 8 ,
.Xr tunefs 8 ,
.Xr vinum 8
.Sh HISTORY
The
.Nm
manual page was originally written by
.An Matthew Dillon
and first appeared
in
.Fx 4.3 ,
May 2001.