.\" Copyright (C) 2001 Matthew Dillon. All rights reserved.
.\" Copyright (C) 2012 Eitan Adler.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd October 30, 2017
.Dt TUNING 7
.Os
.Sh NAME
.Nm tuning
.Nd performance tuning under FreeBSD
.Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
The swap partition should typically be approximately 2x the size of
main memory
for systems with less than 4GB of RAM, or approximately equal to
the size of main memory
if you have more.
Keep in mind future memory
expansion when sizing the swap partition.
Configuring too little swap can lead
to inefficiencies in the VM page scanning code as well as create issues
later on if you add more memory to your machine.
On larger systems
with multiple disks, configure swap on each drive.
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
Keeping
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Do not worry about overdoing it a
little; swap space is the saving grace of
.Ux
and even if you do not normally use much swap, it can give you more time to
recover from a runaway program before being forced to reboot.
.Pp
It is not a good idea to make one large partition.
First,
each partition has different operational characteristics and separating them
allows the file system to tune itself to those characteristics.
For example,
the root and
.Pa /usr
partitions are read-mostly, with very little writing, while
a lot of reading and writing could occur in
.Pa /var/tmp .
By properly
partitioning your system, fragmentation introduced in the smaller, more
heavily write-loaded partitions will not bleed over into the mostly-read
partitions.
.Pp
Properly partitioning your system also allows you to tune
.Xr newfs 8 ,
and
.Xr tunefs 8
parameters.
The only
.Xr tunefs 8
option worth turning on is
.Em softupdates
with
.Dq Li "tunefs -n enable /filesystem" .
Softupdates drastically improves meta-data performance, mainly file
creation and deletion.
We recommend enabling softupdates on most file systems; however, there
are two limitations to softupdates that you should be aware of when
determining whether to use it on a file system.
First, softupdates guarantees file system consistency in the
case of a crash but could very easily be several seconds (even a minute!\&)
behind on pending writes to the physical disk.
If you crash you may lose more work
than otherwise.
Secondly, softupdates delays the freeing of file system
blocks.
If you have a file system (such as the root file system) which is
close to full, doing a major update of it, e.g.,\&
.Dq Li "make installworld" ,
can run it out of space and cause the update to fail.
For this reason, softupdates will not be enabled on the root file system
during a typical install.
There is no loss of performance since the root
file system is rarely written to.
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Only use this option in conjunction with
.Xr gjournal 8 ,
as it is far too dangerous on a normal file system.
A less dangerous and more
useful
.Xr mount 8
option is called
.Cm noatime .
.Ux
file systems normally update the last-accessed time of a file or
directory whenever it is accessed.
This operation is handled in
.Fx
with a delayed write and normally does not create a burden on the system.
However, if your system is accessing a huge number of files on a continuing
basis the buffer cache can wind up getting polluted with atime updates,
creating a burden on the system.
For example, if you are running a heavily
loaded web site, or a news server with lots of readers, you might want to
consider turning off atime updates on your larger partitions with this
.Xr mount 8
option.
However, you should not gratuitously turn off atime
updates everywhere.
For example, the
.Pa /var
file system customarily
holds mailboxes, and atime (in combination with mtime) is used to
determine whether a mailbox has new mail.
You might as well leave
atime turned on for mostly read-only partitions such as
.Pa /
and
.Pa /usr
as well.
This is especially useful for
.Pa /
since some system utilities
use the atime field for reporting.
.Sh STRIPING DISKS
In larger systems you can stripe partitions from several drives together
to create a much larger overall partition.
Striping can also improve
the performance of a file system by splitting I/O operations across two
or more disks.
The
.Xr gstripe 8 ,
.Xr gvinum 8 ,
and
.Xr ccdconfig 8
utilities may be used to create simple striped file systems.
Generally
speaking, striping smaller partitions such as the root and
.Pa /var/tmp ,
or essentially read-only partitions such as
.Pa /usr
is a complete waste of time.
You should only stripe partitions that require serious I/O performance,
typically
.Pa /var , /home ,
or custom partitions used to hold databases and web pages.
Choosing the proper stripe size is also
important.
File systems tend to store meta-data on power-of-2 boundaries
and you usually want to reduce seeking rather than increase seeking.
This
means you want to use a large off-center stripe size such as 1152 sectors
so sequential I/O does not seek both disks and so meta-data is distributed
across both disks rather than concentrated on a single disk.
.Sh SYSCTL TUNING
.Xr sysctl 8
variables permit system behavior to be monitored and controlled at
run-time.
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
.Xr rc.conf 5 ,
but most will be set via
.Xr sysctl.conf 5 .
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
on the system.
.Pp
The
.Va vm.overcommit
sysctl defines the overcommit behaviour of the vm subsystem.
The virtual memory system always does accounting of the swap space
reservation, both total for the system and per-user.
Corresponding values
are available through sysctl
.Va vm.swap_total ,
that gives the total bytes available for swapping, and
.Va vm.swap_reserved ,
that gives the number of bytes that may be needed to back all currently
allocated anonymous memory.
.Pp
Setting bit 0 of the
.Va vm.overcommit
sysctl causes the virtual memory system to return failure
to the process when allocation of memory causes
.Va vm.swap_reserved
to exceed
.Va vm.swap_total .
Bit 1 of the sysctl enforces the
.Dv RLIMIT_SWAP
limit
(see
.Xr getrlimit 2 ) .
Root is exempt from this limit.
Bit 2 allows counting most of the physical
memory as allocatable, except wired and free reserved pages
(accounted by the
.Va vm.stats.vm.v_free_target
and
.Va vm.stats.vm.v_wire_count
sysctls, respectively).
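.Pp
For monitoring, both accounting values can also be read programmatically
with
.Xr sysctlbyname 3 .
The following minimal C sketch (not part of the base system) prints them;
it assumes the values are exported as plain integers of at most 64 bits:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

/* Read an integer sysctl of unknown width (4 or 8 bytes); -1 on error. */
static int64_t
read_int_sysctl(const char *name)
{
	uint32_t v32;
	uint64_t v64;
	size_t len = 0;

	if (sysctlbyname(name, NULL, &len, NULL, 0) == -1)
		return (-1);
	if (len == sizeof(v32)) {
		if (sysctlbyname(name, &v32, &len, NULL, 0) == 0)
			return ((int64_t)v32);
	} else if (len == sizeof(v64)) {
		if (sysctlbyname(name, &v64, &len, NULL, 0) == 0)
			return ((int64_t)v64);
	}
	return (-1);
}

int
main(void)
{
	printf("vm.swap_total:    %jd bytes\en",
	    (intmax_t)read_int_sysctl("vm.swap_total"));
	printf("vm.swap_reserved: %jd bytes\en",
	    (intmax_t)read_int_sysctl("vm.swap_reserved"));
	return (0);
}
.Ed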
.Pp
The
.Va kern.ipc.maxpipekva
loader tunable is used to set a hard limit on the
amount of kernel address space allocated to mapping of pipe buffers.
Use of the mapping allows the kernel to eliminate a copy of the
data from writer address space into the kernel, directly copying
the content of mapped buffer to the reader.
Increasing this value to a higher setting, such as `25165824', might
improve performance on systems where space for mapping pipe buffers
is quickly exhausted.
This exhaustion is not fatal, however; it will only cause pipes
to fall back to using double-copy.
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 0 (off) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
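.Pp
The segments affected are those created with the System V shared memory
interface.
For reference, a minimal C sketch of such an allocation (the 16 MB size
is purely illustrative) might look like:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/* Create a private 16 MB System V shared memory segment. */
	int shmid = shmget(IPC_PRIVATE, 16 * 1024 * 1024, IPC_CREAT | 0600);
	void *p;

	if (shmid == -1) {
		perror("shmget");
		return (1);
	}
	/* Attach it; with kern.ipc.shm_use_phys=1 it is wired in RAM. */
	p = shmat(shmid, NULL, 0);
	if (p == (void *)-1) {
		perror("shmat");
		return (1);
	}
	memset(p, 0, 16 * 1024 * 1024);

	shmdt(p);
	shmctl(shmid, IPC_RMID, NULL);	/* Clean up the segment. */
	return (0);
}
.Ed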
.Pp
The
.Va vfs.vmiodirenable
sysctl defaults to 1 (on).
This parameter controls how directories are cached
by the system.
Most directories are small and use but a single fragment
(typically 2K) in the file system and even less (typically 512 bytes) in
the buffer cache.
However, when operating in the default mode the buffer
cache will only cache a fixed number of directories even if you have a huge
amount of memory.
Turning on this sysctl allows the buffer cache to use
the VM Page Cache to cache the directories.
The advantage is that all of
memory is now available for caching directories.
The disadvantage is that
the minimum in-core memory used to cache a directory is the physical page
size (typically 4K) rather than 512 bytes.
We recommend turning this option off in memory-constrained environments;
however, when on, it will substantially improve the performance of services
that manipulate a large number of files.
Such services can include web caches, large mail systems, and news systems.
Turning on this option will generally not reduce performance even with the
wasted memory but you should experiment to find out.
.Pp
The
.Va vfs.write_behind
sysctl defaults to 1 (on).
This tells the file system to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files.
The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance.
However,
this may stall processes and under certain circumstances you may wish to turn
it off.
.Pp
The
.Va vfs.hirunningspace
sysctl determines how much outstanding write I/O may be queued to
disk controllers system-wide at any given time.
It is used by the UFS file system.
The default is self-tuned and
usually sufficient, but on machines with advanced controllers and lots
of disks this may be tuned up to match what the controllers buffer.
Configuring this setting to match the tagged queuing capabilities of the
controllers or drives, combined with the average I/O size used in
production, works best (for example: 16 MiB will use 128 tags with I/O
requests of 128 KiB).
Note that setting too high a value
(exceeding the buffer cache's write threshold) can lead to extremely
bad clustering performance.
Do not set this value arbitrarily high!
Higher write queuing values may also add latency to reads occurring at
the same time.
.Pp
The
.Va vfs.read_max
sysctl governs VFS read-ahead and is expressed as the number of blocks
to pre-read if the heuristics algorithm decides that the reads are
issued sequentially.
It is used by the UFS, ext2fs and msdosfs file systems.
With the default UFS block size of 32 KiB, a setting of 64 will allow
speculatively reading up to 2 MiB.
This setting may be increased to get around disk I/O latencies, especially
where these latencies are large such as in virtual machine emulated
environments.
It may be tuned down in specific cases where the I/O load is such that
read-ahead adversely affects performance or where system memory is really
low.
.Pp
The
.Va vfs.ncsizefactor
sysctl defines how large the VFS namecache may grow.
The number of currently allocated entries in the namecache is provided by the
.Va debug.numcache
sysctl and the condition
debug.numcache < kern.maxvnodes * vfs.ncsizefactor
is adhered to.
.Pp
The
.Va vfs.ncnegfactor
sysctl defines how many negative entries the VFS namecache is allowed to
create.
The number of currently allocated negative entries is provided by the
.Va debug.numneg
sysctl and the condition
vfs.ncnegfactor * debug.numneg < debug.numcache
is adhered to.
.Pp
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying these values.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
They control the amount of send and receive buffer space
allowed for any given TCP connection.
The default sending buffer is 32K; the default receiving buffer
is 64K.
You can often
improve bandwidth utilization by increasing the default at the cost of
eating up more kernel memory for each connection.
We do not recommend
increasing the defaults if you are serving hundreds or thousands of
simultaneous connections because it is possible to quickly run the system
out of memory due to stalled connections building up.
But if you need
high bandwidth over a smaller number of connections, especially if you have
gigabit Ethernet, increasing these defaults can make a huge difference.
You can adjust the buffer size for incoming and outgoing data separately.
For example, if your machine is primarily doing web serving you may want
to decrease the recvspace in order to be able to increase the
sendspace without eating too much kernel memory.
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
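.Pp
These sysctls only set system-wide defaults; an individual application
may also request its own per-socket buffer sizes with
.Xr setsockopt 2 ,
subject to kernel limits such as
.Va kern.ipc.maxsockbuf .
A minimal C sketch (the 128K figure is an example, not a recommendation):
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

int
main(void)
{
	int s = socket(AF_INET, SOCK_STREAM, 0);
	int bufsize = 128 * 1024;	/* example value only */

	if (s == -1) {
		perror("socket");
		return (1);
	}
	/* Request larger per-socket send and receive buffers. */
	if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize,
	    sizeof(bufsize)) == -1)
		perror("setsockopt(SO_SNDBUF)");
	if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bufsize,
	    sizeof(bufsize)) == -1)
		perror("setsockopt(SO_RCVBUF)");
	return (0);
}
.Ed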
.Pp
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will result
in a marginal performance improvement unless both hosts support the window
scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC1323 support is enabled by default.
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is enabled for all applications; by setting this
sysctl to 0, only applications that specifically request keepalives
will use them.
In most environments, TCP keepalives will improve the management of
system state by expiring dead TCP connections, particularly for
systems serving dialup users who may not always terminate individual
TCP connections before disconnecting from the network.
However, in some environments, temporary network outages may be
incorrectly identified as dead sessions, resulting in unexpectedly
terminated TCP connections.
In such environments, setting the sysctl to 0 may reduce the occurrence of
TCP session disconnections.
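.Pp
With the sysctl set to 0, an application that still wants dead-connection
detection on a particular socket can request keepalives explicitly;
a minimal C sketch:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

int
main(void)
{
	int s = socket(AF_INET, SOCK_STREAM, 0);
	int on = 1;

	if (s == -1) {
		perror("socket");
		return (1);
	}
	/* Explicitly request keepalive probes on this socket only. */
	if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) == -1)
		perror("setsockopt(SO_KEEPALIVE)");
	return (0);
}
.Ed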
.Pp
The
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood.
Historically speaking, this feature
was designed to allow the acknowledgement of transmitted data to be returned
along with the response.
For example, when you type over a remote shell,
the acknowledgement of the character you send can be returned along with the
data representing the echo of the character.
With delayed acks turned off,
the acknowledgement may be sent in its own packet, before the remote service
has a chance to echo the data it just received.
This same concept also
applies to any interactive protocol (e.g.,\& SMTP, WWW, POP3), and can cut the
number of tiny packets flowing across the network in half.
The
.Fx
delayed ACK implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 100ms
timeout has not yet passed.
Normally the worst a delayed ACK can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection.
While we are not certain, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks are referring to the slow-start issue.
.Pp
The
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets.
There are three ranges: a low range, a default range, and a
high range, selectable via the
.Dv IP_PORTRANGE
.Xr setsockopt 2
call.
Most
network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
and
.Va net.inet.ip.portrange.last ,
which default to 49152 and 65535, respectively.
Bound port ranges are
used for outgoing connections, and it is possible to run the system out
of ports under certain circumstances.
This most commonly occurs when you are
running a heavily loaded web proxy.
The port range is not an issue
when running a server which handles mainly incoming connections, such as a
normal web server, or has a limited number of outgoing connections, such
as a mail relay.
For situations where you may run out of ports,
we recommend decreasing
.Va net.inet.ip.portrange.first
modestly.
A range of 10000 to 30000 ports may be reasonable.
You should also consider firewall effects when changing the port range.
Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections.
By default
.Va net.inet.ip.portrange.last
is set at the maximum allowable port number.
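.Pp
An application selects one of the three ranges on a per-socket basis
before binding or connecting; for reference, a minimal C sketch that
requests the high range:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

int
main(void)
{
	int s = socket(AF_INET, SOCK_STREAM, 0);
	int range = IP_PORTRANGE_HIGH;	/* or IP_PORTRANGE_DEFAULT/LOW */

	if (s == -1) {
		perror("socket");
		return (1);
	}
	/* Ask for an ephemeral port from the high range on this socket. */
	if (setsockopt(s, IPPROTO_IP, IP_PORTRANGE, &range,
	    sizeof(range)) == -1)
		perror("setsockopt(IP_PORTRANGE)");
	return (0);
}
.Ed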
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
The service daemon
may itself limit the listen queue size (e.g.,\&
.Xr sendmail 8 ,
apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
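.Pp
The backlog a daemon requests through
.Xr listen 2
is silently limited to this sysctl's value, so raising only the daemon's
own configuration directive is not sufficient.
A minimal C sketch of the daemon side (the port number is only an example):
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	int s = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in sin;

	if (s == -1) {
		perror("socket");
		return (1);
	}
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_len = sizeof(sin);
	sin.sin_port = htons(8080);
	sin.sin_addr.s_addr = htonl(INADDR_ANY);
	if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1) {
		perror("bind");
		return (1);
	}
	/* Request a deep listen queue; the kernel caps it at somaxconn. */
	if (listen(s, 1024) == -1) {
		perror("listen");
		return (1);
	}
	return (0);
}
.Ed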
.Pp
The
.Va kern.maxfiles
sysctl determines how many open files the system supports.
The default is
typically a few thousand but you may need to bump this up to ten or twenty
thousand if you are running databases or large descriptor-heavy daemons.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine the current number of open files
on the system.
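.Pp
Monitoring tools can compare the two values directly; a minimal C sketch,
assuming both sysctls are exported as plain integers:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int maxfiles, openfiles;
	size_t len;

	len = sizeof(maxfiles);
	if (sysctlbyname("kern.maxfiles", &maxfiles, &len, NULL, 0) == -1) {
		perror("kern.maxfiles");
		return (1);
	}
	len = sizeof(openfiles);
	if (sysctlbyname("kern.openfiles", &openfiles, &len, NULL, 0) == -1) {
		perror("kern.openfiles");
		return (1);
	}
	printf("%d of %d file descriptors in use\en", openfiles, maxfiles);
	return (0);
}
.Ed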
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
.Va kern.maxusers
controls the scaling of a number of static system tables, including defaults
for the maximum number of open files, sizing of network memory resources, etc.
.Va kern.maxusers
is automatically sized at boot based on the amount of memory available in
the system, and may be determined at run-time by inspecting the value of the
read-only
.Va kern.maxusers
sysctl.
.Pp
The
.Va kern.dfldsiz
and
.Va kern.dflssiz
tunables set the default soft limits for process data and stack size
respectively.
Processes may increase these up to the hard limits by calling
.Xr setrlimit 2 .
The
.Va kern.maxdsiz ,
.Va kern.maxssiz ,
and
.Va kern.maxtsiz
tunables set the hard limits for process data, stack, and text size
respectively; processes may not exceed these limits.
The
.Va kern.sgrowsiz
tunable controls how much the stack segment will grow when a process
needs to allocate more stack.
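.Pp
For reference, a process that needs more data space than the default soft
limit can raise the soft limit up to the hard limit itself; a minimal C
sketch:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/resource.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	struct rlimit rl;

	/* Raise the soft data-size limit to the hard limit. */
	if (getrlimit(RLIMIT_DATA, &rl) == -1) {
		perror("getrlimit");
		return (1);
	}
	rl.rlim_cur = rl.rlim_max;
	if (setrlimit(RLIMIT_DATA, &rl) == -1) {
		perror("setrlimit");
		return (1);
	}
	printf("data size limit now %ju bytes\en", (uintmax_t)rl.rlim_cur);
	return (0);
}
.Ed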
.Pp
.Va kern.ipc.nmbclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
You can do a simple calculation to figure out how many you need.
If you have a web server which maxes out at 1000 simultaneous connections,
and each connection eats a 16K receive and 16K send buffer, you need
approximately 32MB worth of network buffers to deal with it.
A good rule of
thumb is to multiply by 2, so 32MB x 2 = 64MB, and 64MB / 2K = 32768.
So for this case
you would want to set
.Va kern.ipc.nmbclusters
to 32768.
We recommend values between
1024 and 4096 for machines with moderate amounts of memory, and between 4096
and 32768 for machines with greater amounts of memory.
Under no circumstances
should you specify an arbitrarily high value for this parameter, as it could
lead to a boot-time crash.
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
.Pp
More and more programs are using the
.Xr sendfile 2
system call to transmit files over the network.
The
.Va kern.ipc.nsfbufs
sysctl controls the number of file system buffers
.Xr sendfile 2
is allowed to use to perform its work.
This parameter nominally scales
with
.Va kern.maxusers
so you should not need to modify this parameter except under extreme
circumstances.
See the
.Sx TUNING
section in the
.Xr sendfile 2
manual page for details.
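.Pp
For reference, a minimal C sketch of the call itself; it assumes that fd
is an open file and s is a connected stream socket set up elsewhere:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Send an entire open file out over a connected stream socket.
 * fd and s are assumed to have been opened/accepted by the caller.
 */
int
send_whole_file(int fd, int s)
{
	off_t sent = 0;

	/* An nbytes argument of 0 means "send until end of file". */
	if (sendfile(fd, s, 0, 0, NULL, &sent, 0) == -1) {
		perror("sendfile");
		return (-1);
	}
	printf("sent %jd bytes\en", (intmax_t)sent);
	return (0);
}
.Ed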
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally the first thing you do when creating your own custom
kernel is to strip out all the drivers and services you do not use.
Removing things like
.Dv INET6
and drivers you do not have will reduce the size of your kernel, sometimes
by a megabyte or more, leaving more memory available for applications.
.Pp
.Dv SCSI_DELAY
may be used to reduce system boot times.
The defaults are fairly high and
can be responsible for 5+ seconds of delay in the boot process.
Reducing
.Dv SCSI_DELAY
to something below 5 seconds could work (especially with modern drives).
.Pp
There are a number of
.Dv *_CPU
options that can be commented out.
If you only want the kernel to run
on a Pentium class CPU, you can easily remove
.Dv I486_CPU ,
but only remove
.Dv I586_CPU
if you are sure your CPU is being recognized as a Pentium II or better.
Some clones may be recognized as a Pentium or even a 486 and not be able
to boot without those options.
If it works, great!
The operating system
will be able to better use higher-end CPU features for MMU, task switching,
timebase, and even device operations.
Additionally, higher-end CPUs support
4MB MMU pages, which the kernel uses to map the kernel itself into memory,
increasing its efficiency under heavy syscall loads.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU,
or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
.Pp
Finally, you might run out of network bandwidth.
Optimize the network path
as much as possible.
For example, in
.Xr firewall 7
we describe a firewall protecting internal hosts with a topology where
the externally visible hosts are not routed through it.
Most bottlenecks occur at the WAN link.
If expanding the link is not an option it may be possible to use the
.Xr dummynet 4
feature to implement peak shaving or other forms of traffic shaping to
prevent the overloaded service (such as web services) from affecting other
services (such as email), or vice versa.
In home installations this could
be used to give interactive traffic (your browser,
.Xr ssh 1
logins) priority
over services you export from your box (web services, email).
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr systat 1 ,
.Xr sendfile 2 ,
.Xr ata 4 ,
.Xr dummynet 4 ,
.Xr eventtimers 4 ,
.Xr login.conf 5 ,
.Xr rc.conf 5 ,
.Xr sysctl.conf 5 ,
.Xr firewall 7 ,
.Xr hier 7 ,
.Xr ports 7 ,
.Xr boot 8 ,
.Xr bsdinstall 8 ,
.Xr ccdconfig 8 ,
.Xr config 8 ,
.Xr fsck 8 ,
.Xr gjournal 8 ,
.Xr gpart 8 ,
.Xr gstripe 8 ,
.Xr gvinum 8 ,
.Xr ifconfig 8 ,
.Xr ipfw 8 ,
.Xr loader 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr route 8 ,
.Xr sysctl 8 ,
.Xr tunefs 8
.Sh HISTORY
The
.Nm
manual page was originally written by
.An Matthew Dillon
and first appeared
in
.Fx 4.3 ,
May 2001.
The manual page was greatly modified by
.An Eitan Adler Aq Mt eadler@FreeBSD.org .