Sync tuning-related manual page additions to the handbook.
Submitted by: Hiten Pandya <hiten@unixdaemons.com>
This commit is contained in:
parent
0dfc8f80ed
commit
594d923580
Notes:
svn2git
2020-12-08 03:00:23 +00:00
svn path=/head/; revision=15774
1 changed file with 210 additions and 3 deletions
@ -1251,6 +1251,73 @@ kern.maxfiles: 2088 -> 5000</screen>
experiment to find out.</para>
</sect3>

<sect3>
<title><varname>vfs.write_behind</varname></title>

<indexterm>
<primary><varname>vfs.write_behind</varname></primary>
</indexterm>

<para>The <varname>vfs.write_behind</varname> sysctl variable
defaults to <literal>1</literal> (on). This tells the file system
to issue media writes as full clusters are collected, which
typically occurs when writing large sequential files. The idea
is to avoid saturating the buffer cache with dirty buffers when
it would not benefit I/O performance. However, this may stall
processes and under certain circumstances you may wish to turn it
off.</para>
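
<para>For example, to turn it off at run time (output assuming
the default value of <literal>1</literal>):</para>

<screen>&prompt.root; <userinput>sysctl vfs.write_behind=0</userinput>
vfs.write_behind: 1 -> 0</screen>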
</sect3>

<sect3>
<title><varname>vfs.hirunningspace</varname></title>

<indexterm>
<primary><varname>vfs.hirunningspace</varname></primary>
</indexterm>

<para>The <varname>vfs.hirunningspace</varname> sysctl variable
determines how much outstanding write I/O may be queued to disk
controllers system-wide at any given instant. The default is
usually sufficient, but on machines with many disks you may want
to bump it up to four or five <emphasis>megabytes</emphasis>.
Note that setting too high a value (exceeding the buffer cache's
write threshold) can lead to extremely bad clustering
performance. Do not set this value arbitrarily high! Higher
write values may add latency to reads occurring at the same
time.</para>
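
<para>For example, to raise the limit to four megabytes; the
value is given in bytes, and this is a sketch rather than a
universally recommended setting:</para>

<screen>&prompt.root; <userinput>sysctl vfs.hirunningspace=4194304</userinput></screen>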

<para>There are various other buffer-cache and VM page cache
related sysctls. We do not recommend modifying these values. As
of FreeBSD 4.3, the VM system does an extremely good job of
automatically tuning itself.</para>
</sect3>

<sect3>
<title><varname>vm.swap_idle_enabled</varname></title>

<indexterm>
<primary><varname>vm.swap_idle_enabled</varname></primary>
</indexterm>

<para>The <varname>vm.swap_idle_enabled</varname> sysctl variable
is useful in large multi-user systems where you have lots of
users entering and leaving the system and lots of idle processes.
Such systems tend to generate a great deal of continuous pressure
on free memory reserves. Turning this feature on and tweaking
the swapout hysteresis (in idle seconds) via
<varname>vm.swap_idle_threshold1</varname> and
<varname>vm.swap_idle_threshold2</varname> allows you to depress
the priority of memory pages associated with idle processes more
quickly than the normal pageout algorithm. This gives a helping
hand to the pageout daemon. Do not turn this option on unless
you need it, because the tradeoff you are making is essentially
to pre-page memory sooner rather than later, thus consuming more
swap and disk bandwidth. In a small system this option will have
a detrimental effect, but in a large system that is already doing
moderate paging this option allows the VM system to stage whole
processes into and out of memory easily.</para>
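
<para>A sketch of enabling the feature and tweaking the
hysteresis thresholds; the threshold values here are illustrative
only:</para>

<screen>&prompt.root; <userinput>sysctl vm.swap_idle_enabled=1</userinput>
&prompt.root; <userinput>sysctl vm.swap_idle_threshold1=2</userinput>
&prompt.root; <userinput>sysctl vm.swap_idle_threshold2=10</userinput></screen>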
</sect3>

<sect3>
<title><varname>hw.ata.wc</varname></title>

@ -1279,6 +1346,26 @@ kern.maxfiles: 2088 -> 5000</screen>
<para>For more information, please see &man.ata.4;.</para>
</sect3>

<sect3>
<title><option>SCSI_DELAY</option>
(<varname>kern.cam.scsi_delay</varname>)</title>

<indexterm>
<primary><option>SCSI_DELAY</option></primary>
<secondary><varname>kern.cam.scsi_delay</varname></secondary>
</indexterm>

<para>The <option>SCSI_DELAY</option> kernel config option may be
used to reduce system boot times. The defaults are fairly high
and can be responsible for <literal>15+</literal> seconds of
delay in the boot process. Reducing it to <literal>5</literal>
seconds usually works (especially with modern drives). Newer
versions of FreeBSD (5.0+) should use the
<varname>kern.cam.scsi_delay</varname> boot time tunable. Both
the tunable and the kernel config option accept values in
<emphasis>milliseconds</emphasis> and <emphasis>not</emphasis>
<emphasis>seconds</emphasis>.</para>
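
<para>For example, a sketch of the corresponding
<filename>/boot/loader.conf</filename> entry, setting the delay
to 5 seconds (5000 milliseconds):</para>

<screen>kern.cam.scsi_delay="5000"</screen>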
</sect3>
</sect2>

<sect2 id="soft-updates">

@ -1508,13 +1595,34 @@ kern.maxfiles: 2088 -> 5000</screen>
your system.</para></note>

</sect3>

<sect3>
<title><varname>kern.ipc.somaxconn</varname></title>

<indexterm>
<primary><varname>kern.ipc.somaxconn</varname></primary>
</indexterm>

<para>The <varname>kern.ipc.somaxconn</varname> sysctl variable
limits the size of the listen queue for accepting new TCP
connections. The default value of <literal>128</literal> is
typically too low for robust handling of new connections in a
heavily loaded web server environment. For such environments, it
is recommended to increase this value to <literal>1024</literal>
or higher. The service daemon may itself limit the listen queue
size (e.g. &man.sendmail.8;, or
<application>Apache</application>) but will often have a
directive in its configuration file to adjust the queue size.
Large listen queues also do a better job of avoiding Denial of
Service (<abbrev>DoS</abbrev>) attacks.</para>
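
<para>For example, to raise the limit to the recommended value at
run time (output assuming the default of
<literal>128</literal>):</para>

<screen>&prompt.root; <userinput>sysctl kern.ipc.somaxconn=1024</userinput>
kern.ipc.somaxconn: 128 -> 1024</screen>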
</sect3>
</sect2>

<sect2>
<title>Network Limits</title>

<para>The <option>NMBCLUSTERS</option> kernel configuration
option dictates the amount of network Mbufs available to the
system. A heavily-trafficked server with a low number of Mbufs
will hinder FreeBSD's ability to handle network traffic. Each
cluster represents approximately 2 K of memory, so a value of
1024 represents 2 megabytes of kernel memory reserved for network
buffers. A
@ -1523,7 +1631,106 @@ kern.maxfiles: 2088 -> 5000</screen>
simultaneous connections, and each connection eats a 16 K receive
and 16 K send buffer, you need approximately 32 MB worth of
network buffers to cover the web server. A good rule of thumb is
to multiply by 2, so 2x32 MB / 2 KB = 64 MB / 2 KB = 32768. We
recommend values between 4096 and 32768 for machines with greater
amounts of memory. Under no circumstances should you specify an
arbitrarily high value for this parameter as it could lead to a
boot time crash. The <option>-m</option> option to
&man.netstat.1; may be used to observe network cluster use.</para>
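
<para>For example, to observe current network cluster use (the
output varies from system to system):</para>

<screen>&prompt.user; <userinput>netstat -m</userinput></screen>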

<para>The <varname>kern.ipc.nmbclusters</varname> loader tunable
should be used to tune this at boot time. Only older versions of
FreeBSD will require you to use the <option>NMBCLUSTERS</option>
kernel &man.config.8; option.</para>
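
<para>A sketch of the corresponding
<filename>/boot/loader.conf</filename> entry, using the upper end
of the range recommended above:</para>

<screen>kern.ipc.nmbclusters="32768"</screen>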

<para>Under <emphasis>extreme</emphasis> circumstances, you may
need to modify the <varname>kern.ipc.nsfbufs</varname> sysctl.
This sysctl variable controls the number of filesystem buffers
&man.sendfile.2; is allowed to use for performing its work. This
parameter nominally scales with <varname>kern.maxusers</varname>
so you should not need to modify it.</para>
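
<para>If you do need to change it, it can be set as a boot time
tunable; a hypothetical <filename>/boot/loader.conf</filename>
entry (the value <literal>8192</literal> is illustrative
only):</para>

<screen>kern.ipc.nsfbufs="8192"</screen>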

<sect3>
<title><varname>net.inet.ip.portrange.*</varname></title>

<indexterm>
<primary>net.inet.ip.portrange.*</primary>
</indexterm>

<para>The <varname>net.inet.ip.portrange.*</varname> sysctl
variables control the port number ranges automatically bound to
TCP and UDP sockets. There are three ranges: a low range, a
default range, and a high range. Most network programs use the
default range, which is controlled by
<varname>net.inet.ip.portrange.first</varname> and
<varname>net.inet.ip.portrange.last</varname>; these default to
1024 and 5000, respectively. Bound port ranges are used for
outgoing connections, and it is possible to run the system out of
ports under certain circumstances. This most commonly occurs
when you are running a heavily loaded web proxy. The port range
is not an issue when running servers which handle mainly incoming
connections, such as a normal web server, or which have a limited
number of outgoing connections, such as a mail relay. For
situations where you may run out of ports, it is recommended to
increase <varname>net.inet.ip.portrange.last</varname> modestly.
A value of <literal>10000</literal>, <literal>20000</literal> or
<literal>30000</literal> may be reasonable. You should also
consider firewall effects when changing the port range. Some
firewalls may block large ranges of ports (usually low-numbered
ports) and expect systems to use higher ranges of ports for
outgoing connections; for this reason it is not recommended that
<varname>net.inet.ip.portrange.first</varname> be lowered.</para>
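
<para>For example, to raise the upper bound of the default range
at run time (output assuming the default of
<literal>5000</literal>):</para>

<screen>&prompt.root; <userinput>sysctl net.inet.ip.portrange.last=10000</userinput>
net.inet.ip.portrange.last: 5000 -> 10000</screen>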
</sect3>

<sect3>
<title>TCP Bandwidth Delay Product</title>

<indexterm>
<primary>TCP Bandwidth Delay Product Limiting</primary>
<secondary><varname>net.inet.tcp.inflight_enable</varname></secondary>
</indexterm>

<para>TCP Bandwidth Delay Product Limiting is similar to
TCP/Vegas in <application>NetBSD</application>. It can be
enabled by setting the
<varname>net.inet.tcp.inflight_enable</varname> sysctl variable
to <literal>1</literal>. The system will attempt to calculate
the bandwidth delay product for each connection and limit the
amount of data queued to the network to just the amount required
to maintain optimum throughput.</para>
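
<para>For example, to enable the feature at run time (output
assuming it was previously disabled):</para>

<screen>&prompt.root; <userinput>sysctl net.inet.tcp.inflight_enable=1</userinput>
net.inet.tcp.inflight_enable: 0 -> 1</screen>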

<para>This feature is useful if you are serving data over modems,
Gigabit Ethernet, or even high speed WAN links (or any other link
with a high bandwidth delay product), especially if you are also
using window scaling or have configured a large send window. If
you enable this option, you should also be sure to set
<varname>net.inet.tcp.inflight_debug</varname> to
<literal>0</literal> (disable debugging), and for production use
setting <varname>net.inet.tcp.inflight_min</varname> to at least
<literal>6144</literal> may be beneficial. Note, however, that
setting high minimums may effectively disable bandwidth limiting
depending on the link. The limiting feature reduces the amount
of data built up in intermediate route and switch packet queues
as well as the amount of data built up in the local host's
interface queue. With fewer packets queued up, interactive
connections, especially over slow modems, will also be able to
operate with lower <emphasis>Round Trip Times</emphasis>. Note,
however, that this feature only affects data transmission
(uploading / server side). It has no effect on data reception
(downloading).</para>
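
<para>A sketch of the production settings described above, as
entries in <filename>/etc/sysctl.conf</filename> so that they
persist across reboots:</para>

<screen>net.inet.tcp.inflight_enable=1
net.inet.tcp.inflight_debug=0
net.inet.tcp.inflight_min=6144</screen>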

<para>Adjusting <varname>net.inet.tcp.inflight_stab</varname> is
<emphasis>not</emphasis> recommended. This parameter defaults to
20, representing 2 maximal packets added to the bandwidth delay
product window calculation. The additional window is required to
stabilize the algorithm and improve responsiveness to changing
conditions, but it can also result in higher ping times over slow
links (though still much lower than you would get without the
inflight algorithm). In such cases, you may wish to try reducing
this parameter to 15, 10, or 5; and may also have to reduce
<varname>net.inet.tcp.inflight_min</varname> (for example, to
3500) to get the desired effect. Reducing these parameters
should be done as a last resort only.</para>
</sect3>
</sect2>
</sect1>