Add information on tuning kern.maxvnodes.
PR:		docs/80267
Submitted by:	Brad Davis <so14k@so14k.com>
Approved by:	trhodes (mentor)
This commit is contained in:
parent
4dc2e7591e
commit
e77d110832
Notes:
svn2git
2020-12-08 03:00:23 +00:00
svn path=/head/; revision=24381
1 changed files with 36 additions and 0 deletions
@ -2229,6 +2229,42 @@ device_probe_and_attach: cbb0 attach returned 12</screen>
</note>
</sect3>
</sect2>

<sect2>
<title>Virtual Memory</title>

<sect3>
<title><varname>kern.maxvnodes</varname></title>

<para>A vnode is the internal representation of a file or
directory, so increasing the number of vnodes available to
the operating system reduces disk I/O. Normally, this is
handled by the operating system and does not need to be
changed. In some cases where disk I/O is a bottleneck and
the system is running out of vnodes, this setting needs to
be increased. The amount of inactive and free RAM will
need to be taken into account.</para>
|
||||
|
||||
<para>To see the current number of vnodes in use:</para>

<programlisting>&prompt.root; sysctl vfs.numvnodes
vfs.numvnodes: 91349</programlisting>
|
||||
|
||||
<para>To see the maximum number of vnodes:</para>

<programlisting>&prompt.root; sysctl kern.maxvnodes
kern.maxvnodes: 100000</programlisting>
|
||||
|
||||
<para>If the current vnode usage is near the maximum,
increasing <varname>kern.maxvnodes</varname> by a value of
1,000 is probably a good idea. Keep an eye on
<varname>vfs.numvnodes</varname>; if it climbs to the
maximum again, <varname>kern.maxvnodes</varname> will need
to be increased further. A shift in the memory usage
reported by &man.top.1; should be visible: more memory will
be active.</para>
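
<para>For example, the limit can be raised at runtime with
&man.sysctl.8; and made persistent across reboots in
<filename>/etc/sysctl.conf</filename>. The value shown here is
only an illustration; choose one based on the usage reported by
<varname>vfs.numvnodes</varname>:</para>

<programlisting>&prompt.root; sysctl kern.maxvnodes=101000
kern.maxvnodes: 100000 -> 101000
&prompt.root; echo "kern.maxvnodes=101000" >> /etc/sysctl.conf</programlisting>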
</sect3>
</sect2>
</sect1>

<sect1 id="adding-swap-space">