* Minor grammar fixes.

* Fix a couple of typos.
* <literal> -> <option> for maxusers.
* NOT -> <emphasis>not</emphasis>.
* Correct location of SoftUpdates readme.

PR:		docs/35108
Submitted by:	Ceri <setantae@submonkey.net>
Murray Stokely 2002-03-05 15:35:44 +00:00
parent a0d25850a2
commit 6e4faa49dc
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=12366
2 changed files with 40 additions and 40 deletions
en_US.ISO8859-1/books/arch-handbook/vm
en_US.ISO8859-1/books/developers-handbook/vm


@@ -53,7 +53,7 @@
allocation at interrupt time.</para>
<para>If a process attempts to access a page that does not exist in its
-page table but does exist in one of the paging queues ( such as the
+page table but does exist in one of the paging queues (such as the
inactive or cache queues), a relatively inexpensive page reactivation
fault occurs which causes the page to be reactivated. If the page
does not exist in system memory at all, the process must block while
@@ -61,7 +61,7 @@
<para>FreeBSD dynamically tunes its paging queues and attempts to
maintain reasonable ratios of pages in the various queues as well as
-attempts to maintain a reasonable breakdown of clean v.s. dirty pages.
+attempts to maintain a reasonable breakdown of clean vs. dirty pages.
The amount of rebalancing that occurs depends on the system's memory
load. This rebalancing is implemented by the pageout daemon and
involves laundering dirty pages (syncing them with their backing
@@ -89,7 +89,7 @@
can be stacked on top of each other. For example, you might have a
swap-backed VM object stacked on top of a file-backed VM object in
order to implement a MAP_PRIVATE mmap()ing. This stacking is also
-used to implement various sharing properties, including,
+used to implement various sharing properties, including
copy-on-write, for forked address spaces.</para>
<para>It should be noted that a <literal>vm_page_t</literal> can only be
@@ -106,12 +106,12 @@
system's idea of clean/dirty. For example, when the VM system decides
to synchronize a physical page to its backing store, the VM system
needs to mark the page clean before the page is actually written to
-its backing s tore. Additionally, filesystems need to be able to map
+its backing store. Additionally, filesystems need to be able to map
portions of a file or file metadata into KVM in order to operate on
it.</para>
<para>The entities used to manage this are known as filesystem buffers,
-<literal>struct buf</literal>'s, and also known as
+<literal>struct buf</literal>'s, or
<literal>bp</literal>'s. When a filesystem needs to operate on a
portion of a VM object, it typically maps part of the object into a
struct buf and the maps the pages in the struct buf into KVM. In the
@@ -127,8 +127,8 @@
to hold mappings and does not limit the ability to cache data.
Physical data caching is strictly a function of
<literal>vm_page_t</literal>'s, not filesystem buffers. However,
-since filesystem buffers are used placehold I/O, they do inherently
-limit the amount of concurrent I/O possible. As there are usually a
+since filesystem buffers are used to placehold I/O, they do inherently
+limit the amount of concurrent I/O possible. However, as there are usually a
few thousand filesystem buffers available, this is not usually a
problem.</para>
</sect2>
@@ -147,13 +147,13 @@
<literal>vm_entry_t</literal> structures. Page tables are directly
synthesized from the
<literal>vm_map_t</literal>/<literal>vm_entry_t</literal>/
-<literal>vm_object_t</literal> hierarchy. Remember when I mentioned
+<literal>vm_object_t</literal> hierarchy. Recall that I mentioned
that physical pages are only directly associated with a
-<literal>vm_object</literal>. Well, that is not quite true.
+<literal>vm_object</literal>; that is not quite true.
<literal>vm_page_t</literal>'s are also linked into page tables that
they are actively associated with. One <literal>vm_page_t</literal>
can be linked into several <emphasis>pmaps</emphasis>, as page tables
-are called. However, the hierarchical association holds so all
+are called. However, the hierarchical association holds, so all
references to the same page in the same object reference the same
<literal>vm_page_t</literal> and thus give us buffer cache unification
across the board.</para>
@@ -166,7 +166,7 @@
largest entity held in KVM is the filesystem buffer cache. That is,
mappings relating to <literal>struct buf</literal> entities.</para>
-<para>Unlike Linux, FreeBSD does NOT map all of physical memory into
+<para>Unlike Linux, FreeBSD does <emphasis>not</emphasis> map all of physical memory into
KVM. This means that FreeBSD can handle memory configurations up to
4G on 32 bit platforms. In fact, if the mmu were capable of it,
FreeBSD could theoretically handle memory configurations up to 8TB on
@@ -186,23 +186,23 @@
<para>A concerted effort has been made to make the FreeBSD kernel
dynamically tune itself. Typically you do not need to mess with
-anything beyond the <literal>maxusers</literal> and
-<literal>NMBCLUSTERS</literal> kernel config options. That is, kernel
+anything beyond the <option>maxusers</option> and
+<option>NMBCLUSTERS</option> kernel config options. That is, kernel
compilation options specified in (typically)
<filename>/usr/src/sys/i386/conf/<replaceable>CONFIG_FILE</replaceable></filename>.
A description of all available kernel configuration options can be
found in <filename>/usr/src/sys/i386/conf/LINT</filename>.</para>
<para>In a large system configuration you may wish to increase
-<literal>maxusers</literal>. Values typically range from 10 to 128.
-Note that raising <literal>maxusers</literal> too high can cause the
+<option>maxusers</option>. Values typically range from 10 to 128.
+Note that raising <option>maxusers</option> too high can cause the
system to overflow available KVM resulting in unpredictable operation.
-It is better to leave maxusers at some reasonable number and add other
-options, such as <literal>NMBCLUSTERS</literal>, to increase specific
+It is better to leave <option>maxusers</option> at some reasonable number and add other
+options, such as <option>NMBCLUSTERS</option>, to increase specific
resources.</para>
<para>If your system is going to use the network heavily, you may want
-to increase <literal>NMBCLUSTERS</literal>. Typical values range from
+to increase <option>NMBCLUSTERS</option>. Typical values range from
1024 to 4096.</para>
<para>The <literal>NBUF</literal> parameter is also traditionally used
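(Editor's aside on the hunk above: the <option>maxusers</option> and <option>NMBCLUSTERS</option> knobs it retags are set in the kernel configuration file the text points at. A minimal sketch of such a fragment, with the file name and all values purely illustrative and not part of this commit:

```
# Hypothetical excerpt from /usr/src/sys/i386/conf/MYKERNEL
maxusers	64
options		NMBCLUSTERS=4096	# network mbuf clusters
options		NBUF=1024		# filesystem buffer count
```

The kernel would then be rebuilt from this config in the usual way.)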
@@ -232,8 +232,8 @@ makeoptions COPTFLAGS="-O -pipe"</programlisting>
<para>Run time VM and system tuning is relatively straightforward.
First, use softupdates on your UFS/FFS filesystems whenever possible.
-<filename>/usr/src/contrib/sys/softupdates/README</filename> contains
-instructions (and restrictions) on how to configure it up.</para>
+<filename>/usr/src/sys/ufs/ffs/README.softupdates</filename> contains
+instructions (and restrictions) on how to configure it.</para>
<para>Second, configure sufficient swap. You should have a swap
partition configured on each physical disk, up to four, even on your


@@ -53,7 +53,7 @@
allocation at interrupt time.</para>
<para>If a process attempts to access a page that does not exist in its
-page table but does exist in one of the paging queues ( such as the
+page table but does exist in one of the paging queues (such as the
inactive or cache queues), a relatively inexpensive page reactivation
fault occurs which causes the page to be reactivated. If the page
does not exist in system memory at all, the process must block while
@@ -61,7 +61,7 @@
<para>FreeBSD dynamically tunes its paging queues and attempts to
maintain reasonable ratios of pages in the various queues as well as
-attempts to maintain a reasonable breakdown of clean v.s. dirty pages.
+attempts to maintain a reasonable breakdown of clean vs. dirty pages.
The amount of rebalancing that occurs depends on the system's memory
load. This rebalancing is implemented by the pageout daemon and
involves laundering dirty pages (syncing them with their backing
@@ -89,7 +89,7 @@
can be stacked on top of each other. For example, you might have a
swap-backed VM object stacked on top of a file-backed VM object in
order to implement a MAP_PRIVATE mmap()ing. This stacking is also
-used to implement various sharing properties, including,
+used to implement various sharing properties, including
copy-on-write, for forked address spaces.</para>
<para>It should be noted that a <literal>vm_page_t</literal> can only be
@@ -106,12 +106,12 @@
system's idea of clean/dirty. For example, when the VM system decides
to synchronize a physical page to its backing store, the VM system
needs to mark the page clean before the page is actually written to
-its backing s tore. Additionally, filesystems need to be able to map
+its backing store. Additionally, filesystems need to be able to map
portions of a file or file metadata into KVM in order to operate on
it.</para>
<para>The entities used to manage this are known as filesystem buffers,
-<literal>struct buf</literal>'s, and also known as
+<literal>struct buf</literal>'s, or
<literal>bp</literal>'s. When a filesystem needs to operate on a
portion of a VM object, it typically maps part of the object into a
struct buf and the maps the pages in the struct buf into KVM. In the
@@ -127,8 +127,8 @@
to hold mappings and does not limit the ability to cache data.
Physical data caching is strictly a function of
<literal>vm_page_t</literal>'s, not filesystem buffers. However,
-since filesystem buffers are used placehold I/O, they do inherently
-limit the amount of concurrent I/O possible. As there are usually a
+since filesystem buffers are used to placehold I/O, they do inherently
+limit the amount of concurrent I/O possible. However, as there are usually a
few thousand filesystem buffers available, this is not usually a
problem.</para>
</sect2>
@@ -147,13 +147,13 @@
<literal>vm_entry_t</literal> structures. Page tables are directly
synthesized from the
<literal>vm_map_t</literal>/<literal>vm_entry_t</literal>/
-<literal>vm_object_t</literal> hierarchy. Remember when I mentioned
+<literal>vm_object_t</literal> hierarchy. Recall that I mentioned
that physical pages are only directly associated with a
-<literal>vm_object</literal>. Well, that is not quite true.
+<literal>vm_object</literal>; that is not quite true.
<literal>vm_page_t</literal>'s are also linked into page tables that
they are actively associated with. One <literal>vm_page_t</literal>
can be linked into several <emphasis>pmaps</emphasis>, as page tables
-are called. However, the hierarchical association holds so all
+are called. However, the hierarchical association holds, so all
references to the same page in the same object reference the same
<literal>vm_page_t</literal> and thus give us buffer cache unification
across the board.</para>
@@ -166,7 +166,7 @@
largest entity held in KVM is the filesystem buffer cache. That is,
mappings relating to <literal>struct buf</literal> entities.</para>
-<para>Unlike Linux, FreeBSD does NOT map all of physical memory into
+<para>Unlike Linux, FreeBSD does <emphasis>not</emphasis> map all of physical memory into
KVM. This means that FreeBSD can handle memory configurations up to
4G on 32 bit platforms. In fact, if the mmu were capable of it,
FreeBSD could theoretically handle memory configurations up to 8TB on
@@ -186,23 +186,23 @@
<para>A concerted effort has been made to make the FreeBSD kernel
dynamically tune itself. Typically you do not need to mess with
-anything beyond the <literal>maxusers</literal> and
-<literal>NMBCLUSTERS</literal> kernel config options. That is, kernel
+anything beyond the <option>maxusers</option> and
+<option>NMBCLUSTERS</option> kernel config options. That is, kernel
compilation options specified in (typically)
<filename>/usr/src/sys/i386/conf/<replaceable>CONFIG_FILE</replaceable></filename>.
A description of all available kernel configuration options can be
found in <filename>/usr/src/sys/i386/conf/LINT</filename>.</para>
<para>In a large system configuration you may wish to increase
-<literal>maxusers</literal>. Values typically range from 10 to 128.
-Note that raising <literal>maxusers</literal> too high can cause the
+<option>maxusers</option>. Values typically range from 10 to 128.
+Note that raising <option>maxusers</option> too high can cause the
system to overflow available KVM resulting in unpredictable operation.
-It is better to leave maxusers at some reasonable number and add other
-options, such as <literal>NMBCLUSTERS</literal>, to increase specific
+It is better to leave <option>maxusers</option> at some reasonable number and add other
+options, such as <option>NMBCLUSTERS</option>, to increase specific
resources.</para>
<para>If your system is going to use the network heavily, you may want
-to increase <literal>NMBCLUSTERS</literal>. Typical values range from
+to increase <option>NMBCLUSTERS</option>. Typical values range from
1024 to 4096.</para>
<para>The <literal>NBUF</literal> parameter is also traditionally used
@@ -232,8 +232,8 @@ makeoptions COPTFLAGS="-O -pipe"</programlisting>
<para>Run time VM and system tuning is relatively straightforward.
First, use softupdates on your UFS/FFS filesystems whenever possible.
-<filename>/usr/src/contrib/sys/softupdates/README</filename> contains
-instructions (and restrictions) on how to configure it up.</para>
+<filename>/usr/src/sys/ufs/ffs/README.softupdates</filename> contains
+instructions (and restrictions) on how to configure it.</para>
<para>Second, configure sufficient swap. You should have a swap
partition configured on each physical disk, up to four, even on your