Whitespace-only fixes for the filesystems chapter. Translators, please ignore.

Patch from dru on freebsd-doc, plus additional indentation fixes for ZFS
section and a few other miscellaneous whitespace problems.

Submitted by:	Dru Lavigne <dru.lavigne@att.net>
This commit is contained in:
Warren Block 2013-01-18 23:26:13 +00:00
parent 44ef37faf0
commit 74b4476eda
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=40681

@@ -47,17 +47,18 @@
(<acronym>ZFS</acronym>).</para>
<para>There are different levels of support for the various file
systems in &os;. Some will require a kernel module to be loaded,
others may require a toolset to be installed. This chapter is
designed to help users of &os; access other file systems on their
systems, starting with the &sun; Z file
systems in &os;. Some will require a kernel module to be
loaded, others may require a toolset to be installed. This
chapter is designed to help users of &os; access other file
systems on their systems, starting with the &sun; Z file
system.</para>
<para>After reading this chapter, you will know:</para>
<itemizedlist>
<listitem>
<para>The difference between native and supported file systems.</para>
<para>The difference between native and supported file
systems.</para>
</listitem>
<listitem>
@@ -113,10 +114,11 @@
<title>ZFS Tuning</title>
<para>The <acronym>ZFS</acronym> subsystem utilizes much of
the system resources, so some tuning may be required to provide
maximum efficiency during every-day use. As an experimental
feature in &os; this may change in the near future; however,
at this time, the following steps are recommended.</para>
the system resources, so some tuning may be required to
provide maximum efficiency during every-day use. As an
experimental feature in &os; this may change in the near
future; however, at this time, the following steps are
recommended.</para>
<sect3>
<title>Memory</title>
@@ -127,9 +129,10 @@
several other tuning mechanisms in place.</para>
<para>Some people have had luck using fewer than one gigabyte
of memory, but with such a limited amount of physical memory,
when the system is under heavy load, it is very plausible
that &os; will panic due to memory exhaustion.</para>
of memory, but with such a limited amount of physical
memory, when the system is under heavy load, it is very
plausible that &os; will panic due to memory
exhaustion.</para>
</sect3>
<sect3>
@@ -138,11 +141,12 @@
<para>It is recommended that unused drivers and options
be removed from the kernel configuration file. Since most
devices are available as modules, they may be loaded
using the <filename>/boot/loader.conf</filename> file.</para>
using the <filename>/boot/loader.conf</filename>
file.</para>
<para>Users of the &i386; architecture should add the following
option to their kernel configuration file, rebuild their
kernel, and reboot:</para>
<para>Users of the &i386; architecture should add the
following option to their kernel configuration file,
rebuild their kernel, and reboot:</para>
<programlisting>options KVA_PAGES=512</programlisting>
@@ -158,11 +162,11 @@
<sect3>
<title>Loader Tunables</title>
<para>The <devicename>kmem</devicename> address space should be
increased on all &os; architectures. On the test system with
one gigabyte of physical memory, success was achieved with the
following options which should be placed in
the <filename>/boot/loader.conf</filename> file and the system
<para>The <devicename>kmem</devicename> address space should
be increased on all &os; architectures. On the test system
with one gigabyte of physical memory, success was achieved
with the following options which should be placed in the
<filename>/boot/loader.conf</filename> file and the system
restarted:</para>
<programlisting>vm.kmem_size="330M"
@@ -170,9 +174,9 @@ vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"</programlisting>
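The four loader tunables above must land in <filename>/boot/loader.conf</filename> before the reboot. A minimal sketch of doing that idempotently (the script name and the `CONF` override are illustrative, not part of the patch; it defaults to a scratch file so it can be tried without touching a real system):

```shell
#!/bin/sh
# Append the ZFS loader tunables from the text to a loader.conf,
# skipping any tunable that is already present. CONF defaults to a
# scratch path; point it at /boot/loader.conf on a real system.
CONF="${CONF:-./loader.conf.test}"
touch "$CONF"
for line in \
    'vm.kmem_size="330M"' \
    'vm.kmem_size_max="330M"' \
    'vfs.zfs.arc_max="40M"' \
    'vfs.zfs.vdev.cache.size="5M"'
do
    key=${line%%=*}                       # tunable name, left of '='
    if ! grep -q "^${key}=" "$CONF"; then
        echo "$line" >> "$CONF"
    fi
done
```

Running it a second time adds nothing, so it is safe to re-run after editing the sizes by hand.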
<para>For a more detailed list of recommendations for ZFS-related
tuning, see
<ulink url="http://wiki.freebsd.org/ZFSTuningGuide"></ulink>.</para>
<para>For a more detailed list of recommendations for
ZFS-related tuning, see <ulink
url="http://wiki.freebsd.org/ZFSTuningGuide"></ulink>.</para>
</sect3>
</sect2>
@@ -184,23 +188,25 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
initialization. To set it, issue the following
commands:</para>
<screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' &gt;&gt; /etc/rc.conf</userinput>
<screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' &gt;&gt; /etc/rc.conf</userinput>
&prompt.root; <userinput>/etc/rc.d/zfs start</userinput></screen>
<para>The remainder of this document assumes three
<acronym>SCSI</acronym> disks are available, and their device names
are <devicename><replaceable>da0</replaceable></devicename>,
<devicename><replaceable>da1</replaceable></devicename>
and <devicename><replaceable>da2</replaceable></devicename>.
Users of <acronym>IDE</acronym> hardware may use the
<devicename><replaceable>ad</replaceable></devicename>
devices in place of <acronym>SCSI</acronym> hardware.</para>
<para>The remainder of this document assumes three
<acronym>SCSI</acronym> disks are available, and their
device names are
<devicename><replaceable>da0</replaceable></devicename>,
<devicename><replaceable>da1</replaceable></devicename>
and <devicename><replaceable>da2</replaceable></devicename>.
Users of <acronym>IDE</acronym> hardware may use the
<devicename><replaceable>ad</replaceable></devicename>
devices in place of <acronym>SCSI</acronym> hardware.</para>
<sect3>
<title>Single Disk Pool</title>
<para>To create a simple, non-redundant <acronym>ZFS</acronym> pool using a
single disk device, use the <command>zpool</command> command:</para>
<para>To create a simple, non-redundant <acronym>ZFS</acronym>
pool using a single disk device, use the
<command>zpool</command> command:</para>
<screen>&prompt.root; <userinput>zpool create example /dev/da0</userinput></screen>
@@ -239,8 +245,8 @@ drwxr-xr-x 21 root wheel 512 Aug 29 23:12 ..
<para>The <literal>example/compressed</literal> is now a
<acronym>ZFS</acronym> compressed file system. Try copying
some large files to it by copying them to
<filename class="directory">/example/compressed</filename>.</para>
some large files to it by copying them to <filename
class="directory">/example/compressed</filename>.</para>
<para>The compression may now be disabled with:</para>
@@ -307,8 +313,8 @@ example/data 17547008 0 17547008 0% /example/data</screen>
amount of available space. This is the reason for using
<command>df</command> through these examples, to show
that the file systems are using only the amount of space
they need and will all draw from the same pool.
The <acronym>ZFS</acronym> file system does away with concepts
they need and will all draw from the same pool. The
<acronym>ZFS</acronym> file system does away with concepts
such as volumes and partitions, and allows for several file
systems to occupy the same pool. Destroy the file systems,
and then destroy the pool as they are no longer
@@ -332,28 +338,31 @@ example/data 17547008 0 17547008 0% /example/data</screen>
<para>As previously noted, this section will assume that
three <acronym>SCSI</acronym> disks exist as devices
<devicename>da0</devicename>, <devicename>da1</devicename>
and <devicename>da2</devicename> (or <devicename>ad0</devicename>
and beyond in case IDE disks are being used). To create a
<acronym>RAID</acronym>-Z pool, issue the following
command:</para>
and <devicename>da2</devicename> (or
<devicename>ad0</devicename> and beyond in case IDE disks
are being used). To create a <acronym>RAID</acronym>-Z
pool, issue the following command:</para>
<screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>
<note><para>&sun; recommends that the amount of devices used in a
<acronym>RAID</acronym>-Z configuration is between three and nine. If your needs
call for a single pool to consist of 10 disks or more, consider
breaking it up into smaller <acronym>RAID</acronym>-Z groups. If
you only have two disks and still require redundancy, consider using
a <acronym>ZFS</acronym> mirror instead. See the &man.zpool.8;
manual page for more details.</para></note>
<note>
<para>&sun; recommends that the amount of devices used
in a <acronym>RAID</acronym>-Z configuration is between
three and nine. If your needs call for a single pool to
consist of 10 disks or more, consider breaking it up into
smaller <acronym>RAID</acronym>-Z groups. If you only
have two disks and still require redundancy, consider
using a <acronym>ZFS</acronym> mirror instead. See the
&man.zpool.8; manual page for more details.</para>
</note>
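The note above encodes a simple sizing rule: mirror two disks, RAID-Z three to nine, split anything larger. A sketch that turns a disk list into a suggested <command>zpool</command> invocation (the `suggest_pool` helper and pool names are illustrative; it only prints the command, it never runs it):

```shell
#!/bin/sh
# Suggest a pool layout from a disk list, following the guidance in
# the note: mirror for 2 disks, raidz for 3-9, split beyond that.
suggest_pool() {
    name=$1; shift
    case $# in
    0|1)   echo "need at least two disks for redundancy" ;;
    2)     echo "zpool create $name mirror $*" ;;
    [3-9]) echo "zpool create $name raidz $*" ;;
    *)     echo "consider splitting $# disks into smaller raidz groups" ;;
    esac
}
suggest_pool storage da0 da1 da2
```

With the three example disks this prints the same command the text uses: `zpool create storage raidz da0 da1 da2`.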
<para>The <literal>storage</literal> zpool should have been
created. This may be verified by using the &man.mount.8; and
&man.df.1; commands as before. More disk devices may have
been allocated by adding them to the end of the list above.
Make a new file system in the pool, called
<literal>home</literal>, where user files will eventually be
placed:</para>
created. This may be verified by using the &man.mount.8;
and &man.df.1; commands as before. More disk devices may
have been allocated by adding them to the end of the list
above. Make a new file system in the pool, called
<literal>home</literal>, where user files will eventually
be placed:</para>
<screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
@@ -529,13 +538,14 @@ errors: No known data errors</screen>
<screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
<para>This is not a wise idea, however, as checksums take
very little storage space and are more useful when enabled. There
also appears to be no noticeable costs in having them enabled.
While enabled, it is possible to have <acronym>ZFS</acronym>
check data integrity using checksum verification. This
process is known as <quote>scrubbing.</quote> To verify the
data integrity of the <literal>storage</literal> pool, issue
the following command:</para>
very little storage space and are more useful when enabled.
There also appears to be no noticeable costs in having them
enabled. While enabled, it is possible to have
<acronym>ZFS</acronym> check data integrity using checksum
verification. This process is known as
<quote>scrubbing.</quote> To verify the data integrity of
the <literal>storage</literal> pool, issue the following
command:</para>
<screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
@@ -571,178 +581,187 @@ errors: No known data errors</screen>
</sect3>
<sect3>
<title>ZFS Quotas</title>
<title>ZFS Quotas</title>
<para>ZFS supports different types of quotas; the refquota, the
general quota, the user quota, and the group quota. This
section will explain the basics of each one, and include some
usage instructions.</para>
<para>ZFS supports different types of quotas; the
refquota, the general quota, the user quota, and
the group quota. This section will explain the
basics of each one, and include some usage
instructions.</para>
<para>Quotas limit the amount of space that a dataset and its
descendants can consume, and enforce a limit on the amount of
space used by filesystems and snapshots for the descendants.
In terms of users, quotas are useful to limit the amount of
space a particular user can use.</para>
<para>Quotas limit the amount of space that a dataset
and its descendants can consume, and enforce a limit
on the amount of space used by filesystems and
snapshots for the descendants. In terms of users,
quotas are useful to limit the amount of space a
particular user can use.</para>
<note>
<para>Quotas cannot be set on volumes, as the
<literal>volsize</literal> property acts as an implicit
quota.</para>
</note>
<note>
<para>Quotas cannot be set on volumes, as the
<literal>volsize</literal> property acts as an
implicit quota.</para>
</note>
<para>The refquota,
<literal>refquota=<replaceable>size</replaceable></literal>,
limits the amount of space a dataset can consume by enforcing
a hard limit on the space used. However, this hard limit does
not include space used by descendants, such as file systems or
snapshots.</para>
<para>The refquota,
<literal>refquota=<replaceable>size</replaceable></literal>,
limits the amount of space a dataset can consume
by enforcing a hard limit on the space used. However,
this hard limit does not include space used by descendants,
such as file systems or snapshots.</para>
<para>To enforce a general quota of 10&nbsp;GB for
<filename>storage/home/bob</filename>, use the
following:</para>
<para>To enforce a general quota of 10&nbsp;GB for
<filename>storage/home/bob</filename>, use the
following:</para>
<screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>
<screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>
<para>User quotas limit the amount of space that can be used by
the specified user. The general format is
<literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
and the user's name must be in one of the following
formats:</para>
<para>User quotas limit the amount of space that can
be used by the specified user. The general format
is
<literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
and the user's name must be in one of the following
formats:</para>
<itemizedlist>
<listitem>
<para><acronym
role="Portable Operating System Interface">POSIX</acronym>
compatible name (e.g., <replaceable>joe</replaceable>).</para>
</listitem>
<listitem>
<para><acronym
role="Portable Operating System Interface">POSIX</acronym>
numeric ID (e.g., <replaceable>789</replaceable>).</para>
</listitem>
<listitem>
<para><acronym
role="System Identifier">SID</acronym>
name (e.g.,
<replaceable>joe.bloggs@example.com</replaceable>).</para>
</listitem>
<listitem>
<para><acronym role="System Identifier">SID</acronym>
numeric ID (e.g.,
<replaceable>S-1-123-456-789</replaceable>).</para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para><acronym
role="Portable Operating System
Interface">POSIX</acronym> compatible name
(e.g., <replaceable>joe</replaceable>).</para>
</listitem>
<para>For example, to enforce a quota of 50&nbsp;GB for a user
named <replaceable>joe</replaceable>, use the
following:</para>
<listitem>
<para><acronym
role="Portable Operating System
Interface">POSIX</acronym>
numeric ID (e.g.,
<replaceable>789</replaceable>).</para>
</listitem>
<screen>&prompt.root; <userinput>zfs set userquota@joe=50G</userinput></screen>
<listitem>
<para><acronym role="System Identifier">SID</acronym> name
(e.g.,
<replaceable>joe.bloggs@example.com</replaceable>).</para>
</listitem>
<para>To remove the quota or make sure that one is not
set, instead use:</para>
<listitem>
<para><acronym role="System Identifier">SID</acronym>
numeric ID (e.g.,
<replaceable>S-1-123-456-789</replaceable>).</para>
</listitem>
</itemizedlist>
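The four name formats in the list above can be told apart mechanically. A small sketch of that classification (the `quota_name_kind` helper is illustrative; <command>zfs</command> itself does the real validation when the property is set):

```shell
#!/bin/sh
# Classify a user identifier into the four formats accepted by
# userquota@, as listed in the text. Purely illustrative checks.
quota_name_kind() {
    case $1 in
    S-[0-9]*) echo "SID numeric ID" ;;   # e.g. S-1-123-456-789
    *@*.*)    echo "SID name" ;;         # e.g. joe.bloggs@example.com
    *[!0-9]*) echo "POSIX name" ;;       # e.g. joe
    ?*)       echo "POSIX numeric ID" ;; # e.g. 789
    *)        echo "empty" ;;
    esac
}
quota_name_kind joe
quota_name_kind 789
quota_name_kind joe.bloggs@example.com
quota_name_kind S-1-123-456-789
```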
<screen>&prompt.root; <userinput>zfs set userquota@joe=none</userinput></screen>
<para>For example, to enforce a quota of 50&nbsp;GB for a user
named <replaceable>joe</replaceable>, use the
following:</para>
<para>User quota properties are not displayed by
<command>zfs get all</command>. Non-<username>root</username>
users can only see their own quotas unless they have been
granted the <literal>userquota</literal> privilege. Users
with this privilege are able to view and set everyone's
quota.</para>
<screen>&prompt.root; <userinput>zfs set userquota@joe=50G</userinput></screen>
<para>The group quota limits the amount of space that a
specified user group can consume. The general format is
<literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>.</para>
<para>To remove the quota or make sure that one is not set,
instead use:</para>
<para>To set the quota for the group
<replaceable>firstgroup</replaceable> to 50&nbsp;GB,
use:</para>
<screen>&prompt.root; <userinput>zfs set userquota@joe=none</userinput></screen>
<screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G</userinput></screen>
<para>User quota properties are not displayed by
<command>zfs get all</command>.
Non-<username>root</username> users can only see their own
quotas unless they have been granted the
<literal>userquota</literal> privilege. Users with this
privilege are able to view and set everyone's quota.</para>
<para>To remove the quota for the group
<replaceable>firstgroup</replaceable>, or make sure that one
is not set, instead use:</para>
<para>The group quota limits the amount of space that a
specified user group can consume. The general format is
<literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>.</para>
<screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none</userinput></screen>
<para>To set the quota for the group
<replaceable>firstgroup</replaceable> to 50&nbsp;GB,
use:</para>
<para>As with the user quota property,
non-<username>root</username> users can only see the quotas
associated with the user groups that they belong to, however
a <username>root</username> user or a user with the
<literal>groupquota</literal> privilege can view and set all
quotas for all groups.</para>
<screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G</userinput></screen>
<para>The <command>zfs userspace</command> subcommand displays
the amount of space consumed by each user on the specified
filesystem or snapshot, along with any specified quotas.
The <command>zfs groupspace</command> subcommand does the
same for groups. For more information about supported
options, or only displaying specific options, see
&man.zfs.1;.</para>
<para>To remove the quota for the group
<replaceable>firstgroup</replaceable>, or make sure that one
is not set, instead use:</para>
<para>To list the quota for
<filename>storage/home/bob</filename>, if you have the
correct privileges or are <username>root</username>,
use the following:</para>
<screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none</userinput></screen>
<screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
<para>As with the user quota property,
non-<username>root</username> users can only see the quotas
associated with the user groups that they belong to, however
a <username>root</username> user or a user with the
<literal>groupquota</literal> privilege can view and set all
quotas for all groups.</para>
<para>The <command>zfs userspace</command> subcommand displays
the amount of space consumed by each user on the specified
filesystem or snapshot, along with any specified quotas.
The <command>zfs groupspace</command> subcommand does the
same for groups. For more information about supported
options, or only displaying specific options, see
&man.zfs.1;.</para>
<para>To list the quota for
<filename>storage/home/bob</filename>, if you have the
correct privileges or are <username>root</username>, use the
following:</para>
<screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
</sect3>
<sect3>
<title>ZFS Reservations</title>
<title>ZFS Reservations</title>
<para>ZFS supports two types of space reservations. This
section will explain the basics of each one, and include
some usage instructions.</para>
<para>ZFS supports two types of space reservations.
This section will explain the basics of each one,
and include some usage instructions.</para>
<para>The <literal>reservation</literal> property makes it
possible to reserve a minimum amount of space guaranteed for a
dataset and its descendants. This means that if a 10&nbsp;GB
reservation is set on <filename>storage/home/bob</filename>,
if disk space gets low, at least 10&nbsp;GB of space is
reserved for this dataset. The
<literal>refreservation</literal> property sets or indicates
the minimum amount of space guaranteed to a dataset excluding
descendants, such as snapshots. As an example, if a snapshot
was taken of <filename>storage/home/bob</filename>, enough
disk space would have to exist outside of the
<literal>refreservation</literal> amount for the operation to
succeed because descendants of the main data set are not
counted by the <literal>refreservation</literal> amount and
so do not encroach on the space set.</para>
<para>The <literal>reservation</literal> property makes it
possible to reserve a minimum amount of space guaranteed
for a dataset and its descendants. This means that if a
10&nbsp;GB reservation is set on
<filename>storage/home/bob</filename>, if disk
space gets low, at least 10&nbsp;GB of space is reserved
for this dataset. The <literal>refreservation</literal>
property sets or indicates the minimum amount of space
guaranteed to a dataset excluding descendants, such as
snapshots. As an example, if a snapshot was taken of
<filename>storage/home/bob</filename>, enough disk space
would have to exist outside of the
<literal>refreservation</literal> amount for the operation
to succeed because descendants of the main data set are
not counted by the <literal>refreservation</literal>
amount and so do not encroach on the space set.</para>
<para>Reservations of any sort are useful in many situations,
for example planning and testing the suitability of disk space
allocation in a new system, or ensuring that enough space is
available on file systems for system recovery procedures and
files.</para>
<para>Reservations of any sort are useful in many
situations, for example planning and testing the
suitability of disk space allocation in a new system, or
ensuring that enough space is available on file systems
for system recovery procedures and files.</para>
<para>The general format of the <literal>reservation</literal>
property is
<literal>reservation=<replaceable>size</replaceable></literal>,
so to set a reservation of 10&nbsp;GB on
<filename>storage/home/bob</filename>the below command is
used:</para>
<para>The general format of the <literal>reservation</literal>
property is
<literal>reservation=<replaceable>size</replaceable></literal>,
so to set a reservation of 10&nbsp;GB on
<filename>storage/home/bob</filename>the below command is
used:</para>
<screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>
<screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>
<para>To make sure that no reservation is set, or to remove a
reservation, instead use:</para>
<para>To make sure that no reservation is set, or to remove a
reservation, instead use:</para>
<screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>
<screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>
<para>The same principle can be applied to the
<literal>refreservation</literal> property for setting a
refreservation, with the general format
<literal>refreservation=<replaceable>size</replaceable></literal>.</para>
<para>The same principle can be applied to the
<literal>refreservation</literal> property for setting a
refreservation, with the general format
<literal>refreservation=<replaceable>size</replaceable></literal>.</para>
<para>To check if any reservations or refreservations exist on
<filename>storage/home/bob</filename>, execute one of the
following commands:</para>
<para>To check if any reservations or refreservations exist on
<filename>storage/home/bob</filename>, execute one of the
following commands:</para>
<screen>&prompt.root; <userinput>zfs get reservation storage/home/bob</userinput>
<screen>&prompt.root; <userinput>zfs get reservation storage/home/bob</userinput>
&prompt.root; <userinput>zfs get refreservation storage/home/bob</userinput></screen>
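The `zfs get` commands above print a NAME/PROPERTY/VALUE/SOURCE table. A sketch of pulling just the value out with <command>awk</command> (the sample output below is hypothetical, modeled on the 10&nbsp;GB example in the text; on a live system, pipe the real `zfs get reservation storage/home/bob` through the same filter):

```shell
#!/bin/sh
# Extract the VALUE column from zfs get output. The sample is a
# stand-in for real output, which needs a live pool to produce.
sample='NAME              PROPERTY     VALUE  SOURCE
storage/home/bob  reservation  10G    local'
value=$(echo "$sample" | awk 'NR==2 { print $3 }')
echo "$value"
```

Adding `-H -o value` to `zfs get` achieves the same thing without the filter on systems whose `zfs` supports those flags.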
</sect3>
</sect2>
@@ -760,12 +779,13 @@ errors: No known data errors</screen>
<para>The &man.ext2fs.5; file system kernel implementation was
written by Godmar Back, and the driver first appeared in
&os; 2.2. In &os; 8 and earlier, the code is licensed under
the <acronym>GNU</acronym> Public License, however under &os; 9,
the code has been rewritten and it is now licensed under the
<acronym>BSD</acronym> license.</para>
the <acronym>GNU</acronym> Public License, however under &os;
9, the code has been rewritten and it is now licensed under
the <acronym>BSD</acronym> license.</para>
<para>The &man.ext2fs.5; driver will allow the &os; kernel
to both read and write to <acronym>ext2</acronym> file systems.</para>
to both read and write to <acronym>ext2</acronym> file
systems.</para>
<para>First, load the kernel loadable module:</para>
@@ -776,6 +796,7 @@ errors: No known data errors</screen>
<screen>&prompt.root; <userinput>mount -t ext2fs /dev/ad1s1 /mnt</userinput></screen>
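The two ext2fs steps (load the module, then mount) can be sketched as a dry-run script. Both real commands need root and real hardware, so this version only prints what it would run; the `ext2fs` module name and the <filename>/dev/ad1s1</filename>/<filename>/mnt</filename> values follow the text, and `DRYRUN` is an illustrative convention, not part of the handbook:

```shell
#!/bin/sh
# Dry-run sketch of mounting an ext2 file system: set DRYRUN to empty
# on a real system to actually execute the commands.
DRYRUN=echo
cmd_load=$($DRYRUN kldload ext2fs)
cmd_mount=$($DRYRUN mount -t ext2fs /dev/ad1s1 /mnt)
printf '%s\n%s\n' "$cmd_load" "$cmd_mount"
```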
</sect2>
<sect2>
<title>XFS</title>
@@ -815,6 +836,7 @@ errors: No known data errors</screen>
metadata. This can be used to quickly create a read-only
filesystem which can be tested on &os;.</para>
</sect2>
<sect2>
<title>ReiserFS</title>
@@ -826,7 +848,8 @@ errors: No known data errors</screen>
access ReiserFS file systems and read their contents, but not
write to them, currently.</para>
<para>First, the kernel-loadable module needs to be loaded:</para>
<para>First, the kernel-loadable module needs to be
loaded:</para>
<screen>&prompt.root; <userinput>kldload reiserfs</userinput></screen>