Whitespace-only fixes for the filesystems chapter.  Translators, please ignore.

Patch from dru on freebsd-doc, plus additional indentation fixes for ZFS
section and a few other miscellaneous whitespace problems.

Submitted by:	Dru Lavigne <dru.lavigne@att.net>
Warren Block 2013-01-18 23:26:13 +00:00
parent 44ef37faf0
commit 74b4476eda
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=40681


@@ -47,17 +47,18 @@
      (<acronym>ZFS</acronym>).</para>

    <para>There are different levels of support for the various file
      systems in &os;.  Some will require a kernel module to be
      loaded, others may require a toolset to be installed.  This
      chapter is designed to help users of &os; access other file
      systems on their systems, starting with the &sun; Z file
      system.</para>

    <para>After reading this chapter, you will know:</para>

    <itemizedlist>
      <listitem>
        <para>The difference between native and supported file
          systems.</para>
      </listitem>

      <listitem>
@@ -113,10 +114,11 @@
      <title>ZFS Tuning</title>

      <para>The <acronym>ZFS</acronym> subsystem utilizes much of
        the system resources, so some tuning may be required to
        provide maximum efficiency during every-day use.  As an
        experimental feature in &os; this may change in the near
        future; however, at this time, the following steps are
        recommended.</para>

      <sect3>
        <title>Memory</title>
@@ -127,9 +129,10 @@
          several other tuning mechanisms in place.</para>

        <para>Some people have had luck using fewer than one gigabyte
          of memory, but with such a limited amount of physical
          memory, when the system is under heavy load, it is very
          plausible that &os; will panic due to memory
          exhaustion.</para>
      </sect3>

      <sect3>
@@ -138,11 +141,12 @@
        <para>It is recommended that unused drivers and options
          be removed from the kernel configuration file.  Since most
          devices are available as modules, they may be loaded
          using the <filename>/boot/loader.conf</filename>
          file.</para>

        <para>Users of the &i386; architecture should add the
          following option to their kernel configuration file,
          rebuild their kernel, and reboot:</para>

        <programlisting>options KVA_PAGES=512</programlisting>
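The rebuild-and-reboot step mentioned above is not spelled out in this hunk; a minimal sketch of the usual &os; procedure, with MYKERNEL standing in for whatever custom kernel configuration file the option was added to, would be:

    # cd /usr/src
    # make buildkernel KERNCONF=MYKERNEL      # MYKERNEL is a placeholder config name
    # make installkernel KERNCONF=MYKERNEL
    # shutdown -r now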
@@ -158,11 +162,11 @@
      </sect3>

      <sect3>
        <title>Loader Tunables</title>

        <para>The <devicename>kmem</devicename> address space should
          be increased on all &os; architectures.  On the test system
          with one gigabyte of physical memory, success was achieved
          with the following options which should be placed in the
          <filename>/boot/loader.conf</filename> file and the system
          restarted:</para>

        <programlisting>vm.kmem_size="330M"
@@ -170,9 +174,9 @@ vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"</programlisting>

        <para>For a more detailed list of recommendations for
          ZFS-related tuning, see <ulink
            url="http://wiki.freebsd.org/ZFSTuningGuide"></ulink>.</para>
      </sect3>
    </sect2>
@@ -184,23 +188,25 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
        initialization.  To set it, issue the following
        commands:</para>

      <screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' &gt;&gt; /etc/rc.conf</userinput>
&prompt.root; <userinput>/etc/rc.d/zfs start</userinput></screen>

      <para>The remainder of this document assumes three
        <acronym>SCSI</acronym> disks are available, and their
        device names are
        <devicename><replaceable>da0</replaceable></devicename>,
        <devicename><replaceable>da1</replaceable></devicename>
        and <devicename><replaceable>da2</replaceable></devicename>.
        Users of <acronym>IDE</acronym> hardware may use the
        <devicename><replaceable>ad</replaceable></devicename>
        devices in place of <acronym>SCSI</acronym> hardware.</para>

      <sect3>
        <title>Single Disk Pool</title>

        <para>To create a simple, non-redundant <acronym>ZFS</acronym>
          pool using a single disk device, use the
          <command>zpool</command> command:</para>

        <screen>&prompt.root; <userinput>zpool create example /dev/da0</userinput></screen>
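A simple way to confirm that the new pool exists, using only standard commands and the pool name from this hunk (output will vary by system), would be:

    # zpool list example
    # df /example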
@@ -239,8 +245,8 @@ drwxr-xr-x 21 root wheel 512 Aug 29 23:12 ..
        <para>The <literal>example/compressed</literal> is now a
          <acronym>ZFS</acronym> compressed file system.  Try copying
          some large files to it by copying them to <filename
            class="directory">/example/compressed</filename>.</para>

        <para>The compression may now be disabled with:</para>
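The command itself falls outside this hunk; turning the property back off would use the same zfs set syntax shown elsewhere in the chapter:

    # zfs set compression=off example/compressed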
@@ -307,8 +313,8 @@ example/data 17547008 0 17547008 0% /example/data</screen>
          amount of available space.  This is the reason for using
          <command>df</command> through these examples, to show
          that the file systems are using only the amount of space
          they need and will all draw from the same pool.  The
          <acronym>ZFS</acronym> file system does away with concepts
          such as volumes and partitions, and allows for several file
          systems to occupy the same pool.  Destroy the file systems,
          and then destroy the pool as they are no longer
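The destroy commands this paragraph leads into are cut off by the hunk boundary; with the datasets used earlier in the section they would look roughly like:

    # zfs destroy example/compressed
    # zfs destroy example/data
    # zpool destroy example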
@@ -332,28 +338,31 @@ example/data 17547008 0 17547008 0% /example/data</screen>
        <para>As previously noted, this section will assume that
          three <acronym>SCSI</acronym> disks exist as devices
          <devicename>da0</devicename>, <devicename>da1</devicename>
          and <devicename>da2</devicename> (or
          <devicename>ad0</devicename> and beyond in case IDE disks
          are being used).  To create a <acronym>RAID</acronym>-Z
          pool, issue the following command:</para>

        <screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>

        <note>
          <para>&sun; recommends that the amount of devices used
            in a <acronym>RAID</acronym>-Z configuration is between
            three and nine.  If your needs call for a single pool to
            consist of 10 disks or more, consider breaking it up into
            smaller <acronym>RAID</acronym>-Z groups.  If you only
            have two disks and still require redundancy, consider
            using a <acronym>ZFS</acronym> mirror instead.  See the
            &man.zpool.8; manual page for more details.</para>
        </note>

        <para>The <literal>storage</literal> zpool should have been
          created.  This may be verified by using the &man.mount.8;
          and &man.df.1; commands as before.  More disk devices may
          have been allocated by adding them to the end of the list
          above.  Make a new file system in the pool, called
          <literal>home</literal>, where user files will eventually
          be placed:</para>

        <screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
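A minimal verification along the lines the paragraph suggests, assuming the default mountpoints, would be:

    # mount | grep storage
    # df -h /storage/home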
@@ -529,13 +538,14 @@ errors: No known data errors</screen>
        <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>

        <para>This is not a wise idea, however, as checksums take
          very little storage space and are more useful when enabled.
          There also appears to be no noticeable costs in having them
          enabled.  While enabled, it is possible to have
          <acronym>ZFS</acronym> check data integrity using checksum
          verification.  This process is known as
          <quote>scrubbing.</quote>  To verify the data integrity of
          the <literal>storage</literal> pool, issue the following
          command:</para>

        <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
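The progress and result of a scrub can be checked with zpool status, which is likely where output such as the No known data errors line seen in the hunk context comes from:

    # zpool status storage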
@@ -571,178 +581,187 @@ errors: No known data errors</screen>
      </sect3>

      <sect3>
        <title>ZFS Quotas</title>

        <para>ZFS supports different types of quotas: the refquota,
          the general quota, the user quota, and the group quota.
          This section will explain the basics of each one, and
          include some usage instructions.</para>

        <para>Quotas limit the amount of space that a dataset and its
          descendants can consume, and enforce a limit on the amount
          of space used by filesystems and snapshots for the
          descendants.  In terms of users, quotas are useful to limit
          the amount of space a particular user can use.</para>

        <note>
          <para>Quotas cannot be set on volumes, as the
            <literal>volsize</literal> property acts as an implicit
            quota.</para>
        </note>

        <para>The refquota,
          <literal>refquota=<replaceable>size</replaceable></literal>,
          limits the amount of space a dataset can consume by
          enforcing a hard limit on the space used.  However, this
          hard limit does not include space used by descendants, such
          as file systems or snapshots.</para>

        <para>To enforce a general quota of 10&nbsp;GB for
          <filename>storage/home/bob</filename>, use the
          following:</para>

        <screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>

        <para>User quotas limit the amount of space that can be used
          by the specified user.  The general format is
          <literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
          and the user's name must be in one of the following
          formats:</para>

        <itemizedlist>
          <listitem>
            <para><acronym role="Portable Operating System
              Interface">POSIX</acronym> compatible name
              (e.g., <replaceable>joe</replaceable>).</para>
          </listitem>

          <listitem>
            <para><acronym role="Portable Operating System
              Interface">POSIX</acronym> numeric ID
              (e.g., <replaceable>789</replaceable>).</para>
          </listitem>

          <listitem>
            <para><acronym role="System Identifier">SID</acronym> name
              (e.g.,
              <replaceable>joe.bloggs@example.com</replaceable>).</para>
          </listitem>

          <listitem>
            <para><acronym role="System Identifier">SID</acronym>
              numeric ID (e.g.,
              <replaceable>S-1-123-456-789</replaceable>).</para>
          </listitem>
        </itemizedlist>

        <para>For example, to enforce a quota of 50&nbsp;GB for a user
          named <replaceable>joe</replaceable>, use the
          following:</para>

        <screen>&prompt.root; <userinput>zfs set userquota@joe=50G</userinput></screen>

        <para>To remove the quota or make sure that one is not set,
          instead use:</para>

        <screen>&prompt.root; <userinput>zfs set userquota@joe=none</userinput></screen>

        <para>User quota properties are not displayed by
          <command>zfs get all</command>.
          Non-<username>root</username> users can only see their own
          quotas unless they have been granted the
          <literal>userquota</literal> privilege.  Users with this
          privilege are able to view and set everyone's quota.</para>

        <para>The group quota limits the amount of space that a
          specified user group can consume.  The general format is
          <literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>.</para>

        <para>To set the quota for the group
          <replaceable>firstgroup</replaceable> to 50&nbsp;GB,
          use:</para>

        <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G</userinput></screen>

        <para>To remove the quota for the group
          <replaceable>firstgroup</replaceable>, or make sure that one
          is not set, instead use:</para>

        <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none</userinput></screen>

        <para>As with the user quota property,
          non-<username>root</username> users can only see the quotas
          associated with the user groups that they belong to, however
          a <username>root</username> user or a user with the
          <literal>groupquota</literal> privilege can view and set all
          quotas for all groups.</para>

        <para>The <command>zfs userspace</command> subcommand displays
          the amount of space consumed by each user on the specified
          filesystem or snapshot, along with any specified quotas.
          The <command>zfs groupspace</command> subcommand does the
          same for groups.  For more information about supported
          options, or only displaying specific options, see
          &man.zfs.1;.</para>

        <para>To list the quota for
          <filename>storage/home/bob</filename>, if you have the
          correct privileges or are <username>root</username>, use the
          following:</para>

        <screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
      </sect3>
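A hedged sketch of the zfs userspace and zfs groupspace subcommands described above, run against an example filesystem; note that zfs set userquota@... normally also expects a dataset argument (for instance storage/home/bob), even though the hunk's one-line example omits it:

    # zfs userspace storage/home
    # zfs groupspace storage/home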
      <sect3>
        <title>ZFS Reservations</title>

        <para>ZFS supports two types of space reservations.  This
          section will explain the basics of each one, and include
          some usage instructions.</para>

        <para>The <literal>reservation</literal> property makes it
          possible to reserve a minimum amount of space guaranteed
          for a dataset and its descendants.  This means that if a
          10&nbsp;GB reservation is set on
          <filename>storage/home/bob</filename>, if disk space gets
          low, at least 10&nbsp;GB of space is reserved for this
          dataset.  The <literal>refreservation</literal> property
          sets or indicates the minimum amount of space guaranteed to
          a dataset excluding descendants, such as snapshots.  As an
          example, if a snapshot was taken of
          <filename>storage/home/bob</filename>, enough disk space
          would have to exist outside of the
          <literal>refreservation</literal> amount for the operation
          to succeed because descendants of the main data set are not
          counted by the <literal>refreservation</literal> amount and
          so do not encroach on the space set.</para>

        <para>Reservations of any sort are useful in many situations,
          for example planning and testing the suitability of disk
          space allocation in a new system, or ensuring that enough
          space is available on file systems for system recovery
          procedures and files.</para>

        <para>The general format of the <literal>reservation</literal>
          property is
          <literal>reservation=<replaceable>size</replaceable></literal>,
          so to set a reservation of 10&nbsp;GB on
          <filename>storage/home/bob</filename>, the below command is
          used:</para>

        <screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>

        <para>To make sure that no reservation is set, or to remove a
          reservation, instead use:</para>

        <screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>

        <para>The same principle can be applied to the
          <literal>refreservation</literal> property for setting a
          refreservation, with the general format
          <literal>refreservation=<replaceable>size</replaceable></literal>.</para>

        <para>To check if any reservations or refreservations exist on
          <filename>storage/home/bob</filename>, execute one of the
          following commands:</para>

        <screen>&prompt.root; <userinput>zfs get reservation storage/home/bob</userinput>
&prompt.root; <userinput>zfs get refreservation storage/home/bob</userinput></screen>
      </sect3>
    </sect2>
@@ -760,12 +779,13 @@ errors: No known data errors</screen>
      <para>The &man.ext2fs.5; file system kernel implementation was
        written by Godmar Back, and the driver first appeared in
        &os; 2.2.  In &os; 8 and earlier, the code is licensed under
        the <acronym>GNU</acronym> Public License, however under &os;
        9, the code has been rewritten and it is now licensed under
        the <acronym>BSD</acronym> license.</para>

      <para>The &man.ext2fs.5; driver will allow the &os; kernel
        to both read and write to <acronym>ext2</acronym> file
        systems.</para>

      <para>First, load the kernel loadable module:</para>
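The load command referenced here sits outside the hunk; for &man.ext2fs.5; it is normally:

    # kldload ext2fs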
@@ -776,6 +796,7 @@ errors: No known data errors</screen>
      <screen>&prompt.root; <userinput>mount -t ext2fs /dev/ad1s1 /mnt</userinput></screen>
    </sect2>

    <sect2>
      <title>XFS</title>
@@ -815,6 +836,7 @@ errors: No known data errors</screen>
        metadata.  This can be used to quickly create a read-only
        filesystem which can be tested on &os;.</para>
    </sect2>
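The hunk only shows the tail of the XFS discussion, so the following is an assumption rather than the chapter's own example: testing such a filesystem would mean loading the driver and mounting the device, with the device name purely illustrative:

    # kldload xfs                  # assumes the xfs.ko module is available
    # mount -t xfs /dev/ad1s1 /mnt # /dev/ad1s1 is an example device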
    <sect2>
      <title>ReiserFS</title>
@@ -826,7 +848,8 @@ errors: No known data errors</screen>
        access ReiserFS file systems and read their contents, but not
        write to them, currently.</para>

      <para>First, the kernel-loadable module needs to be
        loaded:</para>

      <screen>&prompt.root; <userinput>kldload reiserfs</userinput></screen>
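Mounting then follows the same pattern as the other drivers in this chapter; the device name below is only an example:

    # mount -t reiserfs /dev/ad1s1 /mnt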