This patch addresses the following:

- removes you

- fixes xref

- modernizes the intro

- modernizes the ZFS RAM section

- updates the date in one sample output

Approved by:  gjb (mentor)
Dru Lavigne 2013-02-11 14:58:34 +00:00
parent 92487e8875
commit 6aea3fc76d
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=40947


@@ -27,32 +27,30 @@
    </indexterm>

    <para>File systems are an integral part of any operating system.
      They allow users to upload and store files, provide access
      to data, and make hard drives useful.  Operating systems
      differ in their native file system.  Traditionally, the
      native &os; file system has been the Unix File System
      <acronym>UFS</acronym> which has been recently modernized as
      <acronym>UFS2</acronym>.  Since &os;&nbsp;7.0, the Z File
      System <acronym>ZFS</acronym> is also available as a native
      file system.</para>

    <para>In addition to its native file systems, &os; supports a
      multitude of other file systems so that data from other
      operating systems can be accessed locally, such as data stored
      on locally attached <acronym>USB</acronym> storage devices,
      flash drives, and hard disks.  This includes support for the
      &linux; Extended File System (<acronym>EXT</acronym>) and the
      &microsoft; New Technology File System
      (<acronym>NTFS</acronym>).</para>

    <para>There are different levels of &os; support for the various
      file systems.  Some require a kernel module to be loaded and
      others may require a toolset to be installed.  Some non-native
      file system support is full read-write while others are
      read-only.</para>

    <para>After reading this chapter, you will know:</para>

    <itemizedlist>
@@ -62,11 +60,11 @@
      </listitem>

      <listitem>
        <para>Which file systems are supported by &os;.</para>
      </listitem>

      <listitem>
        <para>How to enable, configure, access, and make use of
          non-native file systems.</para>
      </listitem>
    </itemizedlist>

@@ -75,24 +73,25 @@
    <itemizedlist>
      <listitem>
        <para>Understand &unix; and <link
          linkend="basics">&os; basics</link>.</para>
      </listitem>

      <listitem>
        <para>Be familiar with the basics of <link
          linkend="kernelconfig">kernel configuration and
          compilation</link>.</para>
      </listitem>

      <listitem>
        <para>Feel comfortable <link linkend="ports">installing
          software</link> in &os;.</para>
      </listitem>

      <listitem>
        <para>Have some familiarity with <link
          linkend="disks">disks</link>, storage, and device names in
          &os;.</para>
      </listitem>
    </itemizedlist>
  </sect1>
@@ -100,73 +99,67 @@
  <sect1 id="filesystems-zfs">
    <title>The Z File System (ZFS)</title>

    <para>The Z&nbsp;file system, originally developed by &sun;,
      is designed to use a pooled storage method in that space is
      only used as it is needed for data storage.  It is also
      designed for maximum data integrity, supporting data
      snapshots, multiple copies, and data checksums.  It uses a
      software data replication model known as
      <acronym>RAID</acronym>-Z.  <acronym>RAID</acronym>-Z provides
      redundancy similar to hardware <acronym>RAID</acronym>, but is
      designed to prevent data write corruption and to overcome some
      of the limitations of hardware <acronym>RAID</acronym>.</para>

    <sect2>
      <title>ZFS Tuning</title>

      <para>Some of the features provided by <acronym>ZFS</acronym>
        are RAM-intensive, so some tuning may be required to provide
        maximum efficiency on systems with limited RAM.</para>

      <sect3>
        <title>Memory</title>

        <para>At a bare minimum, the total system memory should be at
          least one gigabyte.  The amount of recommended RAM depends
          upon the size of the pool and the ZFS features which are
          used.  A general rule of thumb is 1&nbsp;GB of RAM for
          every 1&nbsp;TB of storage.  If the deduplication feature
          is used, a general rule of thumb is 5&nbsp;GB of RAM per
          1&nbsp;TB of storage to be deduplicated.  While some users
          successfully use ZFS with less RAM, it is possible that
          when the system is under heavy load, it may panic due to
          memory exhaustion.  Further tuning may be required for
          systems with less than the recommended amount of
          RAM.</para>
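
        <para>As a quick check against this rule of thumb, the amount
          of physical memory installed, in bytes, can be displayed
          with &man.sysctl.8;.  For example, by the guideline above,
          a hypothetical 4&nbsp;TB pool would call for roughly
          4&nbsp;GB of RAM, or about 20&nbsp;GB if all of it were to
          be deduplicated:</para>

        <screen>&prompt.root; <userinput>sysctl hw.physmem</userinput></screen>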
      </sect3>

      <sect3>
        <title>Kernel Configuration</title>

        <para>Due to the RAM limitations of the &i386; platform,
          users of ZFS on the &i386; architecture should add the
          following option to a custom kernel configuration file,
          rebuild the kernel, and reboot:</para>

        <programlisting>options KVA_PAGES=512</programlisting>

        <para>This option expands the kernel address space, allowing
          the <varname>vm.kvm_size</varname> tunable to be pushed
          beyond the currently imposed limit of 1&nbsp;GB, or the
          limit of 2&nbsp;GB for <acronym>PAE</acronym>.  To find the
          most suitable value for this option, divide the desired
          address space in megabytes by four.  In this example, it
          is <literal>512</literal> for 2&nbsp;GB.</para>
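
        <para>Written out, the arithmetic for this example is as
          follows; the desired address space of 2&nbsp;GB is an
          assumption and should be substituted with the value for
          the system at hand:</para>

        <programlisting># 2 GB = 2048 MB; 2048 / 4 = 512
options KVA_PAGES=512</programlisting>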
      </sect3>

      <sect3>
        <title>Loader Tunables</title>

        <para>The <devicename>kmem</devicename> address space can be
          increased on all &os; architectures.  On a test system with
          one gigabyte of physical memory, success was achieved with
          the following options added to
          <filename>/boot/loader.conf</filename>, and the system
          restarted:</para>

        <programlisting>vm.kmem_size="330M"
@@ -191,22 +184,21 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
      <screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' &gt;&gt; /etc/rc.conf</userinput>
&prompt.root; <userinput>service zfs start</userinput></screen>

      <para>The examples in this section assume three
        <acronym>SCSI</acronym> disks with the device names
        <devicename><replaceable>da0</replaceable></devicename>,
        <devicename><replaceable>da1</replaceable></devicename>,
        and <devicename><replaceable>da2</replaceable></devicename>.
        Users of <acronym>IDE</acronym> hardware should instead use
        <devicename><replaceable>ad</replaceable></devicename>
        device names.</para>

      <sect3>
        <title>Single Disk Pool</title>

        <para>To create a simple, non-redundant
          <acronym>ZFS</acronym> pool using a single disk device,
          use <command>zpool</command>:</para>

        <screen>&prompt.root; <userinput>zpool create example /dev/da0</userinput></screen>
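
        <para>The new pool can also be inspected with
          <command>zpool list</command>; the sizes it reports will
          vary with the disk used:</para>

        <screen>&prompt.root; <userinput>zpool list example</userinput></screen>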
@@ -220,12 +212,11 @@ devfs 1 1 0 100% /dev
/dev/ad0s1d  54098308 1032846 48737598     2%    /usr
example      17547136       0 17547136     0%    /example</screen>

        <para>This output shows that the <literal>example</literal>
          pool has been created and <emphasis>mounted</emphasis>.  It
          is now accessible as a file system.  Files may be created
          on it and users can browse it, as seen in the following
          example:</para>

        <screen>&prompt.root; <userinput>cd /example</userinput>
&prompt.root; <userinput>ls</userinput>
@@ -236,25 +227,24 @@ drwxr-xr-x 2 root wheel 3 Aug 29 23:15 .
drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..
-rw-r--r--   1 root  wheel    0 Aug 29 23:15 testfile</screen>

        <para>However, this pool is not taking advantage of any
          <acronym>ZFS</acronym> features.  To create a dataset on
          this pool with compression enabled:</para>

        <screen>&prompt.root; <userinput>zfs create example/compressed</userinput>
&prompt.root; <userinput>zfs set compression=gzip example/compressed</userinput></screen>

        <para>The <literal>example/compressed</literal> dataset is
          now a <acronym>ZFS</acronym> compressed file system.  Try
          copying some large files to <filename
            class="directory">/example/compressed</filename>.</para>
        <para>Compression can be disabled with:</para>

        <screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>

        <para>To unmount a file system, issue the following command
          and then verify by using <command>df</command>:</para>

        <screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
&prompt.root; <userinput>df</userinput>
@@ -264,7 +254,7 @@ devfs 1 1 0 100% /dev
/dev/ad0s1d  54098308 1032864 48737580     2%    /usr
example      17547008       0 17547008     0%    /example</screen>

        <para>To re-mount the file system to make it accessible
          again, and verify with <command>df</command>:</para>

        <screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
@@ -287,18 +277,19 @@ example on /example (zfs, local)
example/data on /example/data (zfs, local)
example/compressed on /example/compressed (zfs, local)</screen>

        <para><acronym>ZFS</acronym> datasets, after creation, may be
          used like any other file system.  However, many other
          features are available which can be set on a per-dataset
          basis.  In the following example, a new file system,
          <literal>data</literal>, is created.  Important files will
          be stored here, so the file system is set to keep two
          copies of each data block:</para>

        <screen>&prompt.root; <userinput>zfs create example/data</userinput>
&prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
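
        <para>As an optional check, the new setting can be read back
          with <command>zfs get</command>:</para>

        <screen>&prompt.root; <userinput>zfs get copies example/data</userinput></screen>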
        <para>It is now possible to see the data and space
          utilization by issuing <command>df</command>:</para>

        <screen>&prompt.root; <userinput>df</userinput>
Filesystem   1K-blocks    Used    Avail Capacity  Mounted on
@@ -311,64 +302,56 @@ example/data 17547008 0 17547008 0% /example/data</screen>
        <para>Notice that each file system on the pool has the same
          amount of available space.  This is the reason for using
          <command>df</command> in these examples, to show that the
          file systems use only the amount of space they need and all
          draw from the same pool.  The <acronym>ZFS</acronym> file
          system does away with concepts such as volumes and
          partitions, and allows for several file systems to occupy
          the same pool.</para>

        <para>To destroy the file systems and then destroy the pool
          as they are no longer needed:</para>

        <screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
&prompt.root; <userinput>zfs destroy example/data</userinput>
&prompt.root; <userinput>zpool destroy example</userinput></screen>
      </sect3>

      <sect3>
        <title><acronym>ZFS</acronym> RAID-Z</title>

        <para>There is no way to prevent a disk from failing.  One
          method of avoiding data loss due to a failed hard disk is
          to implement <acronym>RAID</acronym>.
          <acronym>ZFS</acronym> supports this feature in its pool
          design.</para>

        <para>To create a <acronym>RAID</acronym>-Z pool, issue the
          following command and specify the disks to add to the
          pool:</para>

        <screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>

        <note>
          <para>&sun; recommends that the number of devices used in
            a <acronym>RAID</acronym>-Z configuration be between
            three and nine.  For environments requiring a single pool
            consisting of 10 disks or more, consider breaking it up
            into smaller <acronym>RAID</acronym>-Z groups.  If only
            two disks are available and redundancy is a requirement,
            consider using a <acronym>ZFS</acronym> mirror.  Refer to
            &man.zpool.8; for more details.</para>
        </note>

        <para>This command creates the <literal>storage</literal>
          zpool.  This may be verified using &man.mount.8; and
          &man.df.1;.  The following command makes a new file system
          called <literal>home</literal> in the pool:</para>

        <screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
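
        <para>The new dataset can be confirmed with
          <command>zfs list</command>, which shows every dataset in
          every imported pool:</para>

        <screen>&prompt.root; <userinput>zfs list</userinput></screen>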
        <para>It is now possible to enable compression and keep extra
          copies of directories and files using the following
          commands:</para>

        <screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
@@ -384,9 +367,9 @@ example/data 17547008 0 17547008 0% /example/data</screen>
&prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>

        <para>Users should now have their data stored on the freshly
          created <filename
            class="directory">/storage/home</filename>.  Test by
          adding a new user and logging in as that user.</para>

        <para>Try creating a snapshot which may be rolled back
          later:</para>

@@ -405,28 +388,27 @@ example/data 17547008 0 17547008 0% /example/data</screen>
          <command>ls</command> in the file system's
          <filename class="directory">.zfs/snapshot</filename>
          directory.  For example, to see the previously taken
          snapshot:</para>

        <screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>

        <para>It is possible to write a script to perform regular
          snapshots on user data.  However, over time, snapshots
          may consume a great deal of disk space.  The previous
          snapshot may be removed using the following
          command:</para>

        <screen>&prompt.root; <userinput>zfs destroy storage/home@08-30-08</userinput></screen>
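
        <para>To confirm which snapshots remain after a removal,
          <command>zfs list</command> can be limited to listing
          snapshots:</para>

        <screen>&prompt.root; <userinput>zfs list -t snapshot</userinput></screen>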
        <para>After testing, <filename
            class="directory">/storage/home</filename> can be made
          the real <filename class="directory">/home</filename>
          using this command:</para>

        <screen>&prompt.root; <userinput>zfs set mountpoint=/home storage/home</userinput></screen>

        <para>Run <command>df</command> and
          <command>mount</command> to confirm that the system now
          treats the file system as the real
          <filename class="directory">/home</filename>:</para>

        <screen>&prompt.root; <userinput>mount</userinput>
@@ -455,8 +437,7 @@ storage/home 26320512 0 26320512 0% /home</screen>
        <title>Recovering <acronym>RAID</acronym>-Z</title>

        <para>Every software <acronym>RAID</acronym> has a method of
          monitoring its <literal>state</literal>.  The status of
          <acronym>RAID</acronym>-Z devices may be viewed with the
          following command:</para>

@@ -468,7 +449,7 @@ storage/home 26320512 0 26320512 0% /home</screen>
        <screen>all pools are healthy</screen>

        <para>If there is an issue, such as a disk having gone
          offline, the pool state will look similar to:</para>

        <screen>  pool: storage
 state: DEGRADED
@@ -489,14 +470,13 @@ config:
errors: No known data errors</screen>

        <para>This indicates that the device was previously taken
          offline by the administrator using the following
          command:</para>

        <screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>

        <para>It is now possible to replace
          <devicename>da1</devicename> after the system has been
          powered down.  When the system is back online, the
          following command may be issued to replace the
          disk:</para>
@@ -529,37 +509,34 @@ errors: No known data errors</screen>
      <sect3>
        <title>Data Verification</title>

        <para><acronym>ZFS</acronym> uses
          <literal>checksums</literal> to verify the integrity of
          stored data.  These are enabled automatically upon creation
          of file systems and may be disabled using the following
          command:</para>

        <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>

        <para>Doing so is <emphasis>not</emphasis> recommended as
          checksums take very little storage space and are used to
          check data integrity in a process known as
          <quote>scrubbing</quote>.  To verify the data integrity of
          the <literal>storage</literal> pool, issue this
          command:</para>

        <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>

        <para>This process may take considerable time depending on
          the amount of data stored.  It is also very
          <acronym>I/O</acronym> intensive, so much so that only one
          scrub may be run at any given time.  After the scrub has
          completed, the status is updated and may be viewed by
          issuing a status request:</para>

        <screen>&prompt.root; <userinput>zpool status storage</userinput>
  pool: storage
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
config:

        NAME        STATE     READ WRITE CKSUM
@@ -571,43 +548,39 @@ config:
errors: No known data errors</screen>

        <para>The completion time is displayed.  Regular scrubbing
          helps to ensure data integrity over a long period of
          time.</para>

        <para>Refer to &man.zfs.8; and &man.zpool.8; for other
          <acronym>ZFS</acronym> options.</para>
      </sect3>

      <sect3>
        <title>ZFS Quotas</title>

        <para>ZFS supports different types of quotas: the refquota,
          the general quota, the user quota, and the group quota.
          This section explains the basics of each type and includes
          some usage instructions.</para>

        <para>Quotas limit the amount of space that a dataset and its
          descendants can consume, and enforce a limit on the amount
          of space used by filesystems and snapshots for the
          descendants.  Quotas are useful to limit the amount of
          space a particular user can use.</para>

        <note>
          <para>Quotas cannot be set on volumes, as the
            <literal>volsize</literal> property acts as an implicit
            quota.</para>
        </note>

        <para>The
          <literal>refquota=<replaceable>size</replaceable></literal>
          limits the amount of space a dataset can consume by
          enforcing a hard limit on the space used.  However, this
          hard limit does not include space used by descendants, such
          as file systems or snapshots.</para>

        <para>To enforce a general quota of 10&nbsp;GB for
          <filename>storage/home/bob</filename>, use the
@@ -615,9 +588,8 @@ errors: No known data errors</screen>
        <screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>
        <para>User quotas limit the amount of space that can be used
          by the specified user.  The general format is
          <literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
          and the user's name must be in one of the following
          formats:</para>

@@ -626,28 +598,28 @@ errors: No known data errors</screen>
          <listitem>
            <para><acronym
              role="Portable Operating System
              Interface">POSIX</acronym> compatible name such as
              <replaceable>joe</replaceable>.</para>
          </listitem>

          <listitem>
            <para><acronym
              role="Portable Operating System
              Interface">POSIX</acronym> numeric ID such as
              <replaceable>789</replaceable>.</para>
          </listitem>

          <listitem>
            <para><acronym role="System Identifier">SID</acronym>
              name such as
              <replaceable>joe.bloggs@example.com</replaceable>.</para>
          </listitem>

          <listitem>
            <para><acronym role="System Identifier">SID</acronym>
              numeric ID such as
              <replaceable>S-1-123-456-789</replaceable>.</para>
          </listitem>
        </itemizedlist>
@@ -670,7 +642,7 @@ errors: No known data errors</screen>
          privilege are able to view and set everyone's quota.</para>

        <para>The group quota limits the amount of space that a
          specified group can consume.  The general format is
          <literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>.</para>

        <para>To set the quota for the group
@@ -680,30 +652,29 @@ errors: No known data errors</screen>
        <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G storage/home/bob</userinput></screen>

        <para>To remove the quota for the group
          <replaceable>firstgroup</replaceable>, or to make sure that
          one is not set, instead use:</para>

        <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none storage/home/bob</userinput></screen>

        <para>As with the user quota property,
          non-<username>root</username> users can only see the quotas
          associated with the groups that they belong to.  However,
          <username>root</username> or a user with the
          <literal>groupquota</literal> privilege can view and set
          all quotas for all groups.</para>

        <para>To display the amount of space consumed by each user on
          the specified filesystem or snapshot, along with any
          specified quotas, use <command>zfs userspace</command>.
          For group information, use <command>zfs
            groupspace</command>.  For more information about
          supported options or how to display only specific options,
          refer to &man.zfs.8;.</para>
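
        <para>For example, to show per-user space consumption and any
          user quotas on <filename>storage/home/bob</filename>, shown
          here without its output, which varies by system:</para>

        <screen>&prompt.root; <userinput>zfs userspace storage/home/bob</userinput></screen>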
        <para>Users with sufficient privileges and
          <username>root</username> can list the quota for
          <filename>storage/home/bob</filename> using:</para>

        <screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
      </sect3>

@@ -711,9 +682,9 @@ errors: No known data errors</screen>
      <sect3>
        <title>ZFS Reservations</title>

        <para>ZFS supports two types of space reservations.  This
          section explains the basics of each and includes some usage
          instructions.</para>

        <para>The <literal>reservation</literal> property makes it
          possible to reserve a minimum amount of space guaranteed
@@ -732,23 +703,22 @@ errors: No known data errors</screen>
          not counted by the <literal>refreservation</literal>
          amount and so do not encroach on the space set.</para>

        <para>Reservations of any sort are useful in many situations,
          such as planning and testing the suitability of disk space
          allocation in a new system, or ensuring that enough space
          is available on file systems for system recovery procedures
          and files.</para>

        <para>The general format of the
          <literal>reservation</literal> property is
          <literal>reservation=<replaceable>size</replaceable></literal>,
          so to set a reservation of 10&nbsp;GB on
          <filename>storage/home/bob</filename>, use:</para>

        <screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>

        <para>To make sure that no reservation is set, or to remove a
          reservation, use:</para>

        <screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>
@@ -770,24 +740,24 @@ errors: No known data errors</screen>
  <sect1 id="filesystems-linux">
    <title>&linux; Filesystems</title>

    <para>This section describes some of the &linux; filesystems
      supported by &os;.</para>

    <sect2>
      <title><acronym>ext2</acronym></title>

      <para>The &man.ext2fs.5; file system kernel implementation has
        been available since &os;&nbsp;2.2.  In &os;&nbsp;8.x and
        earlier, the code is licensed under the
        <acronym>GPL</acronym>.  Since &os;&nbsp;9.0, the code has
        been rewritten and is now <acronym>BSD</acronym>
        licensed.</para>

      <para>The &man.ext2fs.5; driver allows the &os; kernel to both
        read and write to <acronym>ext2</acronym> file
        systems.</para>

      <para>To access an <acronym>ext2</acronym> file system, first
        load the kernel loadable module:</para>

      <screen>&prompt.root; <userinput>kldload ext2fs</userinput></screen>
@@ -800,11 +770,10 @@ errors: No known data errors</screen>
    <sect2>
      <title>XFS</title>

      <para><acronym>XFS</acronym> was originally written by
        <acronym>SGI</acronym> for the <acronym>IRIX</acronym>
        operating system and was then ported to &linux; and
        released under the <acronym>GPL</acronym>.  See
        <ulink url="http://oss.sgi.com/projects/xfs">this page</ulink>
        for more details.  The &os; port was started by Russell
        Cattelan, &a.kan;, and &a.rodrigc;.</para>

@@ -814,21 +783,19 @@ errors: No known data errors</screen>
      <screen>&prompt.root; <userinput>kldload xfs</userinput></screen>

      <para>The &man.xfs.5; driver lets the &os; kernel access XFS
        filesystems.  However, only read-only access is supported;
        writing to a volume is not possible.</para>

      <para>To mount a &man.xfs.5; volume located on
        <filename>/dev/ad1s1</filename>:</para>

      <screen>&prompt.root; <userinput>mount -t xfs /dev/ad1s1 /mnt</userinput></screen>

      <para>The <filename role="package">sysutils/xfsprogs</filename>
        port includes the <command>mkfs.xfs</command> utility, which
        enables the creation of <acronym>XFS</acronym> filesystems,
        plus utilities for analyzing and repairing them.</para>

      <para>The <literal>-p</literal> flag to
        <command>mkfs.xfs</command> can be used to create an
@@ -842,11 +809,11 @@ errors: No known data errors</screen>
      <para>The Reiser file system, ReiserFS, was ported to
        &os; by &a.dumbbell;, and has been released under the
        <acronym>GPL</acronym>.</para>

      <para>The ReiserFS driver permits the &os; kernel to access
        ReiserFS file systems and read their contents, but not
        write to them.</para>

      <para>First, the kernel-loadable module needs to be
        loaded:</para>