This patch addresses the following:

- removes "you"

- fixes xref

- modernizes the intro

- modernizes the ZFS RAM section

- updates the date in one sample output

Approved by:  gjb (mentor)
Dru Lavigne 2013-02-11 14:58:34 +00:00
parent 92487e8875
commit 6aea3fc76d
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=40947


@@ -27,32 +27,30 @@
</indexterm>
<para>File systems are an integral part of any operating system.
They allow users to upload and store files, provide access
to data, and make hard drives useful. Operating systems
differ in their native file system. Traditionally, the
native &os; file system has been the Unix File System
<acronym>UFS</acronym> which has been recently modernized as
<acronym>UFS2</acronym>. Since &os;&nbsp;7.0, the Z File
System <acronym>ZFS</acronym> is also available as a native file
system.</para>
<para>In addition to its native file systems, &os; supports a
multitude of other file systems so that data from other
operating systems can be accessed locally, such as data stored
on locally attached <acronym>USB</acronym> storage devices,
flash drives, and hard disks. This includes support for the
&linux; Extended File System (<acronym>EXT</acronym>) and the
&microsoft; New Technology File System
(<acronym>NTFS</acronym>).</para>
<para>There are different levels of &os; support for the various
file systems. Some require a kernel module to be loaded and
others may require a toolset to be installed. Some non-native
file system support is full read-write while others are
read-only.</para>
<para>After reading this chapter, you will know:</para>
<itemizedlist>
@@ -62,11 +60,11 @@
</listitem>
<listitem>
<para>Which file systems are supported by &os;.</para>
</listitem>
<listitem>
<para>How to enable, configure, access, and make use of
non-native file systems.</para>
</listitem>
</itemizedlist>
@@ -75,24 +73,25 @@
<itemizedlist>
<listitem>
<para>Understand &unix; and <link
linkend="basics">&os; basics</link>.</para>
</listitem>
<listitem>
<para>Be familiar with the basics of <link
linkend="kernelconfig">kernel configuration and
compilation</link>.</para>
</listitem>
<listitem>
<para>Feel comfortable <link linkend="ports">installing
software</link> in &os;.</para>
</listitem>
<listitem>
<para>Have some familiarity with <link
linkend="disks">disks</link>, storage, and device names in
&os;.</para>
</listitem>
</itemizedlist>
</sect1>
@@ -100,73 +99,67 @@
<sect1 id="filesystems-zfs">
<title>The Z File System (ZFS)</title>
<para>The Z&nbsp;file system, originally developed by &sun;,
is designed to use a pooled storage method, meaning that
space is only used as it is needed for data storage. It is
also designed for maximum data integrity, supporting data
snapshots, multiple copies, and data checksums. It uses a
software data replication model known as
<acronym>RAID</acronym>-Z. <acronym>RAID</acronym>-Z provides
redundancy similar to hardware <acronym>RAID</acronym>, but is
designed to prevent data write corruption and to overcome some
of the limitations of hardware <acronym>RAID</acronym>.</para>
<sect2>
<title>ZFS Tuning</title>
<para>Some of the features provided by <acronym>ZFS</acronym>
are RAM-intensive, so some tuning may be required to provide
maximum efficiency on systems with limited RAM.</para>
<sect3>
<title>Memory</title>
<para>At a bare minimum, the total system memory should be at
least one gigabyte. The amount of recommended RAM depends
upon the size of the pool and the ZFS features which are
used. A general rule of thumb is 1&nbsp;GB of RAM for every
1&nbsp;TB of storage. If the deduplication feature is used,
a general rule of thumb is 5&nbsp;GB of RAM per TB of
storage to be deduplicated. While some users successfully
use ZFS with less RAM, it is possible that when the system
is under heavy load, it may panic due to memory exhaustion.
Further tuning may be required for systems with less RAM
than recommended.</para>
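<para>On systems with limited RAM, one common starting point
is to cap the size of the <acronym>ZFS</acronym> Adaptive
Replacement Cache (<acronym>ARC</acronym>) with the
<varname>vfs.zfs.arc_max</varname> loader tunable. The value
below is only an illustrative sketch for a one-gigabyte
system, not a recommendation, and is set in
<filename>/boot/loader.conf</filename>:</para>
<programlisting>vfs.zfs.arc_max="40M"</programlisting>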
</sect3>
<sect3>
<title>Kernel Configuration</title>
<para>It is recommended that unused drivers and options
be removed from the kernel configuration file. Since most
devices are available as modules, they may be loaded
using the <filename>/boot/loader.conf</filename>
file.</para>
<para>Due to the RAM limitations of the &i386; platform, those
using ZFS on the &i386; architecture should add the
following option to a custom kernel configuration file,
rebuild the kernel, and reboot:</para>
<programlisting>options KVA_PAGES=512</programlisting>
<para>This option expands the kernel address space, allowing
the <varname>vm.kvm_size</varname> tunable to be pushed
beyond the currently imposed limit of 1&nbsp;GB, or the
limit of 2&nbsp;GB for <acronym>PAE</acronym>. To find the
most suitable value for this option, divide the desired
address space in megabytes by four (4). In this example, it
is <literal>512</literal> for 2&nbsp;GB.</para>
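<para>As a further worked example, a desired address space of
3&nbsp;GB is 3072&nbsp;MB, and 3072 divided by four gives
768:</para>
<programlisting>options KVA_PAGES=768</programlisting>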
</sect3>
<sect3>
<title>Loader Tunables</title>
<para>The <devicename>kmem</devicename> address space can
be increased on all &os; architectures. On a test system
with one gigabyte of physical memory, success was achieved
with the following options added to
<filename>/boot/loader.conf</filename>, and the system
restarted:</para>
<programlisting>vm.kmem_size="330M"
@@ -191,22 +184,21 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
<screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' &gt;&gt; /etc/rc.conf</userinput>
&prompt.root; <userinput>service zfs start</userinput></screen>
<para>The examples in this section assume three
<acronym>SCSI</acronym> disks with the device names
<devicename><replaceable>da0</replaceable></devicename>,
<devicename><replaceable>da1</replaceable></devicename>,
and <devicename><replaceable>da2</replaceable></devicename>.
Users of <acronym>IDE</acronym> hardware should instead use
<devicename><replaceable>ad</replaceable></devicename>
device names.</para>
<sect3>
<title>Single Disk Pool</title>
<para>To create a simple, non-redundant <acronym>ZFS</acronym>
pool using a single disk device, use
<command>zpool</command>:</para>
<screen>&prompt.root; <userinput>zpool create example /dev/da0</userinput></screen>
@@ -220,12 +212,11 @@ devfs 1 1 0 100% /dev
/dev/ad0s1d 54098308 1032846 48737598 2% /usr
example 17547136 0 17547136 0% /example</screen>
<para>This output shows that the <literal>example</literal>
pool has been created and <emphasis>mounted</emphasis>. It
is now accessible as a file system. Files may be created
on it and users can browse it, as seen in the following
example:</para>
<screen>&prompt.root; <userinput>cd /example</userinput>
&prompt.root; <userinput>ls</userinput>
@@ -236,25 +227,24 @@ drwxr-xr-x 2 root wheel 3 Aug 29 23:15 .
drwxr-xr-x 21 root wheel 512 Aug 29 23:12 ..
-rw-r--r-- 1 root wheel 0 Aug 29 23:15 testfile</screen>
<para>However, this pool is not taking advantage of any
<acronym>ZFS</acronym> features. To create a dataset on
this pool with compression enabled:</para>
<screen>&prompt.root; <userinput>zfs create example/compressed</userinput>
&prompt.root; <userinput>zfs set compression=gzip example/compressed</userinput></screen>
<para>The <literal>example/compressed</literal> dataset is now
a <acronym>ZFS</acronym> compressed file system. Try
copying some large files to <filename
class="directory">/example/compressed</filename>.</para>
<para>Compression can be disabled with:</para>
<screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
<para>To unmount a file system, issue the following command
and then verify by using <command>df</command>:</para>
<screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
&prompt.root; <userinput>df</userinput>
@@ -264,7 +254,7 @@ devfs 1 1 0 100% /dev
/dev/ad0s1d 54098308 1032864 48737580 2% /usr
example 17547008 0 17547008 0% /example</screen>
<para>To re-mount the file system to make it accessible
again, and verify with <command>df</command>:</para>
<screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
@@ -287,18 +277,19 @@ example on /example (zfs, local)
example/data on /example/data (zfs, local)
example/compressed on /example/compressed (zfs, local)</screen>
<para><acronym>ZFS</acronym> datasets, after creation, may be
used like any other file system, and many other features
are available which can be set on a per-dataset basis. In
the following example, a new file system,
<literal>data</literal>, is created. Important files will
be stored here, so the file system is set to keep two copies
of each data block:</para>
<screen>&prompt.root; <userinput>zfs create example/data</userinput>
&prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
<para>It is now possible to see the data and space utilization
by issuing <command>df</command>:</para>
<screen>&prompt.root; <userinput>df</userinput>
Filesystem 1K-blocks Used Avail Capacity Mounted on
@@ -311,64 +302,56 @@ example/data 17547008 0 17547008 0% /example/data</screen>
<para>Notice that each file system on the pool has the same
amount of available space. This is the reason for using
<command>df</command> in these examples, to show that the
file systems use only the amount of space they need and all
draw from the same pool. The <acronym>ZFS</acronym> file
system does away with concepts such as volumes and
partitions, and allows for several file systems to occupy
the same pool.</para>
<para>To destroy the file systems and then destroy the pool as
they are no longer needed:</para>
<screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
&prompt.root; <userinput>zfs destroy example/data</userinput>
&prompt.root; <userinput>zpool destroy example</userinput></screen>
</sect3>
<sect3>
<title><acronym>ZFS</acronym> RAID-Z</title>
<para>There is no way to prevent a disk from failing. One
method of avoiding data loss due to a failed hard disk is to
implement <acronym>RAID</acronym>. <acronym>ZFS</acronym>
supports this feature in its pool design.</para>
<para>To create a <acronym>RAID</acronym>-Z pool, issue the
following command and specify the disks to add to the
pool:</para>
<screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>
<note>
<para>&sun; recommends that the number of devices used in
a <acronym>RAID</acronym>-Z configuration be between
three and nine. For environments requiring a single pool
consisting of 10 disks or more, consider breaking it up
into smaller <acronym>RAID</acronym>-Z groups. If only
two disks are available and redundancy is a requirement,
consider using a <acronym>ZFS</acronym> mirror. Refer to
&man.zpool.8; for more details.</para>
</note>
<para>This command creates the <literal>storage</literal>
zpool. This may be verified using &man.mount.8; and
&man.df.1;. The next command creates a new file system
called <literal>home</literal> in the pool:</para>
<screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
<para>It is now possible to enable compression and keep extra
copies of directories and files using the following
commands:</para>
<screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
@@ -384,9 +367,9 @@ example/data 17547008 0 17547008 0% /example/data</screen>
&prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>
<para>Users should now have their data stored on the freshly
created <filename
class="directory">/storage/home</filename>. Test by
adding a new user and logging in as that user.</para>
<para>Try creating a snapshot which may be rolled back
later:</para>
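<para>For example, the following sketch creates the snapshot
that is destroyed later in this section:</para>
<screen>&prompt.root; <userinput>zfs snapshot storage/home@08-30-08</userinput></screen>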
@@ -405,28 +388,27 @@ example/data 17547008 0 17547008 0% /example/data</screen>
<command>ls</command> in the file system's
<filename class="directory">.zfs/snapshot</filename>
directory. For example, to see the previously taken
snapshot:</para>
<screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>
<para>It is possible to write a script to perform regular
snapshots on user data. However, over time, snapshots
may consume a great deal of disk space. The previous
snapshot may be removed using the following command:</para>
<screen>&prompt.root; <userinput>zfs destroy storage/home@08-30-08</userinput></screen>
<para>After testing, <filename
class="directory">/storage/home</filename> can be made the
real <filename class="directory">/home</filename> using
this command:</para>
<screen>&prompt.root; <userinput>zfs set mountpoint=/home storage/home</userinput></screen>
<para>Run <command>df</command> and
<command>mount</command> to confirm that the system now
treats the file system as the real
<filename class="directory">/home</filename>:</para>
<screen>&prompt.root; <userinput>mount</userinput>
@@ -455,8 +437,7 @@ storage/home 26320512 0 26320512 0% /home</screen>
<title>Recovering <acronym>RAID</acronym>-Z</title>
<para>Every software <acronym>RAID</acronym> has a method of
monitoring its <literal>state</literal>. The status of
<acronym>RAID</acronym>-Z devices may be viewed with the
following command:</para>
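<para>For example, a short health summary of every pool can be
requested; the output below corresponds to this form of the
command:</para>
<screen>&prompt.root; <userinput>zpool status -x</userinput></screen>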
@@ -468,7 +449,7 @@ storage/home 26320512 0 26320512 0% /home</screen>
<screen>all pools are healthy</screen>
<para>If there is an issue, perhaps a disk has gone offline,
the pool state will look similar to:</para>
<screen> pool: storage
state: DEGRADED
@@ -489,14 +470,13 @@ config:
errors: No known data errors</screen>
<para>This indicates that the device was previously taken
offline by the administrator using the following
command:</para>
<screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>
<para>It is now possible to replace
<devicename>da1</devicename> after the system has been
powered down. When the system is back online, the following
command may be issued to replace the disk:</para>
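<para>For example, assuming the replacement disk appears with
the same <devicename>da1</devicename> device name:</para>
<screen>&prompt.root; <userinput>zpool replace storage da1</userinput></screen>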
@@ -529,37 +509,34 @@ errors: No known data errors</screen>
<sect3>
<title>Data Verification</title>
<para><acronym>ZFS</acronym> uses
<literal>checksums</literal> to verify the integrity of
stored data. These are enabled automatically upon creation
of file systems and may be disabled using the following
command:</para>
<screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
<para>Doing so is <emphasis>not</emphasis> recommended as
checksums take very little storage space and are used to
verify data integrity in a process known as
<quote>scrubbing</quote>. To verify the data integrity of
the <literal>storage</literal> pool, issue this
command:</para>
<screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
<para>This process may take considerable time depending on
the amount of data stored. It is also very
<acronym>I/O</acronym> intensive, so much so that only one
scrub may be run at any given time. After the scrub has
completed, the status is updated and may be viewed by
issuing a status request:</para>
<screen>&prompt.root; <userinput>zpool status storage</userinput>
pool: storage
state: ONLINE
scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
config:
NAME STATE READ WRITE CKSUM
@@ -571,43 +548,39 @@ config:
errors: No known data errors</screen>
<para>The completion time is displayed and helps to ensure
data integrity over a long period of time.</para>
<para>Refer to &man.zfs.8; and &man.zpool.8; for other
<acronym>ZFS</acronym> options.</para>
</sect3>
<sect3>
<title>ZFS Quotas</title>
<para>ZFS supports different types of quotas: the refquota,
the general quota, the user quota, and the group quota.
This section explains the basics of each type and includes
some usage instructions.</para>
<para>Quotas limit the amount of space that a dataset and its
descendants can consume, and enforce a limit on the amount
of space used by filesystems and snapshots for the
descendants. Quotas are useful to limit the amount of space
a particular user can use.</para>
<note>
<para>Quotas cannot be set on volumes, as the
<literal>volsize</literal> property acts as an implicit
quota.</para>
</note>
<para>The
<literal>refquota=<replaceable>size</replaceable></literal>
limits the amount of space a dataset can consume by
enforcing a hard limit on the space used. However, this
hard limit does not include space used by descendants, such
as file systems or snapshots.</para>
<para>To enforce a general quota of 10&nbsp;GB for
<filename>storage/home/bob</filename>, use the
@@ -615,9 +588,8 @@ errors: No known data errors</screen>
<screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>
<para>User quotas limit the amount of space that can be used
by the specified user. The general format is
<literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
and the user's name must be in one of the following
formats:</para>
@@ -626,28 +598,28 @@ errors: No known data errors</screen>
<listitem>
<para><acronym
role="Portable Operating System
Interface">POSIX</acronym> compatible name
(e.g., <replaceable>joe</replaceable>).</para>
Interface">POSIX</acronym> compatible name such as
<replaceable>joe</replaceable>.</para>
</listitem>
<listitem>
<para><acronym
role="Portable Operating System
Interface">POSIX</acronym>
numeric ID such as
<replaceable>789</replaceable>.</para>
</listitem>
<listitem>
<para><acronym role="System Identifier">SID</acronym> name
such as
<replaceable>joe.bloggs@example.com</replaceable>.</para>
</listitem>
<listitem>
<para><acronym role="System Identifier">SID</acronym>
numeric ID such as
<replaceable>S-1-123-456-789</replaceable>.</para>
</listitem>
</itemizedlist>
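<para>For example, to enforce a hypothetical quota of
50&nbsp;GB for a user named <replaceable>joe</replaceable> on
<filename>storage/home/bob</filename>:</para>
<screen>&prompt.root; <userinput>zfs set userquota@joe=50G storage/home/bob</userinput></screen>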
@@ -670,7 +642,7 @@ errors: No known data errors</screen>
privilege are able to view and set everyone's quota.</para>
<para>The group quota limits the amount of space that a
specified group can consume. The general format is
<literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>.</para>
<para>To set the quota for the group
@@ -680,30 +652,29 @@ errors: No known data errors</screen>
<screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G <replaceable>storage/home/bob</replaceable></userinput></screen>
<para>To remove the quota for the group
<replaceable>firstgroup</replaceable>, or to make sure that
one is not set, instead use:</para>
<screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none <replaceable>storage/home/bob</replaceable></userinput></screen>
<para>As with the user quota property,
non-<username>root</username> users can only see the quotas
associated with the groups that they belong to. However,
<username>root</username> or a user with the
<literal>groupquota</literal> privilege can view and set all
quotas for all groups.</para>
<para>To display the amount of space consumed by each user on
the specified filesystem or snapshot, along with any
specified quotas, use <command>zfs userspace</command>.
For group information, use <command>zfs
groupspace</command>. For more information about
supported options or how to display only specific options,
refer to &man.zfs.8;.</para>
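<para>For example, a sketch of listing per-user space usage
and quotas on the <filename>storage/home</filename> file
system used in this section:</para>
<screen>&prompt.root; <userinput>zfs userspace storage/home</userinput></screen>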
<para>Users with sufficient privileges and
<username>root</username> can list the quota for
<filename>storage/home/bob</filename> using:</para>
<screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
</sect3>
@@ -711,9 +682,9 @@ errors: No known data errors</screen>
<sect3>
<title>ZFS Reservations</title>
<para>ZFS supports two types of space reservations. This
section explains the basics of each and includes some usage
instructions.</para>
<para>The <literal>reservation</literal> property makes it
possible to reserve a minimum amount of space guaranteed
@@ -732,23 +703,22 @@ errors: No known data errors</screen>
not counted by the <literal>refreservation</literal>
amount and so do not encroach on the space set.</para>
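<para>For example, a sketch of guaranteeing a minimum of
10&nbsp;GB to <filename>storage/home/bob</filename> itself,
excluding space used by its descendants:</para>
<screen>&prompt.root; <userinput>zfs set refreservation=10G storage/home/bob</userinput></screen>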
<para>Reservations of any sort are useful in many situations,
such as planning and testing the suitability of disk space
allocation in a new system, or ensuring that enough space is
available on file systems for system recovery procedures and
files.</para>
<para>The general format of the <literal>reservation</literal>
property is
<literal>reservation=<replaceable>size</replaceable></literal>,
so to set a reservation of 10&nbsp;GB on
<filename>storage/home/bob</filename>, use:</para>
<screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>
<para>To make sure that no reservation is set, or to remove a
reservation, use:</para>
<screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>
@@ -770,24 +740,24 @@ errors: No known data errors</screen>
<sect1 id="filesystems-linux">
<title>&linux; Filesystems</title>
<para>This section describes some of the &linux; filesystems
supported by &os;.</para>
<sect2>
<title><acronym>ext2</acronym></title>
<para>The &man.ext2fs.5; file system kernel implementation has
been available since &os;&nbsp;2.2. In &os;&nbsp;8.x and
earlier, the code is licensed under the
<acronym>GPL</acronym>. Since &os;&nbsp;9.0, the code has
been rewritten and is now <acronym>BSD</acronym>
licensed.</para>
<para>The &man.ext2fs.5; driver allows the &os; kernel to both
read and write to <acronym>ext2</acronym> file systems.</para>
<para>To access an <acronym>ext2</acronym> file system, first
load the kernel loadable module:</para>
<screen>&prompt.root; <userinput>kldload ext2fs</userinput></screen>
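<para>Then, to mount an <acronym>ext2</acronym> volume, assumed
in this example to be located on
<filename>/dev/ad1s1</filename>:</para>
<screen>&prompt.root; <userinput>mount -t ext2fs /dev/ad1s1 /mnt</userinput></screen>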
@@ -800,11 +770,10 @@ errors: No known data errors</screen>
<sect2>
<title>XFS</title>
<para><acronym>XFS</acronym> was originally written by
<acronym>SGI</acronym> for the <acronym>IRIX</acronym>
operating system and was then ported to &linux; and
released under the <acronym>GPL</acronym>. See
<ulink url="http://oss.sgi.com/projects/xfs">this page</ulink>
for more details. The &os; port was started by Russell
Cattelan, &a.kan;, and &a.rodrigc;.</para>
@@ -814,21 +783,19 @@ errors: No known data errors</screen>
<screen>&prompt.root; <userinput>kldload xfs</userinput></screen>
<para>The &man.xfs.5; driver lets the &os; kernel access XFS
filesystems. However, only read-only access is supported and
writing to a volume is not possible.</para>
<para>To mount a &man.xfs.5; volume located on
<filename>/dev/ad1s1</filename>:</para>
<screen>&prompt.root; <userinput>mount -t xfs /dev/ad1s1 /mnt</userinput></screen>
<para>The <filename role="package">sysutils/xfsprogs</filename>
port includes <command>mkfs.xfs</command>, which enables
the creation of <acronym>XFS</acronym> filesystems, plus
utilities for analyzing and repairing them.</para>
<para>The <literal>-p</literal> flag to
<command>mkfs.xfs</command> can be used to create an
@@ -842,11 +809,11 @@ errors: No known data errors</screen>
<para>The Reiser file system, ReiserFS, was ported to
&os; by &a.dumbbell;, and has been released under the
<acronym>GPL</acronym>.</para>
<para>The ReiserFS driver permits the &os; kernel to access
ReiserFS file systems and read their contents, but not
write to them.</para>
<para>First, the kernel-loadable module needs to be
loaded:</para>
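<screen>&prompt.root; <userinput>kldload reiserfs</userinput></screen>
<para>The module name above is assumed to follow the same
pattern as the other file system drivers in this
chapter.</para>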