<?xml version="1.0" encoding="iso-8859-1"?>
<!--
The FreeBSD Documentation Project
$FreeBSD$
-->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="filesystems">
<info>
<title>File Systems Support</title>
<authorgroup>
<author><personname><firstname>Tom</firstname><surname>Rhodes</surname></personname><contrib>Written
by </contrib></author>
</authorgroup>
</info>
<sect1 xml:id="filesystems-synopsis">
<title>Synopsis</title>
<indexterm><primary>File Systems</primary></indexterm>
<indexterm>
<primary>File Systems Support</primary>
<see>File Systems</see>
</indexterm>
<para>File systems are an integral part of any operating system.
They allow users to store and retrieve files, provide access
to data, and make hard drives useful. Operating
systems differ in their native file system. Traditionally, the
native &os; file system has been the Unix File System
(<acronym>UFS</acronym>), which has been modernized as
<acronym>UFS2</acronym>. Since &os;&nbsp;7.0, the Z File
System (<acronym>ZFS</acronym>) is also available as a native file
system.</para>
<para>In addition to its native file systems, &os; supports a
multitude of other file systems so that data from other
operating systems can be accessed locally, such as data stored
on locally attached <acronym>USB</acronym> storage devices,
flash drives, and hard disks. This includes support for the
&linux; Extended File System (<acronym>EXT</acronym>) and the
Reiser file system.</para>
<para>There are different levels of &os; support for the various
file systems. Some require a kernel module to be loaded and
others may require a toolset to be installed. Some non-native
file system support is full read-write while others are
read-only.</para>
<para>After reading this chapter, you will know:</para>
<itemizedlist>
<listitem>
<para>The difference between native and supported file
systems.</para>
</listitem>
<listitem>
<para>Which file systems are supported by &os;.</para>
</listitem>
<listitem>
<para>How to enable, configure, access, and make use of
non-native file systems.</para>
</listitem>
</itemizedlist>
<para>Before reading this chapter, you should:</para>
<itemizedlist>
<listitem>
<para>Understand &unix; and <link
linkend="basics">&os; basics</link>.</para>
</listitem>
<listitem>
<para>Be familiar with the basics of <link
linkend="kernelconfig">kernel configuration and
compilation</link>.</para>
</listitem>
<listitem>
<para>Feel comfortable <link linkend="ports">installing
software</link> in &os;.</para>
</listitem>
<listitem>
<para>Have some familiarity with <link
linkend="disks">disks</link>, storage, and device names in
&os;.</para>
</listitem>
</itemizedlist>
</sect1>
<sect1 xml:id="filesystems-zfs">
<title>The Z File System (ZFS)</title>
<para>The Z&nbsp;file system, originally developed by &sun;,
uses a pooled storage model in which space is allocated only
as it is needed for data storage. It is also designed for
maximum data integrity, supporting data snapshots, multiple
copies, and data checksums. It uses a software data replication
model known as <acronym>RAID</acronym>-Z.
<acronym>RAID</acronym>-Z provides redundancy similar to
hardware <acronym>RAID</acronym>, but is designed to prevent
data write corruption and to overcome some of the limitations
of hardware <acronym>RAID</acronym>.</para>
<sect2>
<title>ZFS Tuning</title>
<para>Some of the features provided by <acronym>ZFS</acronym>
are RAM-intensive, so some tuning may be required to provide
maximum efficiency on systems with limited RAM.</para>
<sect3>
<title>Memory</title>
<para>At a bare minimum, the total system memory should be at
least one gigabyte. The recommended amount of RAM depends
upon the size of the pool and the ZFS features which are
used. A general rule of thumb is 1&nbsp;GB of RAM for every
1&nbsp;TB of storage. If the deduplication feature is used,
a general rule of thumb is 5&nbsp;GB of RAM per 1&nbsp;TB of
storage to be deduplicated. For example, a pool holding
10&nbsp;TB of deduplicated data would call for roughly
50&nbsp;GB of RAM. While some users successfully use ZFS
with less RAM, a system under heavy load may panic due to
memory exhaustion. Further tuning may be required for
systems with less than the recommended amount of RAM.</para>
</sect3>
<sect3>
<title>Kernel Configuration</title>
<para>Due to the RAM limitations of the &i386; platform, users
of ZFS on the &i386; architecture should add the
following option to a custom kernel configuration file,
rebuild the kernel, and reboot:</para>
<programlisting>options KVA_PAGES=512</programlisting>
<para>This option expands the kernel address space, allowing
the <varname>vm.kvm_size</varname> tunable to be pushed
beyond the currently imposed limit of 1&nbsp;GB, or the
limit of 2&nbsp;GB for <acronym>PAE</acronym>. To find the
most suitable value for this option, divide the desired
address space in megabytes by four. In this example, it
is <literal>512</literal> for 2&nbsp;GB.</para>
</sect3>
<sect3>
<title>Loader Tunables</title>
<para>The <literal>kmem</literal> address space can
be increased on all &os; architectures. On a test system
with one gigabyte of physical memory, success was achieved
with the following options added to
<filename>/boot/loader.conf</filename>, and the system
restarted:</para>
<programlisting>vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"</programlisting>
<para>For a more detailed list of recommendations for
ZFS-related tuning, see <uri
xlink:href="http://wiki.freebsd.org/ZFSTuningGuide">http://wiki.freebsd.org/ZFSTuningGuide</uri>.</para>
</sect3>
</sect2>
<sect2>
<title>Using <acronym>ZFS</acronym></title>
<para>There is a start-up mechanism that allows &os; to mount
<acronym>ZFS</acronym> pools during system initialization. To
enable it, issue the following commands:</para>
<screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' &gt;&gt; /etc/rc.conf</userinput>
&prompt.root; <userinput>service zfs start</userinput></screen>
<para>The examples in this section assume three
<acronym>SCSI</acronym> disks with the device names
<filename><replaceable>da0</replaceable></filename>,
<filename><replaceable>da1</replaceable></filename>,
and <filename><replaceable>da2</replaceable></filename>.
Users of <acronym>IDE</acronym> hardware should instead use
<filename><replaceable>ad</replaceable></filename>
device names.</para>
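<para>The <acronym>SCSI</acronym> disks present on a system
can be listed with, for example, &man.camcontrol.8;:</para>
<screen>&prompt.root; <userinput>camcontrol devlist</userinput></screen>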
<sect3>
<title>Single Disk Pool</title>
<para>To create a simple, non-redundant <acronym>ZFS</acronym>
pool using a single disk device, use
<command>zpool</command>:</para>
<screen>&prompt.root; <userinput>zpool create example /dev/da0</userinput></screen>
<para>To view the new pool, review the output of
<command>df</command>:</para>
<screen>&prompt.root; <userinput>df</userinput>
Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a   2026030  235230  1628718    13%    /
devfs               1       1        0   100%    /dev
/dev/ad0s1d  54098308 1032846 48737598     2%    /usr
example      17547136       0 17547136     0%    /example</screen>
<para>This output shows that the <literal>example</literal>
pool has been created and <emphasis>mounted</emphasis>. It
is now accessible as a file system. Files may be created
on it and users can browse it, as seen in the following
example:</para>
<screen>&prompt.root; <userinput>cd /example</userinput>
&prompt.root; <userinput>ls</userinput>
&prompt.root; <userinput>touch testfile</userinput>
&prompt.root; <userinput>ls -al</userinput>
total 4
drwxr-xr-x   2 root  wheel    3 Aug 29 23:15 .
drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..
-rw-r--r--   1 root  wheel    0 Aug 29 23:15 testfile</screen>
<para>However, this pool is not taking advantage of any
<acronym>ZFS</acronym> features. To create a dataset on
this pool with compression enabled:</para>
<screen>&prompt.root; <userinput>zfs create example/compressed</userinput>
&prompt.root; <userinput>zfs set compression=gzip example/compressed</userinput></screen>
<para>The <literal>example/compressed</literal> dataset is now
a <acronym>ZFS</acronym> compressed file system. Try
copying some large files to
<filename>/example/compressed</filename>.</para>
<para>Compression can be disabled with:</para>
<screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
<para>To unmount a file system, issue the following command
and then verify by using <command>df</command>:</para>
<screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
&prompt.root; <userinput>df</userinput>
Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a   2026030  235232  1628716    13%    /
devfs               1       1        0   100%    /dev
/dev/ad0s1d  54098308 1032864 48737580     2%    /usr
example      17547008       0 17547008     0%    /example</screen>
<para>To re-mount the file system, making it accessible
again, and verify with <command>df</command>:</para>
<screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
&prompt.root; <userinput>df</userinput>
Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a          2026030  235234  1628714    13%    /
devfs                      1       1        0   100%    /dev
/dev/ad0s1d         54098308 1032864 48737580     2%    /usr
example             17547008       0 17547008     0%    /example
example/compressed  17547008       0 17547008     0%    /example/compressed</screen>
<para>The pool and file system may also be observed by viewing
the output from <command>mount</command>:</para>
<screen>&prompt.root; <userinput>mount</userinput>
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
example on /example (zfs, local)
example/compressed on /example/compressed (zfs, local)</screen>
<para><acronym>ZFS</acronym> datasets, after creation, may be
used like any file system. However, many other features
are available which can be set on a per-dataset basis. In
the following example, a new file system,
<literal>data</literal>, is created. Important files will be
stored here, so the file system is set to keep two copies of
each data block:</para>
<screen>&prompt.root; <userinput>zfs create example/data</userinput>
&prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
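<para>The new setting may be verified with <command>zfs
get</command>. The values shown here are
illustrative:</para>
<screen>&prompt.root; <userinput>zfs get copies example/data</userinput>
NAME          PROPERTY  VALUE  SOURCE
example/data  copies    2      local</screen>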
<para>It is now possible to see the data and space utilization
by issuing <command>df</command>:</para>
<screen>&prompt.root; <userinput>df</userinput>
Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a          2026030  235234  1628714    13%    /
devfs                      1       1        0   100%    /dev
/dev/ad0s1d         54098308 1032864 48737580     2%    /usr
example             17547008       0 17547008     0%    /example
example/compressed  17547008       0 17547008     0%    /example/compressed
example/data        17547008       0 17547008     0%    /example/data</screen>
<para>Notice that each file system on the pool has the same
amount of available space. This is the reason for using
<command>df</command> in these examples, to show that the
file systems use only the amount of space they need and all
draw from the same pool. The <acronym>ZFS</acronym> file
system does away with concepts such as volumes and
partitions, and allows for several file systems to occupy
the same pool.</para>
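<para>The shared pool space is also visible with
<command>zfs list</command>, where every dataset reports the
same <literal>AVAIL</literal> value. The sizes shown here are
illustrative:</para>
<screen>&prompt.root; <userinput>zfs list</userinput>
NAME                 USED  AVAIL  REFER  MOUNTPOINT
example              146K  16.7G    23K  /example
example/compressed    21K  16.7G    21K  /example/compressed
example/data          21K  16.7G    21K  /example/data</screen>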
<para>To destroy the file systems and then destroy the pool as
they are no longer needed:</para>
<screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
&prompt.root; <userinput>zfs destroy example/data</userinput>
&prompt.root; <userinput>zpool destroy example</userinput></screen>
</sect3>
<sect3>
<title><acronym>ZFS</acronym> RAID-Z</title>
<para>There is no way to prevent a disk from failing. One
method of avoiding data loss due to a failed hard disk is to
implement <acronym>RAID</acronym>. <acronym>ZFS</acronym>
supports this feature in its pool design.</para>
<para>To create a <acronym>RAID</acronym>-Z pool, issue the
following command and specify the disks to add to the
pool:</para>
<screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>
<note>
<para>&sun; recommends that the number of devices used in
a <acronym>RAID</acronym>-Z configuration be between
three and nine. For environments requiring a single pool
consisting of 10 disks or more, consider breaking it up
into smaller <acronym>RAID</acronym>-Z groups. If only
two disks are available and redundancy is a requirement,
consider using a <acronym>ZFS</acronym> mirror. Refer to
&man.zpool.8; for more details.</para>
</note>
<para>This command creates the <literal>storage</literal>
zpool. This may be verified using &man.mount.8; and
&man.df.1;. The next command creates a new file system,
<literal>home</literal>, in that pool:</para>
<screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
<para>It is now possible to enable compression and keep extra
copies of directories and files using the following
commands:</para>
<screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
&prompt.root; <userinput>zfs set compression=gzip storage/home</userinput></screen>
<para>To make this the new home directory for users, copy the
user data to this directory, and create the appropriate
symbolic links:</para>
<screen>&prompt.root; <userinput>cp -rp /home/* /storage/home</userinput>
&prompt.root; <userinput>rm -rf /home /usr/home</userinput>
&prompt.root; <userinput>ln -s /storage/home /home</userinput>
&prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>
<para>Users should now have their data stored on the freshly
created <filename>/storage/home</filename>. Test by
adding a new user and logging in as that user.</para>
<para>Try creating a snapshot which may be rolled back
later:</para>
<screen>&prompt.root; <userinput>zfs snapshot storage/home@08-30-08</userinput></screen>
<para>Note that a snapshot can only be taken of a full
file system, not of a single directory or file. The
<literal>@</literal> character is a delimiter between
the file system or volume name and the snapshot name. When a
user's home directory gets trashed, restore it with:</para>
<screen>&prompt.root; <userinput>zfs rollback storage/home@08-30-08</userinput></screen>
<para>To get a list of all available snapshots, run
<command>ls</command> in the file system's
<filename>.zfs/snapshot</filename> directory. For example,
to see the previously taken snapshot:</para>
<screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>
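<para>Snapshots, and the space they consume, may also be
listed with <command>zfs list</command>:</para>
<screen>&prompt.root; <userinput>zfs list -t snapshot</userinput></screen>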
<para>It is possible to write a script to perform regular
snapshots on user data. However, over time, snapshots
may consume a great deal of disk space. The previous
snapshot may be removed using the following command:</para>
<screen>&prompt.root; <userinput>zfs destroy storage/home@08-30-08</userinput></screen>
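<para>As a starting point for automating regular snapshots,
here is a minimal &man.sh.1; script which takes a snapshot
named after the current date and may be run from
&man.cron.8;. The dataset name and the date format are only
examples:</para>
<programlisting>#!/bin/sh
# Minimal example: snapshot the users' home file system,
# naming the snapshot after today's date, e.g. storage/home@08-30-08.
DATASET="storage/home"
DATE=$(date +%m-%d-%y)
/sbin/zfs snapshot "${DATASET}@${DATE}"</programlisting>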
<para>After testing, <filename>/storage/home</filename> can be
made the real <filename>/home</filename> using this
command:</para>
<screen>&prompt.root; <userinput>zfs set mountpoint=/home storage/home</userinput></screen>
<para>Run <command>df</command> and
<command>mount</command> to confirm that the system now
treats the file system as the real
<filename>/home</filename>:</para>
<screen>&prompt.root; <userinput>mount</userinput>
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
storage on /storage (zfs, local)
storage/home on /home (zfs, local)
&prompt.root; <userinput>df</userinput>
Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a   2026030  235240  1628708    13%    /
devfs               1       1        0   100%    /dev
/dev/ad0s1d  54098308 1032826 48737618     2%    /usr
storage      26320512       0 26320512     0%    /storage
storage/home 26320512       0 26320512     0%    /home</screen>
<para>This completes the <acronym>RAID</acronym>-Z
configuration. To get status updates about the file systems
created during the nightly &man.periodic.8; runs, issue the
following command:</para>
<screen>&prompt.root; <userinput>echo 'daily_status_zfs_enable="YES"' &gt;&gt; /etc/periodic.conf</userinput></screen>
</sect3>
<sect3>
<title>Recovering <acronym>RAID</acronym>-Z</title>
<para>Every software <acronym>RAID</acronym> has a method of
monitoring its <literal>state</literal>. The status of
<acronym>RAID</acronym>-Z devices may be viewed with the
following command:</para>
<screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
<para>If all pools are healthy and everything is normal, the
following message will be returned:</para>
<screen>all pools are healthy</screen>
<para>If there is an issue, such as a disk that has gone
offline, the pool state will look similar to:</para>
<screen>  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            da0     ONLINE       0     0     0
            da1     OFFLINE      0     0     0
            da2     ONLINE       0     0     0

errors: No known data errors</screen>
<para>This indicates that the device was previously taken
offline by the administrator using the following
command:</para>
<screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>
<para>It is now possible to replace
<filename>da1</filename> after the system has been
powered down. When the system is back online, the following
command may be issued to replace the disk:</para>
<screen>&prompt.root; <userinput>zpool replace storage da1</userinput></screen>
<para>From here, the status may be checked again, this time
without the <option>-x</option> flag to get state
information:</para>
<screen>&prompt.root; <userinput>zpool status storage</userinput>
  pool: storage
 state: ONLINE
 scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0

errors: No known data errors</screen>
<para>As shown in this example, everything appears to be
normal.</para>
</sect3>
<sect3>
<title>Data Verification</title>
<para><acronym>ZFS</acronym> uses checksums to verify the
integrity of stored data. These are enabled automatically
upon creation of file systems and may be disabled using the
following command:</para>
<screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
<para>Doing so is <emphasis>not</emphasis> recommended as
checksums take very little storage space and are used to
verify data integrity in a process known as
<quote>scrubbing</quote>. To verify the
data integrity of the <literal>storage</literal> pool, issue
this command:</para>
<screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
<para>This process may take considerable time depending on
the amount of data stored. It is also very
<acronym>I/O</acronym> intensive, so much so that only one
scrub may be run at any given time. After the scrub has
completed, the status is updated and may be viewed by
issuing a status request:</para>
<screen>&prompt.root; <userinput>zpool status storage</userinput>
  pool: storage
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0

errors: No known data errors</screen>
<para>The completion time of the last scrub is displayed,
which helps to track when the next scrub is due and so to
maintain data integrity over a long period of time.</para>
<para>Refer to &man.zfs.8; and &man.zpool.8; for other
<acronym>ZFS</acronym> options.</para>
</sect3>
<sect3 xml:id="zfs-quotas">
<title>ZFS Quotas</title>
<para>ZFS supports different types of quotas: the refquota,
the general quota, the user quota, and the group quota.
This section explains the basics of each type and includes
some usage instructions.</para>
<para>Quotas limit the amount of space that a dataset and its
descendants can consume, including the space used by any
descendant file systems and snapshots. Quotas are useful for
limiting the amount of space a particular user can
use.</para>
<note>
<para>Quotas cannot be set on volumes, as the
<literal>volsize</literal> property acts as an implicit
quota.</para>
</note>
<para>The
<literal>refquota=<replaceable>size</replaceable></literal>
property limits the amount of space a dataset can consume by
enforcing a hard limit on the space used. However, this
hard limit does not include space used by descendants, such
as file systems or snapshots.</para>
<para>To enforce a general quota of 10&nbsp;GB for
<filename>storage/home/bob</filename>, use the
following:</para>
<screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>
<para>User quotas limit the amount of space that can be used
by the specified user. The general format is
<literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
and the user's name must be in one of the following
formats:</para>
<itemizedlist>
<listitem>
<para><acronym
role="Portable Operating System
Interface">POSIX</acronym> compatible name such as
<replaceable>joe</replaceable>.</para>
</listitem>
<listitem>
<para><acronym
role="Portable Operating System
Interface">POSIX</acronym> numeric ID such as
<replaceable>789</replaceable>.</para>
</listitem>
<listitem>
<para><acronym role="System Identifier">SID</acronym> name
such as
<replaceable>joe.bloggs@example.com</replaceable>.</para>
</listitem>
<listitem>
<para><acronym role="System Identifier">SID</acronym>
numeric ID such as
<replaceable>S-1-123-456-789</replaceable>.</para>
</listitem>
</itemizedlist>
<para>For example, to enforce a quota of 50&nbsp;GB for a user
named <replaceable>joe</replaceable> on
<filename>storage/home/bob</filename>, use the
following:</para>
<screen>&prompt.root; <userinput>zfs set userquota@joe=50G storage/home/bob</userinput></screen>
<para>To remove the quota or make sure that one is not set,
instead use:</para>
<screen>&prompt.root; <userinput>zfs set userquota@joe=none storage/home/bob</userinput></screen>
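<para>The current value of a user quota can be checked with
<command>zfs get</command>. In this illustrative output, a
50&nbsp;GB quota is still in effect for
<replaceable>joe</replaceable>:</para>
<screen>&prompt.root; <userinput>zfs get userquota@joe storage/home/bob</userinput>
NAME              PROPERTY       VALUE  SOURCE
storage/home/bob  userquota@joe  50G    local</screen>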
<para>User quota properties are not displayed by
<command>zfs get all</command>.
Non-<systemitem class="username">root</systemitem> users can
only see their own quotas unless they have been granted the
<literal>userquota</literal> privilege. Users with this
privilege are able to view and set everyone's quota.</para>
<para>The group quota limits the amount of space that a
specified group can consume. The general format is
<literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>.</para>
<para>To set the quota for the group
<replaceable>firstgroup</replaceable> to 50&nbsp;GB on
<filename>storage/home/bob</filename>, use:</para>
<screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G storage/home/bob</userinput></screen>
<para>To remove the quota for the group
<replaceable>firstgroup</replaceable>, or to make sure that
one is not set, instead use:</para>
<screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none storage/home/bob</userinput></screen>
<para>As with the user quota property,
non-<systemitem class="username">root</systemitem> users can
only see the quotas associated with the groups that they
belong to. However, <systemitem
class="username">root</systemitem> or a user with the
<literal>groupquota</literal> privilege can view and set all
quotas for all groups.</para>
<para>To display the amount of space consumed by each user on
the specified file system or snapshot, along with any
specified quotas, use <command>zfs userspace</command>.
For group information, use <command>zfs
groupspace</command>. For more information about
supported options or how to display only specific options,
refer to &man.zfs.8;.</para>
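<para>For example, to display per-user space usage and quotas
on <filename>storage/home/bob</filename>, where the output
values are illustrative:</para>
<screen>&prompt.root; <userinput>zfs userspace storage/home/bob</userinput>
TYPE        NAME  USED  QUOTA
POSIX User  joe    11G    50G
POSIX User  root    2K   none</screen>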
<para>Users with sufficient privileges and <systemitem
class="username">root</systemitem> can list the quota for
<filename>storage/home/bob</filename> using:</para>
<screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
</sect3>
<sect3>
<title>ZFS Reservations</title>
<para>ZFS supports two types of space reservations. This
section explains the basics of each and includes some usage
instructions.</para>
<para>The <literal>reservation</literal> property makes it
possible to guarantee a minimum amount of space
for a dataset and its descendants. This means that if a
10&nbsp;GB reservation is set on
<filename>storage/home/bob</filename>, at least 10&nbsp;GB
of space remains reserved for this dataset even if disk
space gets low. The <literal>refreservation</literal>
property sets or indicates the minimum amount of space
guaranteed to a dataset excluding descendants, such as
snapshots. As an example, if a snapshot was taken of
<filename>storage/home/bob</filename>, enough disk space
would have to exist outside of the
<literal>refreservation</literal> amount for the operation
to succeed. Descendants of the main dataset are not counted
in the <literal>refreservation</literal> amount and so do
not encroach on the space set aside.</para>
<para>Reservations of any sort are useful in many situations,
such as planning and testing the suitability of disk space
allocation in a new system, or ensuring that enough space is
available on file systems for system recovery procedures and
files.</para>
<para>The general format of the <literal>reservation</literal>
property is
<literal>reservation=<replaceable>size</replaceable></literal>,
so to set a reservation of 10&nbsp;GB on
<filename>storage/home/bob</filename>, use:</para>
<screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>
<para>To make sure that no reservation is set, or to remove a
reservation, use:</para>
<screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>
<para>The same principle can be applied to the
<literal>refreservation</literal> property for setting a
refreservation, with the general format
<literal>refreservation=<replaceable>size</replaceable></literal>.</para>
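<para>For example, to set a refreservation of 10&nbsp;GB on
<filename>storage/home/bob</filename>, use:</para>
<screen>&prompt.root; <userinput>zfs set refreservation=10G storage/home/bob</userinput></screen>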
<para>To check if any reservations or refreservations exist on
<filename>storage/home/bob</filename>, execute one of the
following commands:</para>
<screen>&prompt.root; <userinput>zfs get reservation storage/home/bob</userinput>
&prompt.root; <userinput>zfs get refreservation storage/home/bob</userinput></screen>
</sect3>
</sect2>
</sect1>
<sect1 xml:id="filesystems-linux">
<title>&linux; File Systems</title>
<para>&os; provides built-in support for several &linux; file
systems. This section demonstrates how to load support for and
how to mount the supported &linux; file systems.</para>
<sect2>
<title><acronym>ext2</acronym></title>
<para>Kernel support for ext2 file systems has
been available since &os;&nbsp;2.2. In &os;&nbsp;8.x and
earlier, the code was licensed under the
<acronym>GPL</acronym>. Since &os;&nbsp;9.0, the code has
been rewritten and is now <acronym>BSD</acronym>
licensed.</para>
<para>The &man.ext2fs.5; driver allows the &os; kernel to both
read and write to ext2 file systems.</para>
<note>
<para>This driver can also be used to access ext3 and ext4
file systems. However, ext3 journaling, extended
attributes, and inodes larger than 128&nbsp;bytes are not
supported. Support for ext4 is read-only.</para>
</note>
<para>To access an ext file system, first
load the kernel loadable module:</para>
<screen>&prompt.root; <userinput>kldload ext2fs</userinput></screen>
<para>Then, mount the ext volume by specifying its &os;
partition name and an existing mount point. This example
mounts <filename>/dev/ad1s1</filename> on
<filename>/mnt</filename>:</para>
<screen>&prompt.root; <userinput>mount -t ext2fs <replaceable>/dev/ad1s1</replaceable> <replaceable>/mnt</replaceable></userinput></screen>
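<para>To mount the file system automatically at boot, append
<literal>ext2fs_load="YES"</literal> to
<filename>/boot/loader.conf</filename> so that the driver is
loaded, and add an entry to <filename>/etc/fstab</filename>.
This example entry assumes the same device and mount point as
above:</para>
<programlisting>/dev/ad1s1   /mnt   ext2fs   rw   0   0</programlisting>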
</sect2>
<sect2>
<title>XFS</title>
<para>A &os; kernel can be configured to provide read-only
support for <acronym>XFS</acronym>
file systems.</para>
<para>To compile in <acronym>XFS</acronym> support, add the
following option to a custom kernel configuration file and
recompile the kernel using the instructions in <xref
linkend="kernelconfig"/>:</para>
<programlisting>options XFS</programlisting>
<para>Then, to mount an <acronym>XFS</acronym> volume located on
<filename>/dev/ad1s1</filename>:</para>
<screen>&prompt.root; <userinput>mount -t xfs <replaceable>/dev/ad1s1</replaceable> <replaceable>/mnt</replaceable></userinput></screen>
<para>The <package>sysutils/xfsprogs</package> package or
port provides additional
utilities, with man pages, for using, analyzing, and repairing
<acronym>XFS</acronym> file systems.</para>
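<para>For example, to build and install it from the Ports
Collection:</para>
<screen>&prompt.root; <userinput>cd /usr/ports/sysutils/xfsprogs</userinput>
&prompt.root; <userinput>make install clean</userinput></screen>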
</sect2>
<sect2>
<title>ReiserFS</title>
<para>&os; provides read-only support for the Reiser file
system, ReiserFS.</para>
<para>To load the &man.reiserfs.5; driver:</para>
<screen>&prompt.root; <userinput>kldload reiserfs</userinput></screen>
<para>Then, to mount a ReiserFS volume located on
<filename>/dev/ad1s1</filename>:</para>
<screen>&prompt.root; <userinput>mount -t reiserfs <replaceable>/dev/ad1s1</replaceable> <replaceable>/mnt</replaceable></userinput></screen>
</sect2>
</sect1>
<!--
<sect1>
<title>Device File System</title>
</sect1>
<sect1>
<title>DOS and NTFS File Systems</title>
<para>This is a good section for those who transfer files, using
USB devices, from Windows to FreeBSD and vice-versa. My camera,
and many other cameras I have seen default to using FAT16. There
is (was?) a kde utility, I think called kamera, that could be used
to access camera devices. A section on this would be useful.</para>
<para>XXXTR: Though! The disks chapter, covers a bit of this and
devfs under its USB devices. It leaves a lot to be desired though,
see:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/usb-disks.html
It may be better to flesh out that section a bit more. Add the
word "camera" to it so that others can easily notice.</para>
</sect1>
<sect1>
<title>Linux EXT File System</title>
<para>Probably NOT as useful as the other two, but it requires
knowledge of the existence of the tools. Which are hidden in
the ports collection. Most Linux guys would probably only use
Linux, BSD guys would be smarter and use NFS.</para>
</sect1>
<sect1>
<title>HFS</title>
<para>I think this is the file system used on Apple OSX. There are
tools in the ports collection, and with Apple being a big
FreeBSD supporter and user of our technologies, surely there
is enough cross over to cover this?</para>
</sect1>
-->
</chapter>