Correct misuse of "zpool".

PR:		206940
Submitted by:	Shawn Debnath <sd@beastie.io>
Differential Revision:	https://reviews.freebsd.org/D6163
Warren Block 2016-06-03 18:20:29 +00:00
parent bb71c015e0
commit 0dd71013f3
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=48889

@@ -2265,7 +2265,7 @@ passwd vi.recover
cp: /var/tmp/.zfs/snapshot/after_cp/rc.conf: Read-only file system</screen>
<para>The error reminds the user that snapshots are read-only
-and can not be changed after creation. No files can be
+and cannot be changed after creation. Files cannot be
copied into or removed from snapshot directories because
that would change the state of the dataset they
represent.</para>
@@ -2315,7 +2315,7 @@ camino/home/joe@backup 0K - 87K -</screen>
<para>A typical use for clones is to experiment with a specific
dataset while keeping the snapshot around to fall back to in
-case something goes wrong. Since snapshots can not be
+case something goes wrong. Since snapshots cannot be
changed, a read/write clone of a snapshot is created. After
the desired result is achieved in the clone, the clone can be
promoted to a dataset and the old file system removed. This
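The clone workflow described in this hunk can be sketched as a short command sequence. This is an illustrative sketch, not part of the commit: the dataset name `camino/home/joe` is borrowed from the surrounding Handbook example, the clone name is hypothetical, and the commands assume root on a system with ZFS.

```shell
# Hedged sketch of the snapshot -> clone -> promote workflow.
# Dataset names are illustrative; requires root on a ZFS-enabled system.
zfs snapshot camino/home/joe@backup                  # read-only snapshot to fall back to
zfs clone camino/home/joe@backup camino/home/joenew  # writable clone of the snapshot
# ...experiment safely in the clone...
zfs promote camino/home/joenew                       # make the clone independent of the snapshot
zfs destroy camino/home/joe                          # remove the old file system
```

After `zfs promote`, the parent/child relationship is reversed: the snapshot now belongs to the clone, so the original file system can be destroyed.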
@@ -3461,7 +3461,7 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
combining the traditionally separate roles,
<acronym>ZFS</acronym> is able to overcome previous limitations
that prevented <acronym>RAID</acronym> groups being able to
-grow. Each top level device in a zpool is called a
+grow. Each top level device in a pool is called a
<emphasis>vdev</emphasis>, which can be a simple disk or a
<acronym>RAID</acronym> transformation such as a mirror or
<acronym>RAID-Z</acronym> array. <acronym>ZFS</acronym> file
@@ -3476,7 +3476,7 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
<tgroup cols="2">
<tbody valign="top">
<row>
-<entry xml:id="zfs-term-zpool">zpool</entry>
+<entry xml:id="zfs-term-pool">pool</entry>
<entry>A storage <emphasis>pool</emphasis> is the most
basic building block of <acronym>ZFS</acronym>. A pool
@@ -3534,7 +3534,7 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
pools can be backed by regular files, this is
especially useful for testing and experimentation.
Use the full path to the file as the device path
-in the zpool create command. All vdevs must be
+in <command>zpool create</command>. All vdevs must be
at least 128&nbsp;MB in size.</para>
</listitem>
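The file-backed pool mentioned in this hunk can be sketched as follows. This is an illustrative sketch, not part of the commit: the file path and pool name are hypothetical, and the commands assume root on a system with ZFS.

```shell
# Hedged sketch: creating a test pool backed by a regular file.
# Paths and names are illustrative; requires root on a ZFS-enabled system.
truncate -s 128m /var/tmp/zfsfile    # each file vdev must be at least 128 MB
zpool create testpool /var/tmp/zfsfile   # pass the full path to the file as the device
zpool status testpool                    # verify the file shows up as a vdev
zpool destroy testpool                   # tear the experiment down afterwards
```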
@@ -3641,7 +3641,7 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
<listitem>
<para
xml:id="zfs-term-vdev-cache"><emphasis>Cache</emphasis>
-- Adding a cache vdev to a zpool will add the
+- Adding a cache vdev to a pool will add the
storage of the cache to the <link
linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>.
Cache devices cannot be mirrored. Since a cache
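Attaching a cache (L2ARC) device as described in this hunk can be sketched as follows. This is an illustrative sketch, not part of the commit: the pool and device names are hypothetical, and the commands assume root on a system with ZFS.

```shell
# Hedged sketch: dedicating a disk as an L2ARC cache device.
# Names are illustrative; requires root on a ZFS-enabled system.
zpool add mypool cache ada2      # add ada2 as cache storage for the L2ARC
zpool remove mypool ada2         # cache devices can be removed again at any time
```

Note that cache vdevs cannot be mirrored; losing one only costs cached data, since the L2ARC holds copies of blocks that also live in the pool.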