From 0dd71013f34402680145db2009a862da5281a0e4 Mon Sep 17 00:00:00 2001
From: Warren Block
Date: Fri, 3 Jun 2016 18:20:29 +0000
Subject: [PATCH] Correct misusage of "zpool".

PR:             206940
Submitted by:   Shawn Debnath
Differential Revision:  https://reviews.freebsd.org/D6163
---
 en_US.ISO8859-1/books/handbook/zfs/chapter.xml | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/en_US.ISO8859-1/books/handbook/zfs/chapter.xml b/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
index 8fcc5a4d67..c553886946 100644
--- a/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
+++ b/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
@@ -2265,7 +2265,7 @@ passwd vi.recover
 cp: /var/tmp/.zfs/snapshot/after_cp/rc.conf: Read-only file system
 
       The error reminds the user that snapshots are read-only
-      and can not be changed after creation.  No files can be
+      and cannot be changed after creation.  Files cannot be
       copied into or removed from snapshot directories because
       that would change the state of the dataset they
       represent.
@@ -2315,7 +2315,7 @@ camino/home/joe@backup 0K - 87K -
 
       A typical use for clones is to experiment with a specific
       dataset while keeping the snapshot around to fall back to in
-      case something goes wrong.  Since snapshots can not be
+      case something goes wrong.  Since snapshots cannot be
       changed, a read/write clone of a snapshot is created.  After
       the desired result is achieved in the clone, the clone can be
       promoted to a dataset and the old file system removed.  This
@@ -3461,7 +3461,7 @@ vfs.zfs.vdev.cache.size="5M"
       combining the traditionally separate roles, ZFS is able
       to overcome previous limitations that prevented RAID
       groups being able to
-      grow.  Each top level device in a zpool is called a
+      grow.  Each top level device in a pool is called a
       vdev, which can be a simple disk or a RAID
       transformation such as a mirror or RAID-Z array.  ZFS
       file
@@ -3476,7 +3476,7 @@ vfs.zfs.vdev.cache.size="5M"
 
 
-      zpool
+      pool
 
       A storage pool is the most
       basic building block of
       ZFS.  A pool
@@ -3534,7 +3534,7 @@ vfs.zfs.vdev.cache.size="5M"
 
       pools can be backed by regular files, this is
       especially useful for testing and experimentation.
       Use the full path to the file as the device path
-      in the zpool create command.  All vdevs must be
+      in zpool create.  All vdevs must be
       at least 128 MB in size.
 
@@ -3641,7 +3641,7 @@ vfs.zfs.vdev.cache.size="5M"
 
 
       Cache
-      - Adding a cache vdev to a zpool will add the
+      - Adding a cache vdev to a pool will add the
       storage of the cache to the
       L2ARC.  Cache devices cannot
       be mirrored.  Since a cache
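
Note (illustrative, not part of the patch): the clone workflow described
by the hunk at line 2315 maps onto zfs(8) commands like the following
sketch. The camino/home/joe dataset and its @backup snapshot come from
the chapter's examples; the joenew clone name is hypothetical.

    # zfs snapshot camino/home/joe@backup
    # zfs clone camino/home/joe@backup camino/home/joenew
    (experiment inside the writable clone)
    # zfs promote camino/home/joenew
    # zfs destroy -f camino/home/joe
    # zfs rename camino/home/joenew camino/home/joe

After zfs promote, the snapshot the clone was created from belongs to
the promoted clone, so the original dataset can be destroyed and the
clone renamed into its place.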
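
Likewise, the file-backed pool and cache vdev wording corrected above
corresponds to commands such as this sketch (the file path, pool name,
and ada1 cache device are illustrative):

    # truncate -s 128m /tmp/vdev0
    # zpool create testpool /tmp/vdev0
    # zpool add testpool cache ada1
    # zpool status testpool

The file passed to zpool create must be given by its full path and,
like any vdev, be at least 128 MB in size; the cache vdev added with
zpool add extends the L2ARC and cannot be mirrored.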