Improve wording in ZFS chapter

PR:         253075
Patch by:   panden(at)gmail.com
Branch: main
Author: Sergio Carlavilla Delgado
Parent: 8ee5df0179
Commit: 3c2a5e96f9

@@ -514,7 +514,7 @@ A pool that is no longer needed can be destroyed so that the disks can be reused
 There are two cases for adding disks to a zpool: attaching a disk to an existing vdev with `zpool attach`, or adding vdevs to the pool with `zpool add`. Only some <<zfs-term-vdev,vdev types>> allow disks to be added to the vdev after creation.
-A pool created with a single disk lacks redundancy. Corruption can be detected but not repaired, because there is no other copy of the data. The <<zfs-term-copies,copies>> property may be able to recover from a small failure such as a bad sector, but does not provide the same level of protection as mirroring or RAID-Z. Starting with a pool consisting of a single disk vdev, `zpool attach` can be used to add an additional disk to the vdev, creating a mirror. `zpool attach` can also be used to add additional disks to a mirror group, increasing redundancy and read performance. If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second, `gpart backup` and `gpart restore` can be used to make this process easier.
+A pool created with a single disk lacks redundancy. Corruption can be detected but not repaired, because there is no other copy of the data. The <<zfs-term-copies,copies>> property may be able to recover from a small failure such as a bad sector, but does not provide the same level of protection as mirroring or RAID-Z. Starting with a pool consisting of a single disk vdev, `zpool attach` can be used to add an additional disk to the vdev, creating a mirror. `zpool attach` can also be used to add additional disks to a mirror group, increasing redundancy and read performance. If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second. `gpart backup` and `gpart restore` can be used to make this process easier.
 Upgrade the single disk (stripe) vdev _ada0p3_ to a mirror by attaching _ada1p3_:
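As an illustrative sketch of the steps this hunk describes (not part of the commit itself), the partition copy and mirror attach might look like the following; the pool name _mypool_ and the disk names _ada0_/_ada1_ are assumptions:

[source,bash]
....
# Replicate the partition layout of the first disk onto the second
gpart backup ada0 | gpart restore -F ada1

# Attach the new partition to the existing single-disk vdev, turning it into a mirror
zpool attach mypool ada0p3 ada1p3

# Watch the resilver progress and confirm the mirror is healthy
zpool status mypool
....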
@@ -882,7 +882,7 @@ NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALT
 healer 960M 92.5K 960M - - 0% 0% 1.00x ONLINE -
 ....
-Some important data that to be protected from data errors using the self-healing feature is copied to the pool. A checksum of the pool is created for later comparison.
+Some important data that have to be protected from data errors using the self-healing feature are copied to the pool. A checksum of the pool is created for later comparison.
 [source,bash]
 ....
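A hedged illustration of the self-healing test setup referenced in the changed sentence, assuming the pool _healer_ is mounted at /healer and that /home/alice/important-data is a hypothetical stand-in for the data to protect:

[source,bash]
....
# Copy the data to be protected onto the self-healing pool
cp -R /home/alice/important-data /healer

# Record SHA1 checksums of the copied files for later comparison
sha1 /healer/important-data/* > /tmp/checksums.txt
....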
@@ -2343,7 +2343,7 @@ A configuration of two RAID-Z2 vdevs consisting of 8 disks each would create som
 |A ZFS dataset is most often used as a file system. Like most other file systems, a ZFS file system is mounted somewhere in the systems directory hierarchy and contains files and directories of its own with permissions, flags, and other metadata.
 |[[zfs-term-volume]]Volume
-|In additional to regular file system datasets, ZFS can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS virtualization, or exporting iSCSI extents.
+|In addition to regular file system datasets, ZFS can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS virtualization, or exporting iSCSI extents.
 |[[zfs-term-snapshot]]Snapshot
 |The <<zfs-term-cow,copy-on-write>> (COW) design of ZFS allows for nearly instantaneous, consistent snapshots with arbitrary names. After taking a snapshot of a dataset, or a recursive snapshot of a parent dataset that will include all child datasets, new data is written to new blocks, but the old blocks are not reclaimed as free space. The snapshot contains the original version of the file system, and the live file system contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data. The apparent size of the snapshot will grow as the blocks are no longer used in the live file system, but only in the snapshot. These snapshots can be mounted read only to allow for the recovery of previous versions of files. It is also possible to <<zfs-zfs-snapshot,rollback>> a live file system to a specific snapshot, undoing any changes that took place after the snapshot was taken. Each block in the pool has a reference counter which keeps track of how many snapshots, clones, datasets, or volumes make use of that block. As files and snapshots are deleted, the reference count is decremented. When a block is no longer referenced, it is reclaimed as free space. Snapshots can also be marked with a <<zfs-zfs-snapshot,hold>>. When a snapshot is held, any attempt to destroy it will return an `EBUSY` error. Each snapshot can have multiple holds, each with a unique name. The <<zfs-zfs-snapshot,release>> command removes the hold so the snapshot can deleted. Snapshots can be taken on volumes, but they can only be cloned or rolled back, not mounted independently.
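The volume entry in the terminology table above mentions running other file systems on top of ZFS; a minimal sketch of that idea, with the pool name _mypool_, the volume name, and the 4 GB size chosen arbitrarily:

[source,bash]
....
# Create a 4 GB volume; it appears as a block device under /dev/zvol/
zfs create -V 4G mypool/ufsvol

# Put a UFS file system on the volume and mount it
newfs /dev/zvol/mypool/ufsvol
mount /dev/zvol/mypool/ufsvol /mnt
....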

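Similarly, the snapshot, hold, release, and rollback operations described in the snapshot entry can be sketched as follows; the snapshot name and hold tag are illustrative only:

[source,bash]
....
# Take a recursive snapshot of the pool and all child datasets
zfs snapshot -r mypool@before-change

# Place a hold; attempts to destroy the snapshot now fail with EBUSY
zfs hold keepme mypool@before-change

# Release the hold so the snapshot can be destroyed later
zfs release keepme mypool@before-change

# Roll the live dataset back, discarding changes made after the snapshot
zfs rollback mypool@before-change
....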