diff --git a/en_US.ISO8859-1/books/handbook/zfs/chapter.xml b/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
index 93700d430d..df0a0ef410 100644
--- a/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
+++ b/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
@@ -1103,8 +1103,8 @@ config:
errors: No known data errors
&prompt.root; zpool list
-NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
-healer   960M  92.5K   960M     0%  1.00x  ONLINE  -
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+healer   960M  92.5K   960M        -         -     0%     0%  1.00x  ONLINE  -
Some important data that is to be protected from data errors
using the self-healing feature is copied to the pool. A
@@ -2436,9 +2436,9 @@ usr/home/joe 1.3G 128k 1.3G 0% /usr/home/joe
replication with these two pools:
&prompt.root; zpool list
-NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
-backup   960M    77K   896M     0%  1.00x  ONLINE  -
-mypool   984M  43.7M   940M     4%  1.00x  ONLINE  -
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+backup   960M    77K   896M        -         -     0%     0%  1.00x  ONLINE  -
+mypool   984M  43.7M   940M        -         -     0%     4%  1.00x  ONLINE  -
The pool named mypool is the
primary pool where data is written to and read from on a
@@ -2479,9 +2479,9 @@ You must redirect standard output.
&prompt.root; zfs send mypool@backup1 > /backup/backup1
&prompt.root; zpool list
-NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
-backup   960M  63.7M   896M     6%  1.00x  ONLINE  -
-mypool   984M  43.7M   940M     4%  1.00x  ONLINE  -
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+backup   960M  63.7M   896M        -         -     0%     6%  1.00x  ONLINE  -
+mypool   984M  43.7M   940M        -         -     0%     4%  1.00x  ONLINE  -
The zfs send transferred all the data
in the snapshot called backup1 to
@@ -2508,9 +2508,9 @@ total estimated size is 50.1M
TIME SENT SNAPSHOT
&prompt.root; zpool list
-NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
-backup   960M  63.7M   896M     6%  1.00x  ONLINE  -
-mypool   984M  43.7M   940M     4%  1.00x  ONLINE  -
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+backup   960M  63.7M   896M        -         -     0%     6%  1.00x  ONLINE  -
+mypool   984M  43.7M   940M        -         -     0%     4%  1.00x  ONLINE  -
Incremental Backups
@@ -2526,9 +2526,9 @@ NAME USED AVAIL REFER MOUNTPOINT
mypool@replica1  5.72M      -  43.6M  -
mypool@replica2      0      -  44.1M  -
&prompt.root; zpool list
-NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
-backup   960M  61.7M   898M     6%  1.00x  ONLINE  -
-mypool   960M  50.2M   910M     5%  1.00x  ONLINE  -
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+backup   960M  61.7M   898M        -         -     0%     6%  1.00x  ONLINE  -
+mypool   960M  50.2M   910M        -         -     0%     5%  1.00x  ONLINE  -
A second snapshot called
replica2 was created. This
@@ -2547,9 +2547,9 @@ total estimated size is 5.02M
TIME SENT SNAPSHOT
&prompt.root; zpool list
-NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
-backup   960M  80.8M   879M     8%  1.00x  ONLINE  -
-mypool   960M  50.2M   910M     5%  1.00x  ONLINE  -
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+backup   960M  80.8M   879M        -         -     0%     8%  1.00x  ONLINE  -
+mypool   960M  50.2M   910M        -         -     0%     5%  1.00x  ONLINE  -
&prompt.root; zfs list
NAME USED AVAIL REFER MOUNTPOINT
@@ -2929,8 +2929,8 @@ mypool/compressed_dataset logicalused 496G -
like this example:
&prompt.root; zpool list
-NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
-pool    2.84G  2.19M  2.83G     0%  1.00x  ONLINE  -
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+pool    2.84G  2.19M  2.83G        -         -     0%     0%  1.00x  ONLINE  -
The DEDUP column shows the actual rate
of deduplication for the pool. A value of
@@ -2946,8 +2946,8 @@ pool 2.84G 2.19M 2.83G 0% 1.00x ONLINE -
Redundant data is detected and deduplicated:
&prompt.root; zpool list
-NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
-pool    2.84G  20.9M  2.82G     0%  3.00x  ONLINE  -
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+pool    2.84G  20.9M  2.82G        -         -     0%     0%  3.00x  ONLINE  -
The DEDUP column shows a factor of
3.00x. Multiple copies of the ports tree