Fix typo and renegade parenthesis.

Fix typo and renegade parenthesis in ZFS and virtualization
chapters in the handbook.

PR:         253264
Patch by:   panden(at)gmail.com
main
Sergio Carlavilla Delgado 3 years ago
parent 3c2a5e96f9
commit 1c7144822f

@@ -471,7 +471,7 @@ own vboxnetctl root:vboxusers
perm vboxnetctl 0660
....
-To launch VirtualBox(TM), type from a Xorg session:
+To launch VirtualBox(TM), type from an Xorg session:
[source,bash]
....
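# The launch command itself lies outside this hunk's context. As a hedged
# illustration only, assuming the emulators/virtualbox-ose package installs a
# binary named VirtualBox, the GUI would be started from the Xorg session with:
VirtualBox &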
@@ -982,7 +982,7 @@ These lines are explained in more detail:
<.> Number of virtual CPUs available to the guest VM. For best performance, do not create guests with more virtual CPUs than the number of physical CPUs on the host.
<.> Virtual network adapter. This is the bridge connected to the network interface of the host. The `mac` parameter is the MAC address set on the virtual network interface. This parameter is optional; if no MAC is provided, Xen(TM) will generate a random one.
<.> Full path to the disk, file, or ZFS volume of the disk storage for this VM. Options and multiple disk definitions are separated by commas.
-<.> Defines the Boot medium from which the initial operating system is installed. In this example, it is the ISO imaged downloaded earlier. Consult the Xen(TM) documentation for other kinds of devices and options to set.
+<.> Defines the Boot medium from which the initial operating system is installed. In this example, it is the ISO image downloaded earlier. Consult the Xen(TM) documentation for other kinds of devices and options to set.
<.> Options controlling VNC connectivity to the serial console of the DomU. In order, these are: whether VNC support is active, the IP address on which to listen, the device node for the serial console, and the input method for precise positioning of the mouse and other input methods. `keymap` defines which keymap to use, and is `english` by default.
After the file has been created with all the necessary options, the DomU is created by passing it to `xl create` as a parameter.
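As a point of reference for the options explained above, here is a minimal sketch of such a DomU configuration file. The VM name, memory size, MAC address, bridge, ZFS volume path, and ISO path are illustrative assumptions, not values taken from this commit:

[source,bash]
....
# freebsd.cfg -- hypothetical HVM guest definition
builder = "hvm"                                   # fully virtualized guest
name = "freebsd"                                  # DomU name
memory = 2048                                     # RAM in MB
vcpus = 2                                         # keep <= physical CPUs on the host
vif = [ 'mac=00:16:3e:00:00:01,bridge=bridge0' ]  # optional MAC; bridge on the host NIC
disk = [
  '/dev/zvol/tank/xendisk0,raw,hda,rw',           # ZFS volume used as the system disk
  '/root/FreeBSD-install.iso,raw,hdb:cdrom,r'     # boot medium for the initial install
]
vnc = 1                                           # enable VNC access to the console
vnclisten = "0.0.0.0"                             # listen address for VNC
serial = "pty"                                    # serial console device node
usbdevice = "tablet"                              # precise mouse positioning
....

With the file saved as, say, `freebsd.cfg`, the guest would then be started with `xl create freebsd.cfg`.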

@@ -2224,7 +2224,7 @@ In some specific cases, the smaller 512-byte block size might be preferable. Whe
* [[zfs-advanced-tuning-prefetch_disable]] `_vfs.zfs.prefetch_disable_` - Disable prefetch. A value of `0` is enabled and `1` is disabled. The default is `0`, unless the system has less than 4 GB of RAM. Prefetch works by reading larger blocks than were requested into the <<zfs-term-arc,ARC>> in hopes that the data will be needed soon. If the workload has a large number of random reads, disabling prefetch may actually improve performance by reducing unnecessary reads. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-vdev-trim_on_init]] `_vfs.zfs.vdev.trim_on_init_` - Control whether new devices added to the pool have the `TRIM` command run on them. This ensures the best performance and longevity for SSDs, but takes extra time. If the device has already been secure erased, disabling this setting will make the addition of the new device faster. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-vdev-max_pending]] `_vfs.zfs.vdev.max_pending_` - Limit the number of pending I/O requests per device. A higher value will keep the device command queue full and may give higher throughput. A lower value will reduce latency. This value can be adjusted at any time with man:sysctl[8].
-* [[zfs-advanced-tuning-top_maxinflight]] `_vfs.zfs.top_maxinflight_` - Maxmimum number of outstanding I/Os per top-level <<zfs-term-vdev,vdev>>. Limits the depth of the command queue to prevent high latency. The limit is per top-level vdev, meaning the limit applies to each <<zfs-term-vdev-mirror,mirror>>, <<zfs-term-vdev-raidz,RAID-Z>>, or other vdev independently. This value can be adjusted at any time with man:sysctl[8].
+* [[zfs-advanced-tuning-top_maxinflight]] `_vfs.zfs.top_maxinflight_` - Maximum number of outstanding I/Os per top-level <<zfs-term-vdev,vdev>>. Limits the depth of the command queue to prevent high latency. The limit is per top-level vdev, meaning the limit applies to each <<zfs-term-vdev-mirror,mirror>>, <<zfs-term-vdev-raidz,RAID-Z>>, or other vdev independently. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-l2arc_write_max]] `_vfs.zfs.l2arc_write_max_` - Limit the amount of data written to the <<zfs-term-l2arc,L2ARC>> per second. This tunable is designed to extend the longevity of SSDs by limiting the amount of data written to the device. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-l2arc_write_boost]] `_vfs.zfs.l2arc_write_boost_` - The value of this tunable is added to <<zfs-advanced-tuning-l2arc_write_max,`vfs.zfs.l2arc_write_max`>> and increases the write speed to the SSD until the first block is evicted from the <<zfs-term-l2arc,L2ARC>>. This "Turbo Warmup Phase" is designed to reduce the performance loss from an empty <<zfs-term-l2arc,L2ARC>> after a reboot. This value can be adjusted at any time with man:sysctl[8].
* [[zfs-advanced-tuning-scrub_delay]] `_vfs.zfs.scrub_delay_` - Number of ticks to delay between each I/O during a <<zfs-term-scrub,`scrub`>>. To ensure that a `scrub` does not interfere with the normal operation of the pool, if any other I/O is happening the `scrub` will delay between each command. This value controls the limit on the total IOPS (I/Os Per Second) generated by the `scrub`. The granularity of the setting is determined by the value of `kern.hz` which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective IOPS limit. The default value is `4`, resulting in a limit of: 1000 ticks/sec / 4 = 250 IOPS. Using a value of _20_ would give a limit of: 1000 ticks/sec / 20 = 50 IOPS. The speed of `scrub` is only limited when there has been recent activity on the pool, as determined by <<zfs-advanced-tuning-scan_idle,`vfs.zfs.scan_idle`>>. This value can be adjusted at any time with man:sysctl[8].
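All of the values above are ordinary man:sysctl[8] knobs, so a brief hedged sketch of adjusting two of them at run time; the chosen numbers are only illustrations, not recommendations:

[source,bash]
....
# Disable prefetch (0 = enabled, 1 = disabled) for a random-read-heavy workload.
sysctl vfs.zfs.prefetch_disable=1

# Throttle scrubs: with the default kern.hz of 1000, a delay of 20 ticks
# caps a scrub at roughly 1000 / 20 = 50 IOPS.
sysctl vfs.zfs.scrub_delay=20
....

A value meant to persist across reboots would normally be added to /etc/sysctl.conf instead.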
@@ -2352,7 +2352,7 @@ A configuration of two RAID-Z2 vdevs consisting of 8 disks each would create som
|Snapshots can also be cloned. A clone is a writable version of a snapshot, allowing the file system to be forked as a new dataset. As with a snapshot, a clone initially consumes no additional space. As new data is written to a clone and new blocks are allocated, the apparent size of the clone grows. When blocks are overwritten in the cloned file system or volume, the reference count on the previous block is decremented. The snapshot upon which a clone is based cannot be deleted because the clone depends on it. The snapshot is the parent, and the clone is the child. Clones can be _promoted_, reversing this dependency and making the clone the parent and the previous parent the child. This operation requires no additional space. Since the amount of space used by the parent and child is reversed, existing quotas and reservations might be affected.
|[[zfs-term-checksum]]Checksum
-|Every block that is allocated is also checksummed. The checksum algorithm used is a per-dataset property, see <<zfs-zfs-set,`set`>>. The checksum of each block is transparently validated as it is read, allowing ZFS to detect silent corruption. If the data that is read does not match the expected checksum, ZFS will attempt to recover the data from any available redundancy, like mirrors or RAID-Z). Validation of all checksums can be triggered with <<zfs-term-scrub,`scrub`>>. Checksum algorithms include:
+|Every block that is allocated is also checksummed. The checksum algorithm used is a per-dataset property, see <<zfs-zfs-set,`set`>>. The checksum of each block is transparently validated as it is read, allowing ZFS to detect silent corruption. If the data that is read does not match the expected checksum, ZFS will attempt to recover the data from any available redundancy, like mirrors or RAID-Z. Validation of all checksums can be triggered with <<zfs-term-scrub,`scrub`>>. Checksum algorithms include:
* `fletcher2`
* `fletcher4`
@@ -2422,5 +2422,5 @@ Reservations of any sort are useful in many situations, such as planning and tes
|A pool or vdev in the `Degraded` state has one or more disks that have been disconnected or have failed. The pool is still usable, but if additional devices fail, the pool could become unrecoverable. Reconnecting the missing devices or replacing the failed disks will return the pool to an <<zfs-term-online,Online>> state after the reconnected or new device has completed the <<zfs-term-resilver,Resilver>> process.
|[[zfs-term-faulted]]Faulted
-|A pool or vdev in the `Faulted` state is no longer operational. The data on it can no longer be accessed. A pool or vdev enters the `Faulted` state when the number of missing or failed devices exceeds the level of redundancy in the vdev. If missing devices can be reconnected, the pool will return to a <<zfs-term-online,Online>> state. If there is insufficient redundancy to compensate for the number of failed disks, then the contents of the pool are lost and must be restored from backups.
+|A pool or vdev in the `Faulted` state is no longer operational. The data on it can no longer be accessed. A pool or vdev enters the `Faulted` state when the number of missing or failed devices exceeds the level of redundancy in the vdev. If missing devices can be reconnected, the pool will return to an <<zfs-term-online,Online>> state. If there is insufficient redundancy to compensate for the number of failed disks, then the contents of the pool are lost and must be restored from backups.
|===
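The snapshot/clone and checksum entries in the table above map directly onto a handful of commands; a short hedged sketch, with the pool and dataset names (`mypool`, `mypool/data`) chosen purely for illustration:

[source,bash]
....
# Fork a dataset: snapshot it, clone the snapshot into a writable dataset,
# then promote the clone so the origin/clone dependency is reversed.
zfs snapshot mypool/data@before-upgrade
zfs clone mypool/data@before-upgrade mypool/data-fork
zfs promote mypool/data-fork

# Validate every checksum in the pool, then inspect the resulting
# vdev states (Online, Degraded, Faulted).
zpool scrub mypool
zpool status mypool
....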
