White space fix only. Translators can ignore.

Sponsored by:	iXsystems
Dru Lavigne 2014-04-08 15:48:46 +00:00
parent 0cb660bab4
commit ec56d937f3
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=44487


@ -530,7 +530,7 @@ add path 'da*' mode 0660 group operator</programlisting>
<note>
<para>If <acronym>SCSI</acronym> disks are installed in the
system, change the second line as follows:</para>
<programlisting>add path 'da[3-9]*' mode 0660 group operator</programlisting>
@ -559,11 +559,12 @@ add path 'da*' mode 0660 group operator</programlisting>
system is to be mounted. This directory needs to be owned by
the user that is to mount the file system. One way to do that
is for <systemitem class="username">root</systemitem> to
create a subdirectory owned by that user as <filename
class="directory">/mnt/<replaceable>username</replaceable></filename>.
In the following example, replace
<replaceable>username</replaceable> with the login name of the
user and <replaceable>usergroup</replaceable> with the user's
primary group:</para>
<screen>&prompt.root; <userinput>mkdir /mnt/<replaceable>username</replaceable></userinput>
&prompt.root; <userinput>chown <replaceable>username</replaceable>:<replaceable>usergroup</replaceable> /mnt/<replaceable>username</replaceable></userinput></screen>
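<para>The user can then mount removable media on that
directory. As a minimal sketch, assuming the media appears as
<filename>/dev/da0s1</filename> and carries an
<acronym>MSDOS</acronym> file system (the device name and file
system type will vary):</para>

<screen>&prompt.user; <userinput>mount -t msdosfs /dev/da0s1 /mnt/<replaceable>username</replaceable></userinput></screen>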
@ -893,8 +894,8 @@ scsibus1:
<title><acronym>ATAPI</acronym> Drives</title>
<note>
<para>With the help of the <link
linkend="atapicam">ATAPI/CAM module</link>,
<command>cdda2wav</command> can also be used on
<acronym>ATAPI</acronym> drives. This tool is usually a
better choice for most users, as it supports jitter
@ -905,11 +906,11 @@ scsibus1:
<step>
<para>The <acronym>ATAPI</acronym> <acronym>CD</acronym>
driver makes each track available as
<filename>/dev/acd<replaceable>d</replaceable>t<replaceable>nn</replaceable></filename>,
where <replaceable>d</replaceable> is the drive number,
and <replaceable>nn</replaceable> is the track number
written with two decimal digits, prefixed with zero as
needed. So the first track on the first disk is
<filename>/dev/acd0t01</filename>, the second is
<filename>/dev/acd0t02</filename>, the third is
<filename>/dev/acd0t03</filename>, and so on.</para>
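<para>Each track can then be extracted with &man.dd.1;. As a
sketch, assuming the first drive and an arbitrary output file
name, and using the 2352&nbsp;byte raw audio sector
size:</para>

<screen>&prompt.root; <userinput>dd if=/dev/acd0t01 of=track01.cdr bs=2352</userinput></screen>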
@ -1173,69 +1174,69 @@ cd0: Attempt to query device size failed: NOT READY, Medium not present - tray c
<secondary>burning</secondary>
</indexterm>
<para>Compared to the <acronym>CD</acronym>, the
<acronym>DVD</acronym> is the next generation of optical media
storage technology. The <acronym>DVD</acronym> can hold more
data than any <acronym>CD</acronym> and is the standard for
video publishing.</para>
<para>Five physical recordable formats can be defined for a
recordable <acronym>DVD</acronym>:</para>
<itemizedlist>
<listitem>
<para>DVD-R: This was the first <acronym>DVD</acronym>
recordable format available. The DVD-R standard is defined
by the <link
xlink:href="http://www.dvdforum.com/forum.shtml"><acronym>DVD</acronym>
Forum</link>. This format is write once.</para>
</listitem>
<listitem>
<para><acronym>DVD-RW</acronym>: This is the rewritable
version of the DVD-R standard. A
<acronym>DVD-RW</acronym> can be rewritten about 1000
times.</para>
</listitem>
<listitem>
<para><acronym>DVD-RAM</acronym>: This is a rewritable format
which can be seen as a removable hard drive. However, this
media is not compatible with most
<acronym>DVD-ROM</acronym> drives and DVD-Video players as
only a few <acronym>DVD</acronym> writers support the
<acronym>DVD-RAM</acronym> format. Refer to <xref
linkend="creating-dvd-ram"/> for more information on
<acronym>DVD-RAM</acronym> use.</para>
</listitem>
<listitem>
<para><acronym>DVD+RW</acronym>: This is a rewritable format
defined by the <link
xlink:href="http://www.dvdrw.com/"><acronym>DVD+RW</acronym>
Alliance</link>. A <acronym>DVD+RW</acronym> can be
rewritten about 1000 times.</para>
</listitem>
<listitem>
<para>DVD+R: This format is the write once variation of the
<acronym>DVD+RW</acronym> format.</para>
</listitem>
</itemizedlist>
<para>A single layer recordable <acronym>DVD</acronym> can hold up
to 4,700,000,000&nbsp;bytes, which is actually 4.38&nbsp;GB or
4485&nbsp;MB, as 1 kilobyte is 1024 bytes.</para>
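<para>As a quick check of that arithmetic, &man.bc.1; can
convert the raw byte count into gigabytes (a sketch; the
result is truncated to three decimal places and rounds to the
4.38&nbsp;GB figure above):</para>

<screen>&prompt.user; <userinput>echo "scale=3; 4700000000 / 1024 / 1024 / 1024" | bc</userinput>
4.377</screen>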
<note>
<para>A distinction must be made between the physical media and
the application. For example, a DVD-Video is a specific file
layout that can be written on any recordable
<acronym>DVD</acronym> physical media such as DVD-R, DVD+R, or
<acronym>DVD-RW</acronym>. Before choosing the type of media,
ensure that both the burner and the DVD-Video player are
compatible with the media under consideration.</para>
</note>
<sect2>
<title>Configuration</title>
@ -1540,7 +1541,8 @@ cd0: Attempt to query device size failed: NOT READY, Medium not present - tray c
<title>For More Information</title>
<para>To obtain more information about a <acronym>DVD</acronym>,
use <command>dvd+rw-mediainfo
<replaceable>/dev/cd0</replaceable></command> while the disc
is in the specified drive.</para>
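<para>For example, to query the first <acronym>CD</acronym>
device:</para>

<screen>&prompt.root; <userinput>dvd+rw-mediainfo /dev/cd0</userinput></screen>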
<para>More information about
@ -2067,7 +2069,7 @@ cd0: Attempt to query device size failed: NOT READY, Medium not present - tray c
</itemizedlist>
<indexterm><primary>livefs
<acronym>CD</acronym></primary></indexterm>
<para>Store this printout and a copy of the installation media
in a secure location. Should an emergency restore be
@ -2754,8 +2756,8 @@ Filesystem 1K-blocks Used Avail Capacity Mounted on
<xref linkend="disks-adding"/>. For the purposes of this
example, a new hard drive partition has been added as
<filename>/dev/ad4s1c</filename> and
<filename>/dev/ad0s1<replaceable>*</replaceable></filename>
represents the existing standard &os; partitions.</para>
<screen>&prompt.root; <userinput>ls /dev/ad*</userinput>
/dev/ad0 /dev/ad0s1b /dev/ad0s1e /dev/ad4s1
@ -2868,7 +2870,8 @@ sector_size = 2048
<note>
<para>&man.newfs.8; must be performed on an attached
<application>gbde</application> partition which is
identified by a
<filename><replaceable>*</replaceable>.bde</filename>
extension to the device name.</para>
</note>
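<para>As a sketch, a <acronym>UFS</acronym> file system with
soft updates could be created on the attached partition from
this example (the device name follows the
<filename>/dev/ad4s1c</filename> partition used
above):</para>

<screen>&prompt.root; <userinput>newfs -U /dev/ad4s1c.bde</userinput></screen>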
</step>
@ -3297,7 +3300,8 @@ Device 1K-blocks Used Avail Capacity
<sect1 xml:id="disks-hast">
<info>
<title>Highly Available Storage
(<acronym>HAST</acronym>)</title>
<authorgroup>
<author>
@ -3348,57 +3352,56 @@ Device 1K-blocks Used Avail Capacity
<para>High availability is one of the main requirements in
serious business applications and highly-available storage is a
key component in such environments. In &os;, the Highly
Available STorage (<acronym>HAST</acronym>) framework allows
transparent storage of the same data across several physically
separated machines connected by a <acronym>TCP/IP</acronym>
network. <acronym>HAST</acronym> can be understood as a
network-based RAID1 (mirror), and is similar to the DRBD&reg;
storage system used in the GNU/&linux; platform. In combination
with other high-availability features of &os; like
<acronym>CARP</acronym>, <acronym>HAST</acronym> makes it
possible to build a highly-available storage cluster that is
resistant to hardware failures.</para>
<para>The following are the main features of
<acronym>HAST</acronym>:</para>
<itemizedlist>
<listitem>
<para>Can be used to mask <acronym>I/O</acronym> errors on
local hard drives.</para>
</listitem>
<listitem>
<para>File system agnostic as it works with any file system
supported by &os;.</para>
</listitem>
<listitem>
<para>Efficient and quick resynchronization as only the blocks
that were modified during the downtime of a node are
synchronized.</para>
</listitem>
<!--
<listitem>
<para>Has several synchronization modes to allow for fast
failover.</para>
</listitem>
-->
<listitem>
<para>Can be used in an already deployed environment to add
additional redundancy.</para>
</listitem>
<listitem>
<para>Together with <acronym>CARP</acronym>,
<application>Heartbeat</application>, or other tools, it can
be used to build a robust and durable storage system.</para>
</listitem>
</itemizedlist>
<para>After reading this section, you will know:</para>
@ -3442,48 +3445,47 @@ Device 1K-blocks Used Avail Capacity
<para>The <acronym>HAST</acronym> project was sponsored by The
&os; Foundation with support from <link
xlink:href="http://www.omc.net/">http://www.omc.net/</link> and <link
xlink:href="http://www.omc.net/">http://www.omc.net/</link>
and <link
xlink:href="http://www.transip.nl/">http://www.transip.nl/</link>.</para>
<sect2>
<title>HAST Operation</title>
<para><acronym>HAST</acronym> provides synchronous block-level
replication between two physical machines: the
<emphasis>primary</emphasis>, also known as the
<emphasis>master</emphasis> node, and the
<emphasis>secondary</emphasis>, or <emphasis>slave</emphasis>
node. These two machines together are referred to as a
cluster.</para>
<para>Since <acronym>HAST</acronym> works in a primary-secondary
configuration, it allows only one of the cluster nodes to be
active at any given time. The primary node, also called
<emphasis>active</emphasis>, is the one which will handle all
the <acronym>I/O</acronym> requests to
<acronym>HAST</acronym>-managed devices. The secondary node
is automatically synchronized from the primary node.</para>
<para>The physical components of the <acronym>HAST</acronym>
system are the local disk on the primary node, and the disk on
the remote, secondary node.</para>
<para><acronym>HAST</acronym> operates synchronously on a block
level, making it transparent to file systems and applications.
<acronym>HAST</acronym> provides regular GEOM providers in
<filename>/dev/hast/</filename> for use by other tools or
applications. There is no difference between using
<acronym>HAST</acronym>-provided devices and raw disks or
partitions.</para>
<para>Each write, delete, or flush operation is sent to both the
local disk and to the remote disk over
<acronym>TCP/IP</acronym>. Each read operation is served from
the local disk, unless the local disk is not up-to-date or an
<acronym>I/O</acronym> error occurs. In such cases, the read
operation is sent to the secondary node.</para>
<para><acronym>HAST</acronym> tries to provide fast failure
recovery. For this reason, it is important to reduce
@ -3499,30 +3501,31 @@ Device 1K-blocks Used Avail Capacity
<itemizedlist>
<listitem>
<para><emphasis>memsync</emphasis>: This mode reports a
write operation as completed when the local write
operation is finished and when the remote node
acknowledges data arrival, but before actually storing the
data. The data on the remote node will be stored directly
after sending the acknowledgement. This mode is intended
to reduce latency, but still provides good
reliability.</para>
</listitem>
<listitem>
<para><emphasis>fullsync</emphasis>: This mode reports a
write operation as completed when both the local write and
the remote write complete. This is the safest and the
slowest replication mode. This mode is the
default.</para>
</listitem>
<listitem>
<para><emphasis>async</emphasis>: This mode reports a write
operation as completed when the local write completes.
This is the fastest and the most dangerous replication
mode. It should only be used when replicating to a
distant node where latency is too high for other
modes.</para>
</listitem>
</itemizedlist>
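<para>The replication mode is selected per resource with the
<literal>replication</literal> keyword in &man.hast.conf.5;.
The following is a minimal sketch, not a complete
configuration: the resource name, node names, addresses, and
device are illustrative, and the running &man.hastd.8; must
support the chosen mode:</para>

<programlisting>resource <replaceable>shared</replaceable> {
	# fullsync is the default; memsync trades a small
	# risk window for lower latency
	replication memsync

	on <replaceable>nodea</replaceable> {
		local <replaceable>/dev/ad6</replaceable>
		remote <replaceable>172.16.0.2</replaceable>
	}
	on <replaceable>nodeb</replaceable> {
		local <replaceable>/dev/ad6</replaceable>
		remote <replaceable>172.16.0.1</replaceable>
	}
}</programlisting>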
</sect2>
@ -3541,8 +3544,8 @@ Device 1K-blocks Used Avail Capacity
</listitem>
<listitem>
<para>The userland management utility,
&man.hastctl.8;.</para>
</listitem>
<listitem>
@ -3553,26 +3556,26 @@ Device 1K-blocks Used Avail Capacity
</itemizedlist>
<para>Users who prefer to statically build
<literal>GEOM_GATE</literal> support into the kernel should
add this line to the custom kernel configuration file, then
rebuild the kernel using the instructions in <xref
linkend="kernelconfig"/>:</para>
<programlisting>options GEOM_GATE</programlisting>
<para>The following example describes how to configure two nodes
in master-slave/primary-secondary operation using
<acronym>HAST</acronym> to replicate the data between the two.
The nodes will be called <literal>hasta</literal>, with an
<acronym>IP</acronym> address of
<literal>172.16.0.1</literal>, and <literal>hastb</literal>,
with an <acronym>IP</acronym> address of
<literal>172.16.0.2</literal>. Both nodes will have a
dedicated hard drive <filename>/dev/ad6</filename> of the same
size for <acronym>HAST</acronym> operation. The
<acronym>HAST</acronym> pool, sometimes referred to as a
resource or the <acronym>GEOM</acronym> provider in <filename
class="directory">/dev/hast/</filename>, will be called
<literal>test</literal>.</para>
<para>Configuration of <acronym>HAST</acronym> is done using
@ -3596,14 +3599,14 @@ Device 1K-blocks Used Avail Capacity
<tip>
<para>It is also possible to use host names in the
<literal>remote</literal> statements if the hosts are
resolvable and defined either in
<filename>/etc/hosts</filename> or in the local
<acronym>DNS</acronym>.</para>
</tip>
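<para>For example, a <literal>remote</literal> entry can refer
to the peer by name rather than by address (a sketch, assuming
the names resolve on both nodes):</para>

<programlisting>on hasta {
	local /dev/ad6
	remote hastb
}</programlisting>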
<para>Once the configuration exists on both nodes, the
<acronym>HAST</acronym> pool can be created. Run these
commands on both nodes to place the initial metadata onto the
local disk and to start &man.hastd.8;:</para>
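<!-- A sketch of the commands referred to above:
hastctl(8) create writes the initial metadata, and
hastd(8) is started through its rc(8) script. -->
<screen>&prompt.root; <userinput>hastctl create <replaceable>test</replaceable></userinput>
&prompt.root; <userinput>service hastd onestart</userinput></screen>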
@ -3615,17 +3618,16 @@ Device 1K-blocks Used Avail Capacity
providers with an existing file system or to convert an
existing storage to a <acronym>HAST</acronym>-managed pool.
This procedure needs to store some metadata on the provider
and the required space will not be available on an existing
provider.</para>
</note>
<para>A HAST node's <literal>primary</literal> or
<literal>secondary</literal> role is selected by an
administrator, or software like
<application>Heartbeat</application>, using &man.hastctl.8;.
On the primary node, <literal>hasta</literal>, issue this
command:</para>
<screen>&prompt.root; <userinput>hastctl role primary <replaceable>test</replaceable></userinput></screen>
@ -3634,25 +3636,25 @@ Device 1K-blocks Used Avail Capacity
<screen>&prompt.root; <userinput>hastctl role secondary <replaceable>test</replaceable></userinput></screen>
<para>Verify the result by running <command>hastctl</command> on
each node:</para>
<screen>&prompt.root; <userinput>hastctl status <replaceable>test</replaceable></userinput></screen>
<para>Check the <literal>status</literal> line in the output.
If it says <literal>degraded</literal>, something is wrong
with the configuration file. It should say
<literal>complete</literal> on each node, meaning that the
synchronization between the nodes has started. The
synchronization completes when <command>hastctl
status</command> reports 0 bytes of <literal>dirty</literal>
extents.</para>
<para>The next step is to create a file system on the
<acronym>GEOM</acronym> provider and mount it. This must be
done on the <literal>primary</literal> node. Creating the
file system can take a few minutes, depending on the size of
the hard drive. This example creates a <acronym>UFS</acronym>
file system on <filename>/dev/hast/test</filename>:</para>
<screen>&prompt.root; <userinput>newfs -U /dev/hast/<replaceable>test</replaceable></userinput>