Editorial review of Synopsis/Introduction and RAID0 sections.

Sponsored by:	iXsystems
This commit is contained in:
Dru Lavigne 2014-04-11 15:47:38 +00:00
parent b977a0f818
commit e4f3350c9f
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=44531


@ -26,31 +26,34 @@
<title>Synopsis</title>
<indexterm>
<primary><acronym>GEOM</acronym></primary>
</indexterm>
<indexterm>
<primary><acronym>GEOM</acronym> Disk Framework</primary>
<see><acronym>GEOM</acronym></see>
</indexterm>
<para>In &os;, the <acronym>GEOM</acronym> framework permits access
to and control of classes, such as Master Boot Records and
<acronym>BSD</acronym> labels, through the use
of providers, or the disk devices in <filename>/dev</filename>.
By supporting various software <acronym>RAID</acronym>
configurations, <acronym>GEOM</acronym> transparently provides access to the
operating system and operating system utilities.</para>
<para>This chapter covers the use of disks under the <acronym>GEOM</acronym>
framework in &os;. This includes the major <acronym>RAID</acronym>
control utilities which use the framework for configuration.
This chapter does not provide an in-depth discussion of how
<acronym>GEOM</acronym> handles or controls <acronym>I/O</acronym>,
the underlying subsystem, or code.
This information is provided in &man.geom.4; and its various
<literal>SEE ALSO</literal> references. This chapter is
not a definitive guide to <acronym>RAID</acronym> configurations
and only <acronym>GEOM</acronym>-supported <acronym>RAID</acronym> classifications
are discussed.</para>
<para>After reading this chapter, you will know:</para>
<itemizedlist>
<listitem>
<para>What type of <acronym>RAID</acronym> support is
available through <acronym>GEOM</acronym>.</para>
</listitem>
<listitem>
@ -61,11 +64,11 @@
<listitem>
<para>How to mirror, stripe, encrypt, and remotely connect
disk devices through <acronym>GEOM</acronym>.</para>
</listitem>
<listitem>
<para>How to troubleshoot disks attached to the <acronym>GEOM</acronym>
framework.</para>
</listitem>
</itemizedlist>
@ -74,28 +77,17 @@
<itemizedlist>
<listitem>
<para>Understand how &os; treats disk devices (<xref
linkend="disks"/>).</para>
</listitem>
<listitem>
<para>Know how to configure and install a new kernel
(<xref linkend="kernelconfig"/>).</para>
</listitem>
</itemizedlist>
</sect1>
<sect1 xml:id="geom-striping">
<info>
<title>RAID0 - Striping</title>
@ -119,30 +111,29 @@
</info>
<indexterm>
<primary><acronym>GEOM</acronym></primary>
</indexterm>
<indexterm>
<primary>Striping</primary>
</indexterm>
<para>Striping combines several disk drives into a single volume.
Striping can be performed through the use of hardware
<acronym>RAID</acronym> controllers. The
<acronym>GEOM</acronym> disk subsystem provides software support
for disk striping, also known as <acronym>RAID0</acronym>,
without the need for a <acronym>RAID</acronym> disk
controller.</para>
<para>In <acronym>RAID0</acronym>, data is split into
blocks that are written across all the drives in the array. As
seen in the following illustration,
instead of having to wait on the system to write 256k to one
disk, <acronym>RAID0</acronym> can simultaneously write
64k to each of the four disks in the array, offering superior <acronym>I/O</acronym>
performance. This performance can be enhanced further by using
multiple disk controllers.</para>
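The arithmetic behind that claim can be sketched in plain <command>sh</command>; the 256&nbsp;kB request size and four-disk array are the figures from the text, not tunables:

```shell
# How a single 256 kB write is divided in a four-disk RAID0 array:
# each disk receives an equal share and the writes proceed in parallel.
write_kb=256
disks=4
per_disk=$((write_kb / disks))
echo "${per_disk} kB written to each disk"   # -> 64 kB written to each disk
```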
<mediaobject>
<imageobject>
<imagedata fileref="geom/striping" align="center"/>
@ -153,8 +144,26 @@
</textobject>
</mediaobject>
<para>Each disk in a <acronym>RAID0</acronym> stripe must be of
the same size, since <acronym>I/O</acronym> requests are interleaved to read or
write to multiple disks in parallel.</para>
<note>
<para><acronym>RAID0</acronym> does <emphasis>not</emphasis>
provide any redundancy. This means that if one disk in the
array fails, all of the data on the disks is lost. If the
data is important, implement a backup strategy that regularly
saves backups to a remote system or device.</para>
</note>
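As a minimal local illustration of the backup advice in the note, a directory can be archived to a dated tarball with <command>tar</command>. The paths here are hypothetical examples only; a real strategy would write the archive to a remote system or separate device, for instance with dump(8) or a network copy:

```shell
# Minimal sketch: archive a directory to a dated tarball.
# "/stripe" and the output path are example names only; a real
# backup must land on a different machine or device.
src="/stripe"
out="/var/backups/stripe-$(date +%Y%m%d).tar.gz"
tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
```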
<para>The process for creating a software,
<acronym>GEOM</acronym>-based <acronym>RAID0</acronym> on a &os;
system using commodity disks is as follows. Once the stripe is
created, refer to &man.gstripe.8; for more information on how
to control an existing stripe.</para>
<procedure>
<title>Creating a Stripe of Unformatted <acronym>ATA</acronym> Disks</title>
<step>
<para>Load the <filename>geom_stripe.ko</filename>
@ -167,9 +176,7 @@
<para>Ensure that a suitable mount point exists. If this
volume will become a root partition, then temporarily use
another mount point such as
<filename>/mnt</filename>.</para>
</step>
<step>
@ -199,8 +206,8 @@ Done.</screen>
<filename>/dev/stripe</filename> in
addition to <filename>st0</filename>. Those include
<filename>st0a</filename> and
<filename>st0c</filename>. At this point, a <acronym>UFS</acronym> file system
can be created on <filename>st0a</filename> using
<command>newfs</command>:</para>
<screen>&prompt.root; <userinput>newfs -U /dev/stripe/st0a</userinput></screen>
@ -209,12 +216,14 @@ Done.</screen>
few seconds, the process will be complete. The volume has
been created and is ready to be mounted.</para>
</step>
<step>
<para>To manually mount the created disk stripe:</para>
<screen>&prompt.root; <userinput>mount /dev/stripe/st0a /mnt</userinput></screen>
</step>
<step>
<para>To mount this striped file system automatically during the
boot process, place the volume information in
<filename>/etc/fstab</filename>. In this example, a permanent
@ -224,19 +233,23 @@ Done.</screen>
<screen>&prompt.root; <userinput>mkdir /stripe</userinput>
&prompt.root; <userinput>echo "/dev/stripe/st0a /stripe ufs rw 2 2" \</userinput>
<userinput>&gt;&gt; /etc/fstab</userinput></screen>
</step>
<step>
<para>The <filename>geom_stripe.ko</filename> module must also be
loaded automatically during system initialization by adding a
line to <filename>/boot/loader.conf</filename>:</para>
<screen>&prompt.root; <userinput>echo 'geom_stripe_load="YES"' &gt;&gt; /boot/loader.conf</userinput></screen>
</step>
</procedure>
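As a quick sanity check on the <filename>/etc/fstab</filename> line added above, the entry can be composed in a shell variable first and its fields counted before anything is appended to the file. The entry string is the one from the example; a well-formed fstab line has exactly six fields:

```shell
# The entry from the example above; field order is:
# device, mount point, fstype, options, dump frequency, fsck pass number.
entry="/dev/stripe/st0a /stripe ufs rw 2 2"
set -- $entry                 # split on whitespace into positional parameters
echo "$# fields"              # -> 6 fields
# append only once verified:  echo "$entry" >> /etc/fstab
```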
</sect1>
<sect1 xml:id="geom-mirror">
<title>RAID1 - Mirroring</title>
<indexterm>
<primary><acronym>GEOM</acronym></primary>
</indexterm>
<indexterm>
<primary>Disk Mirroring</primary>
@ -856,7 +869,7 @@ mountroot&gt;</screen>
</info>
<indexterm>
<primary><acronym>GEOM</acronym></primary>
</indexterm>
<indexterm>
<primary>Software RAID Devices</primary>
@ -1193,7 +1206,7 @@ raid/r0 OPTIMAL ada0 (ACTIVE (ACTIVE))
</info>
<indexterm>
<primary><acronym>GEOM</acronym></primary>
</indexterm>
<indexterm>
<primary>RAID3</primary>
@ -1325,9 +1338,9 @@ Done.</screen>
</sect1>
<sect1 xml:id="geom-ggate">
<title><acronym>GEOM</acronym> Gate Network Devices</title>
<para><acronym>GEOM</acronym> supports the remote use of devices, such as disks,
CD-ROMs, and files through the use of the gate utilities.
This is similar to <acronym>NFS</acronym>.</para>
@ -1373,7 +1386,7 @@ ggate0
<title>Labeling Disk Devices</title>
<indexterm>
<primary><acronym>GEOM</acronym></primary>
</indexterm>
<indexterm>
<primary>Disk Labels</primary>
@ -1579,10 +1592,10 @@ ufsid/486b6fc16926168e N/A ad4s1f</screen>
</sect1>
<sect1 xml:id="geom-gjournal">
<title>UFS Journaling Through <acronym>GEOM</acronym></title>
<indexterm>
<primary><acronym>GEOM</acronym></primary>
</indexterm>
<indexterm>
<primary>Journaling</primary>