- add a section covering graid3

PR:		docs/164228
Submitted by:	Mark Gladman <mark@legios.org>
Reviewed by:	wblock, bcr
Daniel Gerzo 2012-02-06 17:13:58 +00:00
parent 9e8ef61542
commit f95372cee1
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=38401


@@ -440,6 +440,169 @@ OK? <userinput>boot</userinput></screen>
</sect2>
</sect1>
<sect1 id="GEOM-raid3">
<sect1info>
<authorgroup>
<author>
<firstname>Mark</firstname>
<surname>Gladman</surname>
<contrib>Written by </contrib>
</author>
<author>
<firstname>Daniel</firstname>
<surname>Gerzo</surname>
</author>
</authorgroup>
<authorgroup>
<author>
<firstname>Tom</firstname>
<surname>Rhodes</surname>
<contrib>Based on documentation by </contrib>
</author>
<author>
<firstname>Murray</firstname>
<surname>Stokely</surname>
</author>
</authorgroup>
</sect1info>
<indexterm>
<primary>GEOM</primary>
</indexterm>
<indexterm>
<primary>RAID3</primary>
</indexterm>
<title><acronym>RAID</acronym>3 - Byte-level Striping with Dedicated
Parity</title>
<para><acronym>RAID</acronym>3 is a method used to combine several
disk drives into a single volume with a dedicated parity
disk. In a <acronym>RAID</acronym>3 system, data is split up
into a number of bytes that are written across all the drives in
the array except for one disk, which stores the parity
information. As a result, reading 1024 kB from a
<acronym>RAID</acronym>3 implementation will access all disks in
the array. Performance can be enhanced by using multiple
disk controllers. The <acronym>RAID</acronym>3 array provides a
fault tolerance of one drive and a capacity of 1 - 1/n
times the total capacity of all drives in the array, where n is the
number of hard drives in the array. Such a configuration is
mostly suitable for storing larger files, such as
multimedia files.</para>
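<para>For example, an array of three 1 TB drives
provides 1 - 1/3, or two thirds, of the 3 TB total:
roughly 2 TB of usable space, with the equivalent of one
drive consumed by parity.</para>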
<para>At least 3 physical hard drives are required to build a
<acronym>RAID</acronym>3 array. Each disk must be of the same
size, since I/O requests are interleaved to read or write to
multiple disks in parallel. Also, due to the nature of
<acronym>RAID</acronym>3, the number of drives must be
equal to 3, 5, 9, 17, etc. (2^n + 1).</para>
<sect2>
<title>Creating a Dedicated <acronym>RAID</acronym>3 Array</title>
<para>In &os;, support for <acronym>RAID</acronym>3 is
implemented by the &man.graid3.8; <acronym>GEOM</acronym>
class. Creating a dedicated
<acronym>RAID</acronym>3 array on &os; requires the following
steps.</para>
<note>
<para>While it is theoretically possible to boot from a
<acronym>RAID</acronym>3 array on &os;, that configuration
is uncommon and is not advised.</para>
</note>
<procedure>
<step>
<para>First, load the <filename>geom_raid3.ko</filename>
kernel module by issuing the following command:</para>
<screen>&prompt.root; <userinput>graid3 load</userinput></screen>
<para>Alternatively, it is possible to manually load the
<filename>geom_raid3.ko</filename> module:</para>
<screen>&prompt.root; <userinput>kldload geom_raid3.ko</userinput></screen>
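<para>To confirm that the module is present, the list of
loaded modules can be checked with &man.kldstat.8;; the
<filename>geom_raid3.ko</filename> module should appear in
its output:</para>
<screen>&prompt.root; <userinput>kldstat | grep geom_raid3</userinput></screen>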
</step>
<step>
<para>Create or ensure that a suitable mount point
exists:</para>
<screen>&prompt.root; <userinput>mkdir <replaceable>/multimedia/</replaceable></userinput></screen>
</step>
<step>
<para>Determine the device names for the disks which will be
added to the array, and create the new
<acronym>RAID</acronym>3 device. The final device listed
will act as the dedicated parity disk. This
example uses three unpartitioned
<acronym>ATA</acronym> drives:
<devicename><replaceable>ada1</replaceable></devicename>
and <devicename><replaceable>ada2</replaceable></devicename>
for data, and
<devicename><replaceable>ada3</replaceable></devicename>
for parity.</para>
<screen>&prompt.root; <userinput>graid3 label -v gr0 /dev/ada1 /dev/ada2 /dev/ada3</userinput>
Metadata value stored on /dev/ada1.
Metadata value stored on /dev/ada2.
Metadata value stored on /dev/ada3.
Done.</screen>
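<para>At this point, the state of the new array can be
verified. The <command>status</command> subcommand of
&man.graid3.8; should report the array as
<literal>COMPLETE</literal> with all three components
active:</para>
<screen>&prompt.root; <userinput>graid3 status</userinput></screen>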
</step>
<step>
<para>Partition the newly created
<devicename>gr0</devicename> device and put a UFS file
system on it:</para>
<screen>&prompt.root; <userinput>gpart create -s GPT /dev/raid3/gr0</userinput>
&prompt.root; <userinput>gpart add -t freebsd-ufs /dev/raid3/gr0</userinput>
&prompt.root; <userinput>newfs -j /dev/raid3/gr0p1</userinput></screen>
<para>Many numbers will glide across the screen, and after a
bit of time, the process will be complete. The volume has
been created and is ready to be mounted.</para>
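<para>If desired, the resulting partition layout can be
reviewed with &man.gpart.8;:</para>
<screen>&prompt.root; <userinput>gpart show raid3/gr0</userinput></screen>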
</step>
<step>
<para>The last step is to mount the file system:</para>
<screen>&prompt.root; <userinput>mount /dev/raid3/gr0p1 /multimedia/</userinput></screen>
<para>The <acronym>RAID</acronym>3 array is now ready to
use.</para>
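<para>The mounted file system and its available capacity
can be confirmed with &man.df.1;:</para>
<screen>&prompt.root; <userinput>df -h /multimedia</userinput></screen>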
</step>
</procedure>
<para>Additional configuration is needed to retain the above
setup across system reboots.</para>
<procedure>
<step>
<para>The <filename>geom_raid3.ko</filename> module must be
loaded before the array can be mounted. To automatically
load the kernel module during system initialization, add
the following line to
<filename>/boot/loader.conf</filename>:</para>
<programlisting>geom_raid3_load="YES"</programlisting>
</step>
<step>
<para>The following volume information must be added to
<filename>/etc/fstab</filename> to
automatically mount the array's file system during
the boot process:</para>
<programlisting>/dev/raid3/gr0p1 /multimedia ufs rw 2 2</programlisting>
</step>
</procedure>
</sect2>
</sect1>
<sect1 id="geom-ggate">
<title>GEOM Gate Network Devices</title>