Put RAID3 chapter before Software RAID Devices chapter.

Sponsored by: iXsystems

parent 2fbb99f2c7
commit 45a04e8f8c

Notes:
    svn2git  2020-12-08 03:00:23 +00:00
    svn path=/head/; revision=44663

1 changed file with 173 additions and 173 deletions
@@ -842,6 +842,179 @@ mountroot></screen>
    </sect2>
  </sect1>

  <sect1 xml:id="geom-raid3">
    <info>
      <title><acronym>RAID</acronym>3 - Byte-level Striping with
        Dedicated Parity</title>

      <authorgroup>
        <author>
          <personname>
            <firstname>Mark</firstname>
            <surname>Gladman</surname>
          </personname>
          <contrib>Written by </contrib>
        </author>

        <author>
          <personname>
            <firstname>Daniel</firstname>
            <surname>Gerzo</surname>
          </personname>
        </author>
      </authorgroup>

      <authorgroup>
        <author>
          <personname>
            <firstname>Tom</firstname>
            <surname>Rhodes</surname>
          </personname>
          <contrib>Based on documentation by </contrib>
        </author>

        <author>
          <personname>
            <firstname>Murray</firstname>
            <surname>Stokely</surname>
          </personname>
        </author>
      </authorgroup>
    </info>

    <indexterm>
      <primary><acronym>GEOM</acronym></primary>
    </indexterm>
    <indexterm>
      <primary>RAID3</primary>
    </indexterm>

    <para><acronym>RAID</acronym>3 is a method used to combine several
      disk drives into a single volume with a dedicated parity disk.
      In a <acronym>RAID</acronym>3 system, data is split up into a
      number of bytes that are written across all the drives in the
      array except for one disk which acts as a dedicated parity disk.
      This means that reading 1024KB from a
      <acronym>RAID</acronym>3 implementation will access all disks in
      the array.  Performance can be enhanced by using multiple disk
      controllers.  The <acronym>RAID</acronym>3 array provides a
      fault tolerance of 1 drive, while providing a capacity of 1 -
      1/n times the total capacity of all drives in the array, where n
      is the number of hard drives in the array.  Such a configuration
      is mostly suitable for storing data of larger sizes such as
      multimedia files.</para>
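
    <!-- Editor's illustration, not part of the original text: a worked
      example of the capacity formula above, assuming hypothetical
      drive sizes. -->
    <para>As a worked example, assume an array of five drives of
      2 TB each, so n = 5: the usable capacity is
      (1 - 1/5) * (5 * 2 TB) = 8 TB, with the equivalent of one
      drive consumed by parity regardless of n.</para>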

    <para>At least 3 physical hard drives are required to build a
      <acronym>RAID</acronym>3 array.  Each disk must be of the same
      size, since I/O requests are interleaved to read or write to
      multiple disks in parallel.  Also, due to the nature of
      <acronym>RAID</acronym>3, the number of drives must be
      equal to 3, 5, 9, 17, and so on, or 2^n + 1.</para>

    <sect2>
      <title>Creating a Dedicated <acronym>RAID</acronym>3
        Array</title>

      <para>In &os;, support for <acronym>RAID</acronym>3 is
        implemented by the &man.graid3.8; <acronym>GEOM</acronym>
        class.  Creating a dedicated
        <acronym>RAID</acronym>3 array on &os; requires the following
        steps.</para>

      <note>
        <para>While it is theoretically possible to boot from a
          <acronym>RAID</acronym>3 array on &os;, that configuration
          is uncommon and is not advised.</para>
      </note>

      <procedure>
        <step>
          <para>First, load the <filename>geom_raid3.ko</filename>
            kernel module by issuing the following command:</para>

          <screen>&prompt.root; <userinput>graid3 load</userinput></screen>

          <para>Alternatively, it is possible to manually load the
            <filename>geom_raid3.ko</filename> module:</para>

          <screen>&prompt.root; <userinput>kldload geom_raid3.ko</userinput></screen>
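
          <!-- Editor's sketch, not in the original commit: one way to
            confirm that the module was loaded (it will not appear here
            if the support is compiled into the kernel). -->
          <para>Whether the module is loaded can be confirmed with
            &man.kldstat.8;:</para>

          <screen>&prompt.root; <userinput>kldstat | grep geom_raid3</userinput></screen>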
        </step>

        <step>
          <para>Create or ensure that a suitable mount point
            exists:</para>

          <screen>&prompt.root; <userinput>mkdir <replaceable>/multimedia/</replaceable></userinput></screen>
        </step>

        <step>
          <para>Determine the device names for the disks which will be
            added to the array, and create the new
            <acronym>RAID</acronym>3 device.  The final device listed
            will act as the dedicated parity disk.  This
            example uses three unpartitioned
            <acronym>ATA</acronym> drives:
            <filename><replaceable>ada1</replaceable></filename>
            and
            <filename><replaceable>ada2</replaceable></filename>
            for data, and
            <filename><replaceable>ada3</replaceable></filename>
            for parity.</para>

          <screen>&prompt.root; <userinput>graid3 label -v gr0 /dev/ada1 /dev/ada2 /dev/ada3</userinput>
Metadata value stored on /dev/ada1.
Metadata value stored on /dev/ada2.
Metadata value stored on /dev/ada3.
Done.</screen>
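
          <!-- Editor's sketch, not in the original commit: the state
            of the new array can be inspected before putting data on
            it. -->
          <para>The state of the new array can be checked at any time
            with:</para>

          <screen>&prompt.root; <userinput>graid3 status</userinput></screen>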
        </step>

        <step>
          <para>Partition the newly created
            <filename>gr0</filename> device and put a UFS file
            system on it:</para>

          <screen>&prompt.root; <userinput>gpart create -s GPT /dev/raid3/gr0</userinput>
&prompt.root; <userinput>gpart add -t freebsd-ufs /dev/raid3/gr0</userinput>
&prompt.root; <userinput>newfs -j /dev/raid3/gr0p1</userinput></screen>

          <para>Many numbers will glide across the screen, and after a
            bit of time, the process will be complete.  The volume has
            been created and is ready to be mounted:</para>

          <screen>&prompt.root; <userinput>mount /dev/raid3/gr0p1 /multimedia/</userinput></screen>

          <para>The <acronym>RAID</acronym>3 array is now ready to
            use.</para>
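
          <!-- Editor's sketch, not in the original commit: verifying
            the mount. -->
          <para>That the file system is mounted with the expected
            capacity can be verified with &man.df.1;:</para>

          <screen>&prompt.root; <userinput>df -h /multimedia</userinput></screen>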
        </step>
      </procedure>

      <para>Additional configuration is needed to retain the above
        setup across system reboots.</para>

      <procedure>
        <step>
          <para>The <filename>geom_raid3.ko</filename> module must be
            loaded before the array can be mounted.  To automatically
            load the kernel module during system initialization, add
            the following line to
            <filename>/boot/loader.conf</filename>:</para>

          <programlisting>geom_raid3_load="YES"</programlisting>
        </step>

        <step>
          <para>The following volume information must be added to
            <filename>/etc/fstab</filename> in order to
            automatically mount the array's file system during
            the system boot process:</para>

          <programlisting>/dev/raid3/gr0p1 /multimedia ufs rw 2 2</programlisting>
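
          <!-- Editor's sketch, not in the original commit: the new
            entry can be exercised without a reboot. -->
          <para>The new entry can be tested without rebooting by
            unmounting the volume and mounting it again by its mount
            point, which makes &man.mount.8; look it up in
            <filename>/etc/fstab</filename>:</para>

          <screen>&prompt.root; <userinput>umount /multimedia</userinput>
&prompt.root; <userinput>mount /multimedia</userinput></screen>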
        </step>
      </procedure>
    </sect2>
  </sect1>

  <sect1 xml:id="geom-graid">
    <info>
      <title>Software <acronym>RAID</acronym> Devices</title>
@@ -1153,179 +1326,6 @@ raid/r0 OPTIMAL ada0 (ACTIVE (ACTIVE))
    </sect2>
  </sect1>

  <sect1 xml:id="geom-ggate">
    <title><acronym>GEOM</acronym> Gate Network Devices</title>