* Clarify / Remove first person exposition and personal anecdotes.

PR:		docs/50664
This commit is contained in:
Murray Stokely 2003-05-04 11:00:06 +00:00
parent 90ff2cb926
commit 771e009df1
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=16777

@@ -345,57 +345,57 @@
 <author>
   <firstname>Christopher</firstname>
   <surname>Shumway</surname>
-  <contrib>Written by </contrib>
+  <contrib>Original work by </contrib>
 </author>
 </authorgroup>
 <authorgroup>
 <author>
   <firstname>Valentino</firstname>
   <surname>Vaschetto</surname>
-  <contrib>Marked up by </contrib>
+  <contrib>Original markup by </contrib>
+</author>
+</authorgroup>
+<authorgroup>
+<author>
+  <firstname>Jim</firstname>
+  <surname>Brown</surname>
+  <contrib>Revised by </contrib>
 </author>
 </authorgroup>
 </sect3info>
-<title>ccd (Concatenated Disk Configuration)</title>
+<title>Concatenated Disk Driver (CCD) Configuration</title>

 <para>When choosing a mass storage solution the most important
-  factors to consider are speed, reliability, and cost.  It is very
-  rare to have all three in favor; normally a fast, reliable mass
+  factors to consider are speed, reliability, and cost.  It is
+  rare to have all three in balance; normally a fast, reliable mass
   storage device is expensive, and to cut back on cost either speed
-  or reliability must be sacrificed.  In designing my system, I
-  ranked the requirements by most favorable to least favorable.  In
-  this situation, cost was the biggest factor.  I needed a lot of
-  storage for a reasonable price.  The next factor, speed, is not
-  quite as important, since most of the usage would be over a one
-  hundred megabit switched Ethernet, and that would most likely be
-  the bottleneck.  The ability to spread the file input/output
-  operations out over several disks would be more than enough speed
-  for this network.  Finally, the consideration of reliability was
-  an easy one to answer.  All of the data being put on this mass
-  storage device was already backed up on CD-R's.  This drive was
-  primarily here for online live storage for easy access, so if a
-  drive went bad, I could just replace it, rebuild the file system,
-  and copy back the data from CD-R's.</para>
-
-<para>To sum it up, I need something that will give me the most
-  amount of storage space for my money.  The cost of large IDE disks
-  are cheap these days.  I found a place that was selling Western
-  Digital 30.7GB 5400 RPM IDE disks for about one-hundred and thirty
-  US dollars.  I bought three of them, giving me approximately
-  ninety gigabytes of online storage.</para>
+  or reliability must be sacrificed.</para>
+
+<para>In designing the system described below, cost was chosen
+  as the most important factor, followed by speed, then reliability.
+  Data transfer speed for this system is ultimately
+  constrained by the network.  And while reliability is very important,
+  the CCD drive described below serves online data that is already
+  fully backed up on CD-R's and can easily be replaced.</para>
+
+<para>Defining your own requirements is the first step
+  in choosing a mass storage solution.  If your requirements prefer
+  speed or reliability over cost, your solution will differ from
+  the system described in this section.</para>

 <sect4 id="ccd-installhw">
   <title>Installing the Hardware</title>
-  <para>I installed the hard drives in a system that already
-    had one IDE disk in as the system disk.  The ideal solution
-    would be for each IDE disk to have its own IDE controller
-    and cable, but without fronting more costs to acquire a dual
-    IDE controller this would not be a possibility.  So, I
-    jumpered two disks as slaves, and one as master.  One went
-    on the first IDE controller as a slave to the system disk,
-    and the other two where slave/master on the secondary IDE
-    controller.</para>
+  <para>In addition to the IDE system disk, three Western
+    Digital 30GB, 5400 RPM IDE disks form the core
+    of the CCD disk described below, providing approximately
+    90GB of online storage.  Ideally,
+    each IDE disk would have its own IDE controller
+    and cable, but to minimize cost, additional
+    IDE controllers were not used.  Instead the disks were
+    configured with jumpers so that each IDE controller has
+    one master and one slave.</para>

   <para>Upon reboot, the system BIOS was configured to
     automatically detect the disks attached.  More importantly,
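Reviewer's note: the capacity figure in the rewritten paragraph is simple arithmetic, since ccd concatenation sums the member disks' capacities with no parity overhead. A quick shell check using the example's numbers (three 30GB drives; illustrative only, not part of the commit):

```shell
# ccd concatenation: total capacity is the sum of the member disk sizes.
# The numbers are the three 30GB example drives, not a recommendation.
disks=3
size_gb=30
echo "$(( disks * size_gb ))GB"
```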
@@ -406,74 +406,75 @@ ad1: 29333MB &lt;WDC WD307AA&gt; [59598/16/63] at ata0-slave UDMA33
 ad2: 29333MB &lt;WDC WD307AA&gt; [59598/16/63] at ata1-master UDMA33
 ad3: 29333MB &lt;WDC WD307AA&gt; [59598/16/63] at ata1-slave UDMA33</programlisting>

-<para>At this point, if FreeBSD does not detect the disks, be
-  sure that you have jumpered them correctly.  I have heard
-  numerous reports with problems using cable select instead of
-  true slave/master configuration.</para>
-
-<para>The next consideration was how to attach them as part of
-  the file system.  I did a little research on &man.vinum.8;
-  (<xref linkend="vinum-vinum">) and
-  &man.ccd.4;.  In this particular configuration, &man.ccd.4;
-  appeared to be a better choice mainly because it has fewer
-  parts.  Less parts tends to indicate less chance of breakage.
-  Vinum appears to be a bit of an overkill for my needs.</para>
+<note><para>If FreeBSD does not detect all the disks, ensure
+  that you have jumpered them correctly.  Most IDE drives
+  also have a <quote>Cable Select</quote> jumper.  This is
+  <emphasis>not</emphasis> the jumper for the master/slave
+  relationship.  Consult the drive documentation for help in
+  identifying the correct jumper.</para></note>
+
+<para>Next, consider how to attach them as part of the file
+  system.  You should research both &man.vinum.8; (<xref
+  linkend="vinum-vinum">) and &man.ccd.4;.  In this
+  particular configuration, &man.ccd.4; was chosen.</para>
 </sect4>

 <sect4 id="ccd-setup">
   <title>Setting up the CCD</title>
-  <para><application>CCD</application> allows me to take
+  <para><application>CCD</application> allows you to take
     several identical disks and concatenate them into one
     logical file system.  In order to use
-    <application>ccd</application>, I need a kernel with
-    <application>ccd</application> support built into it.  I
-    added this line to my kernel configuration file and rebuilt
-    the kernel:</para>
+    <application>ccd</application>, you need a kernel with
+    <application>ccd</application> support built in.
+    Add this line to your kernel configuration file, rebuild, and
+    reinstall the kernel:</para>

 <programlisting>pseudo-device ccd 4</programlisting>

 <note><para>In FreeBSD 5.0, it is not necessary to specify
   a number of ccd devices, as the ccd device driver is now
-  cloning -- new device instances will automatically be
+  self-cloning -- new device instances will automatically be
   created on demand.</para></note>

 <para><application>ccd</application> support can also be
-  loaded as a kernel loadable module in FreeBSD 4.0 or
+  loaded as a kernel loadable module in FreeBSD 3.0 or
   later.</para>

-<para>To set up <application>ccd</application>, first I need
-  to disklabel the disks.  Here is how I disklabeled
-  them:</para>
+<para>To set up <application>ccd</application>, you must first use
+  &man.disklabel.8; to label the disks:</para>

 <programlisting>disklabel -r -w ad1 auto
 disklabel -r -w ad2 auto
 disklabel -r -w ad3 auto</programlisting>

-<para>This created a disklabel ad1c, ad2c and ad3c that
+<para>This creates a disklabel for ad1c, ad2c and ad3c that
   spans the entire disk.</para>

-<para>The next step is to change the disklabel type.  To do
-  that I had to edit the disklabel:</para>
+<para>The next step is to change the disklabel type.  You
+  can use <application>disklabel</application> to edit the
+  disks:</para>

 <programlisting>disklabel -e ad1
 disklabel -e ad2
 disklabel -e ad3</programlisting>

-<para>This opened up the current disklabel on each disk
-  respectively in whatever editor the <envar>EDITOR</envar>
-  environment variable was set to, in my case, &man.vi.1;.
-  Inside the editor I had a section like this:</para>
+<para>This opens up the current disklabel on each disk with
+  the editor specified by the <envar>EDITOR</envar>
+  environment variable, typically &man.vi.1;.</para>
+
+<para>An unmodified disklabel will look something like
+  this:</para>

 <programlisting>8 partitions:
 #        size   offset    fstype   [fsize bsize bps/cpg]
   c: 60074784        0    unused        0     0     0   # (Cyl.    0 - 59597)</programlisting>

-<para>I needed to add a new "e" partition for &man.ccd.4; to
-  use.  This usually can be copied of the "c" partition, but
-  the <option>fstype</option> must be <userinput>4.2BSD</userinput>.
-  Once I was done,
-  my disklabel should look like this:</para>
+<para>Add a new <quote>e</quote> partition for &man.ccd.4; to
+  use.  This can usually be copied from the <quote>c</quote> partition,
+  but the <option>fstype</option> <emphasis>must</emphasis>
+  be <userinput>4.2BSD</userinput>.  The disklabel should
+  now look something like this:</para>

 <programlisting>8 partitions:
 #        size   offset    fstype   [fsize bsize bps/cpg]
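Reviewer's note: the new text says the <quote>e</quote> partition can usually be copied from <quote>c</quote> with only the fstype changed. That transformation can be sketched outside the editor with awk; the disklabel line format is the one from the example above, and this is an illustration, not the documented procedure (which uses `disklabel -e`):

```shell
# Sketch: copy the "c:" line of a disklabel, rename it "e:", and set the
# fstype field (the 4th) to 4.2BSD, as the new paragraph describes.
# The sample line is the example's; awk re-emits it with single spaces.
printf '  c: 60074784        0    unused        0     0     0\n' |
  awk '$1 == "c:" { $1 = "e:"; $4 = "4.2BSD"; print }'
```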
@@ -485,12 +486,7 @@ disklabel -e ad3</programlisting>

 <sect4 id="ccd-buildingfs">
   <title>Building the File System</title>
-  <para>Now that I have all of the disks labeled, I needed to
-    build the <application>ccd</application>.  To do that, I
-    used a utility called &man.ccdconfig.8;.
-    <command>ccdconfig</command> takes several arguments, the
-    first argument being the device to configure, in this case,
-    <devicename>/dev/ccd0c</devicename>.  The device node for
+  <para>The device node for
     <devicename>ccd0c</devicename> may not exist yet, so to
     create it, perform the following commands:</para>
@@ -501,58 +497,79 @@ sh MAKEDEV ccd0</programlisting>

 manage device nodes in <filename>/dev</filename>, so use of
 <command>MAKEDEV</command> is not necessary.</para></note>

-<para>The next argument <command>ccdconfig</command> expects
-  is the interleave for the file system.  The interleave
-  defines the size of a stripe in disk blocks, normally five
-  hundred and twelve bytes.  So, an interleave of thirty-two
-  would be sixteen thousand three hundred and eighty-four
-  bytes.</para>
-
-<para>After the interleave comes the flags for
-  <command>ccdconfig</command>.  If you want to enable drive
-  mirroring, you can specify a flag here.  In this
-  configuration, I am not mirroring the
-  <application>ccd</application>, so I left it as zero.</para>
-
-<para>The final arguments to <command>ccdconfig</command>
-  are the devices to place into the array.  Putting it all
-  together I get this command:</para>
-
-<programlisting>ccdconfig ccd0 32 0 /dev/ad1e /dev/ad2e /dev/ad3e</programlisting>
-
-<para>This configures the <application>ccd</application>.
-  I can now &man.newfs.8; the file system.</para>
+<para>Now that you have all of the disks labeled, you must
+  build the <application>ccd</application>.  To do that,
+  use &man.ccdconfig.8;, with options similar to the following:</para>
+
+<programlisting>ccdconfig ccd0<co id="co-ccd-dev"> 32<co id="co-ccd-interleave"> 0<co id="co-ccd-flags"> /dev/ad1e<co id="co-ccd-devs"> /dev/ad2e /dev/ad3e</programlisting>
+
+<para>The use and meaning of each option is shown below:</para>
+
+<calloutlist>
+  <callout arearefs="co-ccd-dev">
+    <para>The first argument is the device to configure, in this case,
+      <devicename>/dev/ccd0c</devicename>.  The <filename>/dev/</filename>
+      portion is optional.</para>
+  </callout>
+
+  <callout arearefs="co-ccd-interleave">
+    <para>The interleave for the file system.  The interleave
+      defines the size of a stripe in disk blocks, each normally 512 bytes.
+      So, an interleave of 32 would be 16,384 bytes.</para>
+  </callout>
+
+  <callout arearefs="co-ccd-flags">
+    <para>Flags for <command>ccdconfig</command>.  If you want to enable drive
+      mirroring, you can specify a flag here.  This
+      configuration does not provide mirroring for
+      <application>ccd</application>, so it is set at 0 (zero).</para>
+  </callout>
+
+  <callout arearefs="co-ccd-devs">
+    <para>The final arguments to <command>ccdconfig</command>
+      are the devices to place into the array.  Use the complete pathname
+      for each device.</para>
+  </callout>
+</calloutlist>
+
+<para>After running <command>ccdconfig</command> the <application>ccd</application>
+  is configured.  A file system can be installed.  Refer to &man.newfs.8;
+  for options, or simply run:</para>

 <programlisting>newfs /dev/ccd0c</programlisting>
 </sect4>

 <sect4 id="ccd-auto">
   <title>Making it all Automatic</title>
-  <para>Finally, if I want to be able to mount the
-    <application>ccd</application>, I need to
-    configure it first.  I write out my current configuration to
+  <para>Generally, you will want to mount the
+    <application>ccd</application> upon each reboot.  To do this, you must
+    configure it first.  Write out your current configuration to
     <filename>/etc/ccd.conf</filename> using the following command:</para>

 <programlisting>ccdconfig -g &gt; /etc/ccd.conf</programlisting>

-<para>When I reboot, the script <command>/etc/rc</command>
-  runs <command>ccdconfig -C</command> if /etc/ccd.conf
+<para>During reboot, the script <command>/etc/rc</command>
+  runs <command>ccdconfig -C</command> if <filename>/etc/ccd.conf</filename>
   exists.  This automatically configures the
   <application>ccd</application> so it can be mounted.</para>

-<para>If you are booting into single user mode, before you can
+<note><para>If you are booting into single user mode, before you can
   <command>mount</command> the <application>ccd</application>, you
   need to issue the following command to configure the
   array:</para>

 <programlisting>ccdconfig -C</programlisting>
+</note>

-<para>Then, we need an entry for the
-  <application>ccd</application> in
+<para>To automatically mount the <application>ccd</application>,
+  place an entry for the <application>ccd</application> in
   <filename>/etc/fstab</filename> so it will be mounted at
-  boot time.</para>
+  boot time:</para>

 <programlisting>/dev/ccd0c /media ufs rw 2 2</programlisting>
 </sect4>
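Reviewer's note: the interleave arithmetic in the rewritten callout (32 blocks of 512 bytes each giving 16,384 bytes) is worth a sanity check; the values are the example's, and 512-byte blocks are assumed as the callout states:

```shell
# Stripe size = interleave (in disk blocks) * block size in bytes.
interleave=32
block_size=512
echo $(( interleave * block_size ))
```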
@@ -569,7 +586,7 @@ sh MAKEDEV ccd0</programlisting>
 storage.  &man.vinum.8; implements the RAID-0, RAID-1 and
 RAID-5 models, both individually and in combination.</para>

-<para>See the <xref linkend="vinum-vinum"> for more
+<para>See <xref linkend="vinum-vinum"> for more
   information about &man.vinum.8;.</para>
 </sect3>
 </sect2>
@@ -581,16 +598,19 @@ sh MAKEDEV ccd0</programlisting>
 <indexterm>
   <primary>RAID</primary>
   <secondary>Hardware</secondary>
 </indexterm>

 <para>FreeBSD also supports a variety of hardware <acronym>RAID</acronym>
-  controllers.  In which case the actual <acronym>RAID</acronym> system
-  is built and controlled by the card itself.  Using an on-card
-  <acronym>BIOS</acronym>, the card will control most of the disk operations
-  itself.  The following is a brief setup using a Promise <acronym>IDE RAID</acronym>
-  controller.  When this card is installed and the system started up, it will
-  display a prompt requesting information.  Follow the on screen instructions
-  to enter the cards setup screen.  From here a user should have the ability to
-  combine all the attached drives.  When doing this, the disk(s) will look like
-  a single drive to FreeBSD.  Other <acronym>RAID</acronym> levels can be setup
-  accordingly.
-</para>
+  controllers.  These devices control a <acronym>RAID</acronym> subsystem
+  without the need for FreeBSD-specific software to manage the
+  array.</para>
+
+<para>Using an on-card <acronym>BIOS</acronym>, the card controls most of the
+  disk operations itself.  The following is a brief setup description using a
+  Promise <acronym>IDE RAID</acronym> controller.  When this card is installed
+  and the system is started up, it displays a prompt requesting information.
+  Follow the instructions to enter the card's setup screen.  From here, you
+  have the ability to combine all the attached drives.  After doing so, the
+  disk(s) will look like a single drive to FreeBSD.  Other
+  <acronym>RAID</acronym> levels can be set up accordingly.</para>
 </sect2>
@@ -611,7 +631,7 @@ ata3: resetting devices .. done
 ad6: hard error reading fsbn 1116119 of 0-7 (ad6 bn 1116119; cn 1107 tn 4 sn 11) status=59 error=40
 ar0: WARNING - mirror lost</programlisting>

-<para>Using &man.atacontrol.8;, check to see how things look:</para>
+<para>Using &man.atacontrol.8;, check for further information:</para>

 <screen>&prompt.root; <userinput>atacontrol list</userinput>
 ATA channel 0:
@@ -659,8 +679,9 @@ Slave: no device present</screen>
 </step>

 <step>
-  <para>The rebuild command hangs until complete, its possible to open another
-    terminal and check on the progress by issuing the following command:</para>
+  <para>The rebuild command hangs until complete.  However, it is possible to open another
+    terminal (using <keycombo action="simul"><keycap>Alt</keycap> <keycap>F<replaceable>n</replaceable></keycap></keycombo>)
+    and check on the progress by issuing the following command:</para>

 <screen>&prompt.root; <userinput>dmesg | tail -10</userinput>
 [output removed]