diff --git a/en_US.ISO8859-1/books/handbook/disks/chapter.sgml b/en_US.ISO8859-1/books/handbook/disks/chapter.sgml index 3706583ebd..04c2c4052d 100644 --- a/en_US.ISO8859-1/books/handbook/disks/chapter.sgml +++ b/en_US.ISO8859-1/books/handbook/disks/chapter.sgml @@ -345,57 +345,57 @@ Christopher Shumway - Written by + Original work by Valentino Vaschetto - Marked up by + Original markup by + + + + + Jim + Brown + Revised by - ccd (Concatenated Disk Configuration) + Concatenated Disk Driver (CCD) Configuration When choosing a mass storage solution the most important - factors to consider are speed, reliability, and cost. It is very - rare to have all three in favor; normally a fast, reliable mass + factors to consider are speed, reliability, and cost. It is + rare to have all three in balance; normally a fast, reliable mass storage device is expensive, and to cut back on cost either speed - or reliability must be sacrificed. In designing my system, I - ranked the requirements by most favorable to least favorable. In - this situation, cost was the biggest factor. I needed a lot of - storage for a reasonable price. The next factor, speed, is not - quite as important, since most of the usage would be over a one - hundred megabit switched Ethernet, and that would most likely be - the bottleneck. The ability to spread the file input/output - operations out over several disks would be more than enough speed - for this network. Finally, the consideration of reliability was - an easy one to answer. All of the data being put on this mass - storage device was already backed up on CD-R's. This drive was - primarily here for online live storage for easy access, so if a - drive went bad, I could just replace it, rebuild the file system, - and copy back the data from CD-R's. + or reliability must be sacrificed. + + In designing the system described below, cost was chosen + as the most important factor, followed by speed, then reliability. 
+ Data transfer speed for this system is ultimately + constrained by the network. And while reliability is very important, + the CCD drive described below serves online data that is already + fully backed up on CD-R's and can easily be replaced. + + Defining your own requirements is the first step + in choosing a mass storage solution. If your requirements prefer + speed or reliability over cost, your solution will differ from + the system described in this section. - To sum it up, I need something that will give me the most - amount of storage space for my money. The cost of large IDE disks - are cheap these days. I found a place that was selling Western - Digital 30.7GB 5400 RPM IDE disks for about one-hundred and thirty - US dollars. I bought three of them, giving me approximately - ninety gigabytes of online storage. Installing the Hardware - I installed the hard drives in a system that already - had one IDE disk in as the system disk. The ideal solution - would be for each IDE disk to have its own IDE controller - and cable, but without fronting more costs to acquire a dual - IDE controller this would not be a possibility. So, I - jumpered two disks as slaves, and one as master. One went - on the first IDE controller as a slave to the system disk, - and the other two where slave/master on the secondary IDE - controller. + In addition to the IDE system disk, three Western + Digital 30GB, 5400 RPM IDE disks form the core + of the CCD disk described below, providing approximately + 90GB of online storage. Ideally, + each IDE disk would have its own IDE controller + and cable, but to minimize cost, additional + IDE controllers were not used. Instead, the disks were + configured with jumpers so that each IDE controller has + one master and one slave. Upon reboot, the system BIOS was configured to automatically detect the disks attached.
More importantly, @@ -406,74 +406,75 @@ ad1: 29333MB <WDC WD307AA> [59598/16/63] at ata0-slave UDMA33 ad2: 29333MB <WDC WD307AA> [59598/16/63] at ata1-master UDMA33 ad3: 29333MB <WDC WD307AA> [59598/16/63] at ata1-slave UDMA33 - At this point, if FreeBSD does not detect the disks, be - sure that you have jumpered them correctly. I have heard - numerous reports with problems using cable select instead of - true slave/master configuration. - - The next consideration was how to attach them as part of - the file system. I did a little research on &man.vinum.8; - () and - &man.ccd.4;. In this particular configuration, &man.ccd.4; - appeared to be a better choice mainly because it has fewer - parts. Less parts tends to indicate less chance of breakage. - Vinum appears to be a bit of an overkill for my needs. + If FreeBSD does not detect all the disks, ensure + that you have jumpered them correctly. Most IDE drives + also have a Cable Select jumper. This is + not the jumper for the master/slave + relationship. Consult the drive documentation for help in + identifying the correct jumper. + + Next, consider how to attach them as part of the file + system. You should research both &man.vinum.8; () and &man.ccd.4;. In this + particular configuration, &man.ccd.4; was chosen. Setting up the CCD - CCD allows me to take - several identical disks and concatenate them into one - logical file system. In order to use - ccd, I need a kernel with - ccd support built into it. I - added this line to my kernel configuration file and rebuilt - the kernel: + CCD allows you to take + several identical disks and concatenate them into one + logical file system. In order to use + ccd, you need a kernel with + ccd support built in. 
+ Add this line to your kernel configuration file, rebuild, and + reinstall the kernel: pseudo-device ccd 4 In FreeBSD 5.0, it is not necessary to specify a number of ccd devices, as the ccd device driver is now - cloning -- new device instances will automatically be + self-cloning -- new device instances will automatically be created on demand. ccd support can also be - loaded as a kernel loadable module in FreeBSD 4.0 or + loaded as a kernel loadable module in FreeBSD 3.0 or later. - To set up ccd, first I need - to disklabel the disks. Here is how I disklabeled - them: + To set up ccd, you must first use + &man.disklabel.8; to label the disks: disklabel -r -w ad1 auto disklabel -r -w ad2 auto disklabel -r -w ad3 auto - This created a disklabel ad1c, ad2c and ad3c that - spans the entire disk. + This creates a disklabel for ad1c, ad2c and ad3c that + spans the entire disk. - The next step is to change the disklabel type. To do - that I had to edit the disklabel: + The next step is to change the disklabel type. You + can use disklabel to edit the + disks: disklabel -e ad1 disklabel -e ad2 disklabel -e ad3 - This opened up the current disklabel on each disk - respectively in whatever editor the EDITOR - environment variable was set to, in my case, &man.vi.1;. - Inside the editor I had a section like this: + This opens up the current disklabel on each disk with + the editor specified by the EDITOR + environment variable, typically &man.vi.1;. + + An unmodified disklabel will look something like + this: 8 partitions: # size offset fstype [fsize bsize bps/cpg] c: 60074784 0 unused 0 0 0 # (Cyl. 0 - 59597) - I needed to add a new "e" partition for &man.ccd.4; to - use. This usually can be copied of the "c" partition, but - the must be 4.2BSD. - Once I was done, - my disklabel should look like this: + Add a new "e" partition for &man.ccd.4; to use. This + can usually be copied from the c partition, + but the fstype must + be 4.2BSD.
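The three disklabel invocations above can be expressed as a small loop. This is a sketch only: the commands are echoed rather than executed, so they can be reviewed before being run for real (as root, against the actual ad1, ad2 and ad3 disks):

```shell
# Print, rather than run, the label command for each data disk in the
# example above; remove the leading "echo" to execute them as root.
for disk in ad1 ad2 ad3; do
    echo disklabel -r -w "${disk}" auto
done
```

The same loop shape works for the disklabel -e editing step by substituting the arguments.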
The disklabel should + now look something like this: 8 partitions: # size offset fstype [fsize bsize bps/cpg] @@ -485,12 +486,7 @@ disklabel -e ad3 Building the File System - Now that I have all of the disks labeled, I needed to - build the ccd. To do that, I - used a utility called &man.ccdconfig.8;. - ccdconfig takes several arguments, the - first argument being the device to configure, in this case, - /dev/ccd0c. The device node for + The device node for ccd0c may not exist yet, so to create it, perform the following commands: @@ -501,58 +497,79 @@ sh MAKEDEV ccd0 manage device nodes in /dev, so use of MAKEDEV is not necessary. - The next argument ccdconfig expects - is the interleave for the file system. The interleave - defines the size of a stripe in disk blocks, normally five - hundred and twelve bytes. So, an interleave of thirty-two - would be sixteen thousand three hundred and eighty-four - bytes. + Now that you have all of the disks labeled, you must + build the ccd. To do that, + use &man.ccdconfig.8;, with options similar to the following: - After the interleave comes the flags for - ccdconfig. If you want to enable drive - mirroring, you can specify a flag here. In this - configuration, I am not mirroring the - ccd, so I left it as zero. + ccdconfig ccd0 32 0 /dev/ad1e /dev/ad2e /dev/ad3e - The final arguments to ccdconfig - are the devices to place into the array. Putting it all - together I get this command: + The use and meaning of each option is shown below: - ccdconfig ccd0 32 0 /dev/ad1e /dev/ad2e /dev/ad3e + + + The first argument is the device to configure, in this case, + /dev/ccd0c. The /dev/ + portion is optional. + - This configures the ccd. - I can now &man.newfs.8; the file system. + + + The interleave for the file system. The interleave + defines the size of a stripe in disk blocks, each normally 512 bytes. + So, an interleave of 32 would be 16,384 bytes. + + + + Flags for ccdconfig. 
+ If you want to enable drive + mirroring, you can specify a flag here. This + configuration does not provide mirroring for + ccd, so it is set at 0 (zero). + + + + The final arguments to ccdconfig + are the devices to place into the array. Use the complete pathname + for each device. + + + + + After running ccdconfig, the ccd + is configured. A file system can now be created. Refer to &man.newfs.8; + for options, or simply run: newfs /dev/ccd0c + Making it all Automatic - Finally, if I want to be able to mount the - ccd, I need to - configure it first. I write out my current configuration to + Generally, you will want to mount the + ccd upon each reboot. To do this, you must + configure it first. Write out your current configuration to /etc/ccd.conf using the following command: ccdconfig -g > /etc/ccd.conf - When I reboot, the script /etc/rc - runs ccdconfig -C if /etc/ccd.conf + During reboot, the script /etc/rc + runs ccdconfig -C if /etc/ccd.conf exists. This automatically configures the ccd so it can be mounted. - If you are booting into single user mode, before you can + If you are booting into single user mode, before you can mount the ccd, you need to issue the following command to configure the array: ccdconfig -C + - Then, we need an entry for the - ccd in + To automatically mount the ccd, + place an entry for the ccd in /etc/fstab so it will be mounted at - boot time. + boot time: /dev/ccd0c /media ufs rw 2 2 @@ -569,7 +586,7 @@ sh MAKEDEV ccd0 storage. &man.vinum.8; implements the RAID-0, RAID-1 and RAID-5 models, both individually and in combination. - See the for more + See for more information about &man.vinum.8;. @@ -581,16 +598,19 @@ sh MAKEDEV ccd0 RAID Hardware + FreeBSD also supports a variety of hardware RAID - controllers. In which case the actual RAID system - is built and controlled by the card itself. Using an on-card - BIOS, the card will control most of the disk operations - itself.
The following is a brief setup using a Promise IDE RAID - controller. When this card is installed and the system started up, it will - display a prompt requesting information. Follow the on screen instructions - to enter the cards setup screen. From here a user should have the ability to - combine all the attached drives. When doing this, the disk(s) will look like - a single drive to FreeBSD. Other RAID levels can be setup + controllers. These devices control a RAID subsystem + without the need for FreeBSD-specific software to manage the + array. + + Using an on-card BIOS, the card controls most of the disk operations + itself. The following is a brief setup description using a Promise IDE RAID + controller. When this card is installed and the system is started up, it + displays a prompt requesting information. Follow the instructions + to enter the card's setup screen. From here, you have the ability to + combine all the attached drives. After doing so, the disk(s) will look like + a single drive to FreeBSD. Other RAID levels can be set up + accordingly. @@ -611,7 +631,7 @@ ata3: resetting devices .. done ad6: hard error reading fsbn 1116119 of 0-7 (ad6 bn 1116119; cn 1107 tn 4 sn 11) status=59 error=40 ar0: WARNING - mirror lost - Using &man.atacontrol.8;, check to see how things look: + Using &man.atacontrol.8;, check for further information: &prompt.root; atacontrol list ATA channel 0: @@ -659,8 +679,9 @@ Slave: no device present - The rebuild command hangs until complete, its possible to open another - terminal and check on the progress by issuing the following command: + The rebuild command hangs until complete. However, it is possible to open another + terminal (using Alt Fn) + and check on the progress by issuing the following command: &prompt.root; dmesg | tail -10 [output removed]
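The interleave arithmetic from the ccdconfig discussion earlier can be double-checked with a quick shell calculation; the 32-block interleave and 512-byte block size are the values used in that example:

```shell
# Stripe size in bytes = interleave (in disk blocks) * bytes per block.
interleave=32
bytes_per_block=512
echo "$((interleave * bytes_per_block)) bytes per stripe"
# prints: 16384 bytes per stripe
```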