Typo & grammar fixes.

Add <acronym>s.
Update disk speeds section.

PR:		docs/41934
Submitted by:	Christian Brueffer <chris@unixpages.org>
Giorgos Keramidas 2002-08-26 00:09:07 +00:00
parent f5d3e40309
commit 2912a86246
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=14018


@@ -55,7 +55,7 @@
 <para>Disks are getting bigger, but so are data storage requirements.
-Often you ill find you want a file system that is bigger than the disks
+Often you will find you want a file system that is bigger than the disks
 you have available. Admittedly, this problem is not as acute as it was
 ten years ago, but it still exists. Some systems have solved this by
 creating an abstract device which stores its data on a number of disks.</para>
@@ -70,7 +70,7 @@
 disks.</para>
 <para>Current disk drives can transfer data sequentially at up to
-30 MB/s, but this value is of little importance in an environment
+70 MB/s, but this value is of little importance in an environment
 where many independent processes access a drive, where they may
 achieve only a fraction of these values. In such cases it is more
 interesting to view the problem from the viewpoint of the disk
@@ -85,10 +85,10 @@
 <para><anchor id="vinum-latency">
 Consider a typical transfer of about 10 kB: the current generation of
-high-performance disks can position the heads in an average of 6 ms. The
-fastest drives spin at 10,000 rpm, so the average rotational latency
-(half a revolution) is 3 ms. At 30 MB/s, the transfer itself takes about
-350 &mu;s, almost nothing compared to the positioning time. In such a
+high-performance disks can position the heads in an average of 3.5 ms. The
+fastest drives spin at 15,000 rpm, so the average rotational latency
+(half a revolution) is 2 ms. At 70 MB/s, the transfer itself takes about
+150 &mu;s, almost nothing compared to the positioning time. In such a
 case, the effective transfer rate drops to a little over 1 MB/s and is
 clearly highly dependent on the transfer size.</para>
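The arithmetic behind the updated figures in this hunk can be checked with a short sketch. This is a rough model using only the numbers from the text (3.5 ms seek, 2 ms rotational latency, 70 MB/s sequential rate), ignoring queueing and track-to-track variation:

```python
# Effective transfer rate for a small random I/O, using the figures above.
seek_ms = 3.5           # average head positioning time
rotational_ms = 2.0     # half a revolution at 15,000 rpm
transfer_kb = 10.0      # size of the transfer
sequential_mb_s = 70.0  # sequential transfer rate

# Time spent actually moving data, in milliseconds.
transfer_ms = transfer_kb / 1024 / sequential_mb_s * 1000

total_ms = seek_ms + rotational_ms + transfer_ms
effective_mb_s = (transfer_kb / 1024) / (total_ms / 1000)

print(f"transfer time: {transfer_ms * 1000:.0f} us")  # about 140 us
print(f"effective rate: {effective_mb_s:.2f} MB/s")   # roughly 1.7 MB/s
```

With these inputs the positioning time dominates: the drive spends about 5.5 ms getting to the data and well under 0.2 ms moving it, so the effective rate is only a small fraction of the 70 MB/s sequential figure.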
@@ -151,7 +151,7 @@
 For example, the first 256 sectors may be stored on the first disk, the
 next 256 sectors on the next disk and so on. After filling the last
 disk, the process repeats until the disks are full. This mapping is called
-<emphasis>striping</emphasis> or RAID-0.
+<emphasis>striping</emphasis> or <acronym>RAID-0</acronym>.
 <footnote>
 <indexterm>
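The striped mapping this hunk describes is a simple address calculation. A minimal sketch, assuming 256-sector stripes as in the example (the `locate` helper is hypothetical, not Vinum code):

```python
STRIPE = 256  # sectors per stripe, as in the example above

def locate(sector: int, ndisks: int) -> tuple[int, int]:
    """Map a logical sector to (disk index, sector on that disk)."""
    stripe, offset = divmod(sector, STRIPE)
    disk = stripe % ndisks
    sector_on_disk = (stripe // ndisks) * STRIPE + offset
    return disk, sector_on_disk

# The first 256 sectors land on disk 0, the next 256 on disk 1, and after
# the last disk the mapping wraps back to disk 0.
print(locate(0, 4))     # (0, 0)
print(locate(256, 4))   # (1, 0)
print(locate(1024, 4))  # (0, 256)
```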
@@ -250,7 +250,7 @@
 </figure>
 </para>
-<para>Compared to mirroring, RAID-5 has the advantage of requiring
+<para>Compared to mirroring, <acronym>RAID-5</acronym> has the advantage of requiring
 significantly less storage space. Read access is similar to that of
 striped organizations, but write access is significantly slower,
 approximately 25% of the read performance. If one drive fails, the array
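RAID-5's space advantage comes from storing one parity block per stripe instead of a full copy: XOR the data blocks to get parity, and XOR the survivors plus parity to rebuild a failed block. A minimal sketch of the idea (not Vinum's actual implementation):

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# If one data block is lost, XOR of the remaining blocks plus the parity
# block reconstructs it -- this is why the array survives a single failure.
recovered = parity([data[0], data[2], p])
assert recovered == data[1]
```

The write penalty the text mentions follows from the same arithmetic: a small write must read the old data and parity, recompute, and write both back.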
@@ -470,7 +470,7 @@
 the system automatically assigns names derived from the plex name by
 adding the suffix <emphasis>.s</emphasis><emphasis>x</emphasis>, where
 <emphasis>x</emphasis> is the number of the subdisk in the plex. Thus
-Vinum gives this subdisk the name <emphasis>myvol.p0.s0</emphasis></para>
+Vinum gives this subdisk the name <emphasis>myvol.p0.s0</emphasis>.</para>
</listitem>
</itemizedlist>
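The naming rule this hunk documents is purely mechanical: volume name, then <emphasis>.p</emphasis> plus the plex number, then <emphasis>.s</emphasis> plus the subdisk number. As a one-line sketch (hypothetical helper, not part of Vinum):

```python
def subdisk_name(volume: str, plex: int, subdisk: int) -> str:
    """Derive a Vinum subdisk name of the form volume.pX.sY."""
    return f"{volume}.p{plex}.s{subdisk}"

print(subdisk_name("myvol", 0, 0))  # myvol.p0.s0
```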
@@ -736,8 +736,8 @@
 </listitem>
 <listitem>
-<para>The directories <devicename>/dev/vinum/plex</devicename> and
-<devicename>/dev/vinum/sd</devicename>,
+<para>The directories <devicename>/dev/vinum/plex</devicename>,
+<devicename>/dev/vinum/sd</devicename>, and
 <devicename>/dev/vinum/rsd</devicename>, which contain block device
 nodes for each plex and block and character device nodes respectively
 for each subdisk.</para>