Add non-breaking spaces where needed.

This commit is contained in:
Marc Fonvieille 2002-10-19 13:41:54 +00:00
parent 96ee4b8f9e
commit 2173c0d0fe
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=14700


@@ -65,12 +65,12 @@
<title>Access bottlenecks</title>
<para>Modern systems frequently need to access data in a highly
concurrent manner. For example, large FTP or HTTP servers can maintain
-thousands of concurrent sessions and have multiple 100 Mbit/s connections
+thousands of concurrent sessions and have multiple 100&nbsp;Mbit/s connections
to the outside world, well beyond the sustained transfer rate of most
disks.</para>
<para>Current disk drives can transfer data sequentially at up to
-70 MB/s, but this value is of little importance in an environment
+70&nbsp;MB/s, but this value is of little importance in an environment
where many independent processes access a drive, where they may
achieve only a fraction of these values. In such cases it is more
interesting to view the problem from the viewpoint of the disk
@@ -84,12 +84,12 @@
any sense to interrupt them.</para>
<para><anchor id="vinum-latency">
-Consider a typical transfer of about 10 kB: the current generation of
-high-performance disks can position the heads in an average of 3.5 ms. The
-fastest drives spin at 15,000 rpm, so the average rotational latency
-(half a revolution) is 2 ms. At 70 MB/s, the transfer itself takes about
-150 &mu;s, almost nothing compared to the positioning time. In such a
-case, the effective transfer rate drops to a little over 1 MB/s and is
+Consider a typical transfer of about 10&nbsp;kB: the current generation of
+high-performance disks can position the heads in an average of 3.5&nbsp;ms. The
+fastest drives spin at 15,000&nbsp;rpm, so the average rotational latency
+(half a revolution) is 2&nbsp;ms. At 70&nbsp;MB/s, the transfer itself takes about
+150&nbsp;&mu;s, almost nothing compared to the positioning time. In such a
+case, the effective transfer rate drops to a little over 1&nbsp;MB/s and is
clearly highly dependent on the transfer size.</para>
<para>The traditional and obvious solution to this bottleneck is
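The latency figures in the hunk above can be checked with simple arithmetic. The following sketch (not part of the original document; numbers are taken directly from the text) reproduces the calculation: the ~150&nbsp;µs transfer time is dwarfed by head positioning and rotational latency, so the effective rate lands in the low single-digit MB/s range rather than the 70&nbsp;MB/s sequential figure.

```python
# Back-of-the-envelope check of the effective transfer rate discussed
# in the text. Illustrative only; values come from the paragraph above.

seek_ms = 3.5          # average head-positioning time
rotational_ms = 2.0    # half a revolution at 15,000 rpm
transfer_kb = 10       # size of a typical transfer
rate_mb_s = 70         # sustained sequential transfer rate

# Time to move 10 kB at 70 MB/s, in milliseconds (~0.14 ms, i.e. ~143 us).
transfer_ms = transfer_kb / (rate_mb_s * 1000) * 1000

# Total time per transfer is dominated by positioning, not data movement.
total_ms = seek_ms + rotational_ms + transfer_ms

# Effective throughput: 10 kB delivered every ~5.6 ms.
effective_mb_s = (transfer_kb / 1000) / (total_ms / 1000)
print(f"transfer time: {transfer_ms * 1000:.0f} us")
print(f"effective rate: {effective_mb_s:.1f} MB/s")
```

The result is on the order of 1-2&nbsp;MB/s, confirming the text's point: for small random transfers, throughput is governed almost entirely by positioning time and is highly dependent on transfer size.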
@@ -363,7 +363,7 @@
<listitem>
<para>The greatest advantage of striped (<acronym>RAID-0</acronym>)
plexes is that they reduce hot spots: by choosing an optimum sized stripe
-(about 256 kB), you can even out the load on the component drives.
+(about 256&nbsp;kB), you can even out the load on the component drives.
The disadvantages of this approach are (fractionally) more complex
code and restrictions on subdisks: they must be all the same size, and
extending a plex by adding new subdisks is so complicated that Vinum
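The striping described in the hunk above maps consecutive stripes of the plex address space to the subdisks in round-robin order, which is what spreads hot spots across drives. A minimal sketch of that mapping (hypothetical helper, not Vinum source code; the subdisk count is an assumption, the 256&nbsp;kB stripe size comes from the text):

```python
# Sketch of how a striped (RAID-0) plex maps a plex offset to a
# subdisk and an offset within that subdisk. Illustrative only.

STRIPE_SIZE = 256 * 1024   # bytes; the stripe size suggested in the text
NUM_SUBDISKS = 4           # hypothetical number of equal-sized subdisks

def map_offset(plex_offset: int) -> tuple[int, int]:
    """Return (subdisk index, byte offset within that subdisk)."""
    stripe = plex_offset // STRIPE_SIZE
    subdisk = stripe % NUM_SUBDISKS        # stripes rotate across subdisks
    # Full stripes already placed on this subdisk, plus the position
    # inside the current stripe.
    sd_offset = (stripe // NUM_SUBDISKS) * STRIPE_SIZE \
        + plex_offset % STRIPE_SIZE
    return subdisk, sd_offset

# Consecutive 256 kB stripes land on subdisks 0, 1, 2, 3, 0, 1, ...
print(map_offset(0))                  # (0, 0)
print(map_offset(STRIPE_SIZE))        # (1, 0)
print(map_offset(4 * STRIPE_SIZE))    # (0, 262144)
```

The round-robin layout also shows why the subdisks must all be the same size: an address computed this way assumes every subdisk can hold the same number of stripes.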
@@ -565,7 +565,7 @@
</figure>
</para>
-<para>In this example, each plex contains the full 512 MB of address
+<para>In this example, each plex contains the full 512&nbsp;MB of address
space. As in the previous example, each plex contains only a single
subdisk.</para>
</sect2>