Sweep through english docs and replace &ldquot; and friends with <quote>.

Approved by:	nik
This commit is contained in:
Giorgos Keramidas 2001-10-10 11:47:30 +00:00
parent b2f9d5ab79
commit 4593fd98a7
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=10908
8 changed files with 103 additions and 103 deletions


@@ -1,4 +1,4 @@
-<!-- $FreeBSD: doc/en_US.ISO8859-1/articles/vm-design/article.sgml,v 1.5 2001/07/10 13:24:13 dd Exp $ -->
+<!-- $FreeBSD: doc/en_US.ISO8859-1/articles/vm-design/article.sgml,v 1.6 2001/07/17 20:51:49 chern Exp $ -->
<!-- FreeBSD Documentation Project -->
<!DOCTYPE ARTICLE PUBLIC "-//FreeBSD//DTD DocBook V4.1-Based Extension//EN" [
@@ -29,8 +29,8 @@
attempt to describe the whole VM enchilada, hopefully in a way that
everyone can follow. For the last year I have concentrated on a number
of major kernel subsystems within FreeBSD, with the VM and Swap
-subsystems being the most interesting and NFS being &lsquo;a necessary
-chore&rsquo;. I rewrote only small portions of the code. In the VM
+subsystems being the most interesting and NFS being <quote>a necessary
+chore</quote>. I rewrote only small portions of the code. In the VM
arena the only major rewrite I have done is to the swap subsystem.
Most of my work was cleanup and maintenance, with only moderate code
rewriting and no major algorithmic adjustments within the VM
@@ -59,9 +59,9 @@
a great deal of attention was paid to algorithm design from the
beginning. More attention paid to the design generally leads to a clean
and flexible codebase that can be fairly easily modified, extended, or
-replaced over time. While BSD is considered an &lsquo;old&rsquo;
+replaced over time. While BSD is considered an <quote>old</quote>
operating system by some people, those of us who work on it tend to view
-it more as a &lsquo;mature&rsquo; codebase which has various components
+it more as a <quote>mature</quote> codebase which has various components
modified, extended, or replaced with modern code. It has evolved, and
FreeBSD is at the bleeding edge no matter how old some of the code might
be. This is an important distinction to make and one that is
@@ -77,8 +77,8 @@
community&mdash;by continuous code development. The NT folk, on the
other hand, repeatedly make the same mistakes solved by Unix decades ago
and then spend years fixing them. Over and over again. They have a
-severe case of &lsquo;not designed here&rsquo; and &lsquo;we are always
-right because our marketing department says so&rsquo;. I have little
+severe case of <quote>not designed here</quote> and <quote>we are always
+right because our marketing department says so</quote>. I have little
tolerance for anyone who cannot learn from history.</para>
<para>Much of the apparent complexity of the FreeBSD design, especially in
@@ -89,8 +89,8 @@
these issues become most apparent when system resources begin to get
stressed. As I describe FreeBSD's VM/Swap subsystem the reader should
always keep two points in mind. First, the most important aspect of
-performance design is what is known as &ldquo;Optimizing the Critical
-Path&rdquo;. It is often the case that performance optimizations add a
+performance design is what is known as <quote>Optimizing the Critical
+Path</quote>. It is often the case that performance optimizations add a
little bloat to the code in order to make the critical path perform
better. Second, a solid, generalized design outperforms a
heavily-optimized design over the long run. While a generalized design
@@ -267,7 +267,7 @@
no longer accessible by anyone. That page in B can be freed.</para>
<para>FreeBSD solves the deep layering problem with a special optimization
-called the &ldquo;All Shadowed Case&rdquo;. This case occurs if either
+called the <quote>All Shadowed Case</quote>. This case occurs if either
C1 or C2 takes sufficient COW faults to completely shadow all pages in B.
Let's say that C1 achieves this. C1 can now bypass B entirely, so rather
than C1->B->A and C2->B->A we now have C1->A and C2->B->A. But
@@ -317,7 +317,7 @@
store&mdash;even if only a few pages of that object are swap-backed.
This creates a kernel memory fragmentation problem when large objects
are mapped, or processes with large runsizes (RSS) fork. Also, in order
-to keep track of swap space, a &lsquo;list of holes&rsquo; is kept in
+to keep track of swap space, a <quote>list of holes</quote> is kept in
kernel memory, and this tends to get severely fragmented as well. Since
the 'list of holes' is a linear list, the swap allocation and freeing
performance is a non-optimal O(n)-per-page. It also requires kernel
@@ -411,7 +411,7 @@
flushed pages are moved from the inactive queue to the cache queue. At
this point, pages in the cache queue can still be reactivated by a VM
fault at relatively low cost. However, pages in the cache queue are
-considered to be &lsquo;immediately freeable&rsquo; and will be reused
+considered to be <quote>immediately freeable</quote> and will be reused
in an LRU (least-recently used) fashion when the system needs to
allocate new memory.</para>
@@ -557,7 +557,7 @@
<qandaset>
<qandaentry>
<question>
-<para>What is &ldquo;the interleaving algorithm&rdquo; that you
+<para>What is <quote>the interleaving algorithm</quote> that you
refer to in your listing of the ills of the FreeBSD 3.x swap
arrangements?</para>
</question>
@@ -566,7 +566,7 @@
<para>FreeBSD uses a fixed swap interleave which defaults to 4. This
means that FreeBSD reserves space for four swap areas even if you
only have one, two, or three. Since swap is interleaved the linear
-address space representing the &lsquo;four swap areas&rsquo; will be
+address space representing the <quote>four swap areas</quote> will be
fragmented if you don't actually have four swap areas. For
example, if you have two swap areas A and B, FreeBSD's address
space representation for those swap areas will be interleaved in
@@ -574,15 +574,15 @@
<literallayout>A B C D A B C D A B C D A B C D</literallayout>
-<para>FreeBSD 3.x uses a &lsquo;sequential list of free
-regions&rsquo; approach to accounting for the free swap areas.
+<para>FreeBSD 3.x uses a <quote>sequential list of free
+regions</quote> approach to accounting for the free swap areas.
The idea is that large blocks of free linear space can be
represented with a single list node
(<filename>kern/subr_rlist.c</filename>). But due to the
fragmentation the sequential list winds up being insanely
fragmented. In the above example, completely unused swap will
-have A and B shown as &lsquo;free&rsquo; and C and D shown as
-&lsquo;all allocated&rsquo;. Each A-B sequence requires a list
+have A and B shown as <quote>free</quote> and C and D shown as
+<quote>all allocated</quote>. Each A-B sequence requires a list
node to account for it, because C and D are holes, so the list node
cannot be combined with the next A-B sequence.</para>
@@ -637,7 +637,7 @@
<answer>
<para>Yes, that is confusing. The relationship is
-&ldquo;goal&rdquo; versus &ldquo;reality&rdquo;. Our goal is to
+<quote>goal</quote> versus <quote>reality</quote>. Our goal is to
separate the pages but the reality is that if we are not in a
memory crunch, we don't really have to.</para>
@@ -734,9 +734,9 @@
<para>In regard to the memory overhead of a page table versus the
<literal>pv_entry</literal> scheme: Linux uses
-&lsquo;permanent&rsquo; page tables that are not throw-away, but
+<quote>permanent</quote> page tables that are not throw-away, but
does not need a <literal>pv_entry</literal> for each potentially
-mapped pte. FreeBSD uses &lsquo;throw away&rsquo; page tables but
+mapped pte. FreeBSD uses <quote>throw away</quote> page tables but
adds in a <literal>pv_entry</literal> structure for each
actually-mapped pte. I think memory utilization winds up being
about the same, giving FreeBSD an algorithmic advantage with its
@@ -762,7 +762,7 @@
cached data you read from offset 0!</para>
<para>Now, I am simplifying things greatly. What I just described
-is what is called a &lsquo;direct mapped&rsquo; hardware memory
+is what is called a <quote>direct mapped</quote> hardware memory
cache. Most modern caches are what are called
2-way-set-associative or 4-way-set-associative caches. The
set-associativity allows you to access up to N different memory
@@ -819,8 +819,8 @@
be if the program had been run directly in a physical address
space.</para>
-<para>Note that I say &lsquo;reasonably&rsquo; contiguous rather
-than simply &lsquo;contiguous&rsquo;. From the point of view of a
+<para>Note that I say <quote>reasonably</quote> contiguous rather
+than simply <quote>contiguous</quote>. From the point of view of a
128K direct mapped cache, the physical address 0 is the same as
the physical address 128K. So two side-by-side pages in your
virtual address space may wind up being offset 128K and offset