Revert one of my previous changes. Sentences now have two spaces after the period. Apologies for the repository bloat. This is entirely a whitespace change.
This commit is contained in:
Nik Clayton 1999-03-04 22:42:55 +00:00
parent 772051fe94
commit fe79ecbe4d
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/head/; revision=4465
88 changed files with 11040 additions and 11040 deletions

@@ -11,7 +11,7 @@
<para>Booting FreeBSD is essentially a three step process: load the
kernel, determine the root filesystem and initialize user-land
things. This leads to some interesting possibilities shown
below.</para>
@@ -26,7 +26,7 @@
<variablelist>
<varlistentry><term>Biosboot</term>
<listitem>
<para>Biosboot is our &ldquo;bootblocks&rdquo;. It consists of two
files which will be installed in the first 8Kbytes of the
floppy or hard-disk slice to be booted from.</para>
@@ -38,13 +38,13 @@
<varlistentry><term>Dosboot</term>
<listitem>
<para>Dosboot was written by DI. Christian Gusenbauer, and
is unfortunately at this time one of the few pieces of
code that will not compile under FreeBSD itself because it
is written for Microsoft compilers.</para>
<para>Dosboot will boot the kernel from a MS-DOS file or
from a FreeBSD filesystem partition on the disk. It
attempts to negotiate with the various and strange kinds
of memory manglers that lurk in high memory on MS/DOS
systems and usually wins them for its case.</para>
@@ -80,7 +80,7 @@
<variablelist>
<varlistentry><term>UFS</term>
<listitem>
<para>This is the most normal type of root filesystem. It
can reside on a floppy or on hard disk.</para>
</listitem>
</varlistentry>
@@ -99,7 +99,7 @@
<listitem>
<para>This is actually a UFS filesystem which has been
compiled into the kernel. That means that the kernel does
not really need any hard disks, floppies or other hardware
to function.</para>
</listitem>
@@ -137,8 +137,8 @@
<command>/sbin/init</command>, as long as you keep in mind
that:</para>
<para>there is no stdin/out/err unless you open it yourself. If you
exit, the machine panics. Signal handling is special for
<literal>pid == 1</literal>.</para>
<para>An example of this is the
@@ -259,16 +259,16 @@
<para>It then loads the first 15 sectors at <literal>0x10000</literal>
(segment <makevar>BOOTSEG</makevar> in the biosboot Makefile), and sets up the stack to
work below <literal>0x1fff0</literal>. After this, it jumps to the
entry of boot2 within that code. I.e., it jumps over itself and the
(dummy) partition table, and it is going to adjust the %cs
selector&mdash;we are still in 16-bit mode there.</para>
<para>boot2 asks for the boot file, and examines the
<filename>a.out</filename> header. It masks the file entry point
(usually <literal>0xf0100000</literal>) by
<literal>0x00ffffff</literal>, and loads the file there. Hence the
usual load point is 1 MB (<literal>0x00100000</literal>). During
load, the boot code toggles back and forth between real and
protected mode, to use the BIOS in real mode.</para>
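The masking arithmetic described above can be checked with a short sketch. This is illustrative Python, not boot2's actual code; <literal>load_point</literal> is an invented name.

```python
# boot2 masks the a.out entry point (a high linked address) down to a
# physical load address by keeping only the low 24 bits, as described
# in the text above.
ENTRY_MASK = 0x00ffffff

def load_point(entry_point: int) -> int:
    """Physical address at which the boot code places the image."""
    return entry_point & ENTRY_MASK

# The usual entry point of 0xf0100000 yields the 1 MB load point.
assert load_point(0xf0100000) == 0x00100000
```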
@@ -276,11 +276,11 @@
<literal>0x18</literal> and <literal>0x20</literal> for
<literal>%cs</literal> and <literal>%ds/%es</literal> in
protected mode, and <literal>0x28</literal> to jump back into real
mode. The kernel is finally started with <literal>%cs</literal> <literal>0x08</literal> and
<literal>%ds/%es/%ss</literal> <literal>0x10</literal>, which
refer to dummy descriptors covering the entire address space.</para>
<para>The kernel will be started at its load point. Since it has been
linked for another (high) address, it will have to execute PIC until
the page table and page directory stuff is set up properly, at which
point paging will be enabled and the kernel will finally run at the
@@ -290,7 +290,7 @@
1995.</emphasis></para>
<para>The physical pages immediately following the kernel BSS contain
proc0's page directory, page tables, and upages. Some time later
when the VM system is initialized, the physical memory between
<literal>0x1000-0x9ffff</literal> and the physical memory after the
kernel (text+data+bss+proc0 stuff+other misc) is made available in
@@ -303,7 +303,7 @@
<title>DMA: What it Is and How it Works</title>
<para><emphasis>Copyright &copy; 1995,1997 &a.uhclem;, All Rights
Reserved.<!-- <br> --> 10 December 1996. Last Update 8 October
1997.</emphasis></para>
<para>Direct Memory Access (DMA) is a method of allowing data to be
@@ -319,25 +319,25 @@
<para>The PC DMA subsystem is based on the Intel 8237 DMA controller.
The 8237 contains four DMA channels that can be programmed
independently and any one of the channels may be active at any
moment. These channels are numbered 0, 1, 2 and 3. Starting with
the PC/AT, IBM added a second 8237 chip, and numbered those channels
4, 5, 6 and 7.</para>
<para>The original DMA controller (0, 1, 2 and 3) moves one byte in
each transfer. The second DMA controller (4, 5, 6, and 7) moves
16-bits from two adjacent memory locations in each transfer, with
the first byte always coming from an even-numbered address. The two
controllers are identical components and the difference in transfer
size is caused by the way the second controller is wired into the
system.</para>
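The channel layout just described can be summarized in a small sketch (illustrative Python; the helper names are invented, not part of any real API):

```python
# Channels 0-3 belong to the original 8237 and move one byte per
# transfer; channels 4-7 belong to the second controller added with
# the PC/AT and move 16 bits per transfer.
def controller(channel: int) -> int:
    if not 0 <= channel <= 7:
        raise ValueError("PC/AT DMA channels are numbered 0-7")
    return 1 if channel <= 3 else 2

def transfer_bits(channel: int) -> int:
    return 8 if controller(channel) == 1 else 16

assert transfer_bits(2) == 8     # a channel on the original controller
assert transfer_bits(5) == 16    # a channel on the second controller
```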
<para>The 8237 has two electrical signals for each channel, named DRQ
and -DACK. There are additional signals with the names HRQ (Hold
Request), HLDA (Hold Acknowledge), -EOP (End of Process), and the
bus control signals -MEMR (Memory Read), -MEMW (Memory Write), -IOR
(I/O Read), and -IOW (I/O Write).</para>
<para>The 8237 DMA is known as a &ldquo;fly-by&rdquo; DMA controller. This
means that the data being moved from one location to another does
not pass through the DMA chip and is not stored in the DMA chip.
Subsequently, the DMA can only transfer data between an I/O port and
@@ -361,24 +361,24 @@
<title>A Sample DMA transfer</title>
<para>Here is an example of the steps that occur to cause and
perform a DMA transfer. In this example, the floppy disk
controller (FDC) has just read a byte from a diskette and wants
the DMA to place it in memory at location 0x00123456. The process
begins by the FDC asserting the DRQ2 signal (the DRQ line for DMA
channel 2) to alert the DMA controller.</para>
<para>The DMA controller will note that the DRQ2 signal is asserted.
The DMA controller will then make sure that DMA channel 2 has been
programmed and is unmasked (enabled). The DMA controller also
makes sure that none of the other DMA channels are active or want
to be active and have a higher priority. Once these checks are
complete, the DMA asks the CPU to release the bus so that the DMA
may use the bus. The DMA requests the bus by asserting the HRQ
signal which goes to the CPU.</para>
<para>The CPU detects the HRQ signal, and will complete executing
the current instruction. Once the processor has reached a state
where it can release the bus, it will. Now all of the signals
normally generated by the CPU (-MEMR, -MEMW, -IOR, -IOW and a few
others) are placed in a tri-stated condition (neither high nor low)
and then the CPU asserts the HLDA signal which tells the DMA
@@ -397,12 +397,12 @@
location.</para>
<para>The DMA will then let the device that requested the DMA
transfer know that the transfer is commencing. This is done by
asserting the -DACK signal, or in the case of the floppy disk
controller, -DACK2 is asserted.</para>
<para>The floppy disk controller is now responsible for placing the
byte to be transferred on the bus Data lines. Unless the floppy
controller needs more time to get the data byte on the bus (and if
the peripheral does need more time it alerts the DMA via the READY
signal), the DMA will wait one DMA clock, and then de-assert the
@@ -412,22 +412,22 @@
<para>Since the DMA cycle only transfers a single byte at a time,
the FDC now drops the DRQ2 signal, so the DMA knows that it is no
longer needed. The DMA will de-assert the -DACK2 signal, so that
the FDC knows it must stop placing data on the bus.</para>
<para>The DMA will now check to see if any of the other DMA channels
have any work to do. If none of the channels have their DRQ lines
asserted, the DMA controller has completed its work and will now
tri-state the -MEMR, -MEMW, -IOR, -IOW and address signals.</para>
<para>Finally, the DMA will de-assert the HRQ signal. The CPU sees
this, and de-asserts the HLDA signal. Now the CPU activates its
-MEMR, -MEMW, -IOR, -IOW and address lines, and it resumes
executing instructions and accessing main memory and the
peripherals.</para>
<para>For a typical floppy disk sector, the above process is
repeated 512 times, once for each byte. Each time a byte is
transferred, the address register in the DMA is incremented and
the counter in the DMA that shows how many bytes are to be
transferred is decremented.</para>
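The repeated single-byte cycles just described amount to simple bookkeeping: one byte per DRQ/-DACK handshake, address register incremented, counter decremented. This toy Python model is purely illustrative; real hardware does this with signals, not code.

```python
def run_transfers(start_addr: int, count_reg: int):
    """count_reg holds one less than the number of bytes to move,
    as the 8237 is programmed (see the programming section below)."""
    addr = start_addr
    transferred = 0
    while True:
        transferred += 1          # one DRQ/-DACK cycle, one byte moved
        addr += 1                 # the DMA address register increments
        if count_reg == 0:        # terminal count: -EOP would assert
            break
        count_reg -= 1
    return addr, transferred

# A 512-byte floppy sector is programmed with a count of 511:
end_addr, n = run_transfers(0x00123456, 511)
assert n == 512
assert end_addr == 0x00123456 + 512
```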
@@ -435,7 +435,7 @@
<para>When the counter reaches zero, the DMA asserts the EOP signal,
which indicates that the counter has reached zero and no more data
will be transferred until the DMA controller is reprogrammed by
the CPU. This event is also called the Terminal Count (TC).
There is only one EOP signal, and since only one DMA channel can be
active at any instant, the DMA channel that is currently active
must be the DMA channel that just completed its task.</para>
@@ -446,10 +446,10 @@
When that happens, it means the DMA will not transfer any more
information for that peripheral without intervention by the CPU.
The peripheral can then assert one of the interrupt signals to get
the processor's attention. In the PC architecture, the DMA chip
itself is not capable of generating an interrupt. The peripheral
and its associated hardware is responsible for generating any
interrupt that occurs. Subsequently, it is possible to have a
peripheral that uses DMA but does not use interrupts.</para>
<para>It is important to understand that although the CPU always
@@ -470,53 +470,53 @@
<para>You may have noticed earlier that instead of the DMA setting
the address lines to 0x00123456 as we said earlier, the DMA only
set 0x3456. The reason for this takes a bit of explaining.</para>
<para>When the original IBM PC was designed, IBM elected to use both
DMA and interrupt controller chips that were designed for use with
the 8085, an 8-bit processor with an address space of 16 bits
(64K). Since the IBM PC supported more than 64K of memory,
something had to be done to allow the DMA to read or write memory
locations above the 64K mark. What IBM did to solve this problem
was to add an external data latch for each DMA channel that holds
the upper bits of the address to be read to or written from.
Whenever a DMA channel is active, the contents of that latch are
written to the address bus and kept there until the DMA operation
for the channel ends. IBM called these latches &ldquo;Page
Registers&rdquo;.</para>
<para>So for our example above, the DMA would put the 0x3456 part of
the address on the bus, and the Page Register for DMA channel 2
would put 0x0012xxxx on the bus. Together, these two values form
the complete address in memory that is to be accessed.</para>
<para>Because the Page Register latch is independent of the DMA
chip, the area of memory to be read or written must not span a 64K
physical boundary. For example, if the DMA accesses memory
location 0xffff, after that transfer the DMA will then increment
the address register and the DMA will access the next byte at
location 0x0000, not 0x10000. The results of letting this happen
are probably not intended.</para>
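Both behaviors described above, the Page Register composition and the 64K wrap, follow from the same arithmetic. A minimal Python sketch (the function name is invented):

```python
# The full DMA address is the Page Register OR-ed with the 16-bit 8237
# address register, so the low 16 bits wrap within the same 64K page.
def dma_address(page_register: int, address_register: int) -> int:
    return (page_register << 16) | (address_register & 0xffff)

# The example transfer: Page Register 0x12, offset 0x3456.
assert dma_address(0x12, 0x3456) == 0x00123456

# Incrementing past 0xffff wraps to offset 0x0000 of the SAME page,
# i.e. 0x00120000, not 0x00130000.
assert dma_address(0x12, (0xffff + 1) & 0xffff) == 0x00120000
```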
<note>
<para>&ldquo;Physical&rdquo; 64K boundaries should not be
confused with 8086-mode 64K &ldquo;Segments&rdquo;, which are
created by mathematically adding a segment register with an
offset register. Page Registers have no address overlap and are
mathematically OR-ed together.</para>
</note>
<para>To further complicate matters, the external DMA address
latches on the PC/AT hold only eight bits, so that gives us
8+16=24 bits, which means that the DMA can only point at memory
locations between 0 and 16Meg. For newer computers that allow
more than 16Meg of memory, the standard PC-compatible DMA cannot
access memory locations above 16Meg.</para>
<para>To get around this restriction, operating systems will reserve
a RAM buffer in an area below 16Meg that also does not span a
physical 64K boundary. Then the DMA will be programmed to
transfer data from the peripheral and into that buffer. Once the
DMA has moved the data into this buffer, the operating system will
then copy the data from the buffer to the address where the data
is really supposed to be stored.</para>
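The test an operating system must apply before handing a buffer straight to the PC DMA can be sketched as follows. This is an illustrative check under the constraints stated above (below 16Meg, no 64K crossing), not FreeBSD's actual bounce-buffer code.

```python
LIMIT_16M = 16 * 1024 * 1024

def dma_ok(phys_addr: int, length: int) -> bool:
    """True if the buffer can be used directly, without a bounce buffer."""
    if phys_addr + length > LIMIT_16M:
        return False                      # beyond the 24-bit DMA reach
    # Crossing a 64K page: high bits of the first and last byte differ.
    return (phys_addr >> 16) == ((phys_addr + length - 1) >> 16)

assert dma_ok(0x00040000, 0x10000)        # exactly one 64K page
assert not dma_ok(0x0004ff00, 0x200)      # straddles a 64K boundary
assert not dma_ok(LIMIT_16M, 512)         # above 16Meg
```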
@@ -524,8 +524,8 @@
<para>When writing data from an address above 16Meg to a DMA-based
peripheral, the data must be first copied from where it resides
into a buffer located below 16Meg, and then the DMA can copy the
data from the buffer to the hardware. In FreeBSD, these reserved
buffers are called &ldquo;Bounce Buffers&rdquo;. In the MS-DOS world, they
are sometimes called &ldquo;Smart Buffers&rdquo;.</para>
<note>
@@ -539,17 +539,17 @@
<sect2>
<title>DMA Operational Modes and Settings</title>
<para>The 8237 DMA can be operated in several modes. The main ones
are:</para>
<variablelist>
<varlistentry><term>Single</term>
<listitem>
<para>A single byte (or word) is transferred. The DMA must
release and re-acquire the bus for each additional byte.
This is commonly used by devices that cannot transfer the
entire block of data immediately. The peripheral will
request the DMA each time it is ready for another
transfer.</para>
@@ -563,19 +563,19 @@
<listitem>
<para>Once the DMA acquires the system bus, an entire block
of data is transferred, up to a maximum of 64K. If the
peripheral needs additional time, it can assert the READY
signal to suspend the transfer briefly. READY should not
be used excessively, and for slow peripheral transfers,
the Single Transfer Mode should be used instead.</para>
<para>The difference between Block and Demand is that once a
Block transfer is started, it runs until the transfer
count reaches zero. DRQ only needs to be asserted until
-DACK is asserted. Demand Mode will continue to transfer
bytes until DRQ is de-asserted, at which point the DMA
suspends the transfer and releases the bus back to the
CPU. When DRQ is asserted later, the transfer resumes
where it was suspended.</para>
<para>Older hard disk controllers used Demand Mode until CPU
@@ -592,36 +592,36 @@
<para>This mechanism allows a DMA channel to request the
bus, but then the attached peripheral device is
responsible for placing the addressing information on the
bus instead of the DMA. This is also used to implement a
technique known as &ldquo;Bus Mastering&rdquo;.</para>
<para>When a DMA channel in Cascade Mode receives control of
the bus, the DMA does not place addresses and I/O control
signals on the bus like the DMA normally does when it is
active. Instead, the DMA only asserts the -DACK signal
for the active DMA channel.</para>
<para>At this point it is up to the peripheral connected to
that DMA channel to provide address and bus control
signals. The peripheral has complete control over the
signals. The peripheral has complete control over the
system bus, and can do reads and/or writes to any address
below 16Meg. When the peripheral is finished with the
below 16Meg. When the peripheral is finished with the
bus, it de-asserts the DRQ line, and the DMA controller
can then return control to the CPU or to some other DMA
channel.</para>
<para>Cascade Mode can be used to chain multiple DMA
controllers together, and this is exactly what DMA Channel
4 is used for in the PC architecture. When a peripheral
requests the bus on DMA channels 0, 1, 2 or 3, the slave
DMA controller asserts HLDREQ, but this wire is actually
connected to DRQ4 on the primary DMA controller instead of
to the CPU. The primary DMA controller, thinking it has
work to do on Channel 4, requests the bus from the CPU
using the HLDREQ signal. Once the CPU grants the bus to the
primary DMA controller, -DACK4 is asserted, and that wire
is actually connected to the HLDA signal on the slave DMA
controller. The slave DMA controller then transfers data
for the DMA channel that requested it (0, 1, 2 or 3), or
the slave DMA may grant the bus to a peripheral that wants
to perform its own bus-mastering, such as a SCSI
@@ -639,24 +639,24 @@
<para>When a peripheral is performing Bus Mastering, it is
important that the peripheral transmit data to or from
memory constantly while it holds the system bus. If the
peripheral cannot do this, it must release the bus
frequently so that the system can perform refresh
operations on main memory.</para>
<para>The Dynamic RAM used in all PCs for main memory must
be accessed frequently to keep the bits stored in the
components &ldquo;charged&rdquo;. Dynamic RAM essentially consists of
millions of capacitors with each one holding one bit of
data. These capacitors are charged with power to
represent a <literal>1</literal> or drained to represent a <literal>0</literal>. Because
all capacitors leak, power must be added at regular
intervals to keep the <literal>1</literal> values intact. The RAM chips
actually handle the task of pumping power back into all of
the appropriate locations in RAM, but they must be told
when to do it by the rest of the computer so that the
refresh activity won't interfere with the computer wanting
to access RAM normally. If the computer is unable to
refresh memory, the contents of memory will become
corrupted in just a few milliseconds.</para>
@@ -679,8 +679,8 @@
Demand transfers, but when the DMA transfer counter
reaches zero, the counter and address are set back to
where they were when the DMA channel was originally
programmed. This means that as long as the peripheral
requests transfers, they will be granted. It is up to the
CPU to move new data into the fixed buffer ahead of where
the DMA is about to transfer it when doing output
operations, and read new data out of the buffer behind
@@ -688,7 +688,7 @@
operations.</para>
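The Autoinitialize behavior described above, counter and address snapping back to their programmed values at terminal count, can be modeled in a few lines. A toy Python sketch; the class and its names are invented for illustration.

```python
class AutoInitChannel:
    """Toy model: at terminal count, reload address and count from the
    originally programmed values instead of going idle."""
    def __init__(self, base_addr: int, count: int):
        self.base_addr, self.base_count = base_addr, count
        self.addr, self.count = base_addr, count

    def transfer_byte(self) -> int:
        at = self.addr
        if self.count == 0:               # terminal count: wrap around
            self.addr, self.count = self.base_addr, self.base_count
        else:
            self.addr += 1
            self.count -= 1
        return at

ch = AutoInitChannel(0x1000, count=7)     # an 8-byte circular buffer
addrs = [ch.transfer_byte() for _ in range(16)]
assert addrs[:8] == list(range(0x1000, 0x1008))
assert addrs[8:] == addrs[:8]             # wrapped, no reprogramming
```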
<para>This technique is frequently used on audio devices
that have small or no hardware &ldquo;sample&rdquo; buffers. There
is additional CPU overhead to manage this &ldquo;circular&rdquo;
buffer, but in some cases this may be the only way to
eliminate the latency that occurs when the DMA counter
@@ -706,7 +706,7 @@
<title>Programming the DMA</title>
<para>The DMA channel that is to be programmed should always be
&ldquo;masked&rdquo; before loading any settings. This is because the
hardware might unexpectedly assert the DRQ for that channel, and
the DMA might respond, even though not all of the parameters have
been loaded or updated.</para>
@@ -715,8 +715,8 @@
transfer (memory-to-I/O or I/O-to-memory), what mode of DMA
operation is to be used for the transfer (Single, Block, Demand,
Cascade, etc), and finally the address and length of the transfer
are loaded. The length that is loaded is one less than the amount
you expect the DMA to transfer. The LSB and MSB of the address
and length are written to the same 8-bit I/O port, so another port
must be written to first to guarantee that the DMA accepts the
first byte as the LSB and the second byte as the MSB of the length
@@ -727,14 +727,14 @@
ports.</para>
<para>Once all the settings are ready, the DMA channel can be
un-masked. That DMA channel is now considered to be &ldquo;armed&rdquo;,
and will respond when the DRQ line for that channel is
asserted.</para>
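The mask, program, un-mask sequence described above can be sketched for channel 2 (the floppy example). The port numbers are the conventional first-controller assignments from the standard PC port map; <literal>outb</literal> here merely records writes, so this only illustrates the ordering, not real hardware access.

```python
writes = []
def outb(port: int, value: int):
    writes.append((port, value & 0xff))   # record instead of touching hardware

def program_channel2(phys_addr: int, length: int, mode: int):
    outb(0x0A, 0x04 | 2)              # mask channel 2 FIRST
    outb(0x0B, mode | 2)              # mode register for channel 2
    outb(0x0C, 0x00)                  # clear the LSB/MSB flip-flop
    outb(0x04, phys_addr & 0xff)      # address LSB
    outb(0x04, (phys_addr >> 8) & 0xff)   # address MSB
    outb(0x81, (phys_addr >> 16) & 0xff)  # channel 2 Page Register
    count = length - 1                # program one less than the length
    outb(0x05, count & 0xff)          # count LSB
    outb(0x05, (count >> 8) & 0xff)   # count MSB
    outb(0x0A, 2)                     # un-mask: the channel is "armed"

program_channel2(0x00123456, 512, mode=0x44)
assert writes[0] == (0x0A, 0x06)      # masked before anything else
assert (0x81, 0x12) in writes         # high bits via the Page Register
assert writes[-1] == (0x0A, 0x02)     # un-masked last
```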
<para>Refer to a hardware data book for precise programming details
for the 8237. You will also need to refer to the I/O port map for
the PC system, which describes where the DMA and Page Register
ports are located. A complete port map table is located
below.</para>
</sect2>
@@ -743,8 +743,8 @@
<title>DMA Port Map</title>
<para>All systems based on the IBM-PC and PC/AT have the DMA
hardware located at the same I/O ports. The complete list is
provided below. Ports assigned to DMA Controller #2 are undefined
on non-AT designs.</para>
@@ -1241,14 +1241,14 @@
<para>The Intel 82374 EISA System Component (ESC) was introduced
in early 1996 and includes a DMA controller that provides a
superset of 8237 functionality as well as other PC-compatible
core peripheral components in a single package. This chip is
targeted at both EISA and PCI platforms, and provides modern DMA
features like scatter-gather, ring buffers as well as direct
access by the system DMA to all 32 bits of address space.</para>
<para>If these features are used, code should also be included to
provide similar functionality in the previous 16 years worth of
PC-compatible computers. For compatibility reasons, some of the
82374 registers must be programmed <emphasis>after</emphasis>
programming the traditional 8237 registers for each transfer.
Writing to a traditional 8237 register forces the contents of
@@ -1653,7 +1653,7 @@
<sect1 id="internals-vm">
<title>The FreeBSD VM System</title>
<para><emphasis>Contributed by &a.dillon;. 6 Feb 1999</emphasis></para>
<sect2>
<title>Management of physical
@@ -1666,7 +1666,7 @@
queues.</para>
<para>A page can be in a wired, active, inactive, cache, or free
state. Except for the wired state, the page is typically placed in a
doubly linked list queue representing the state that it is in. Wired
pages are not placed on any queue.</para>
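The bookkeeping described here, one queue per state except wired, can be sketched in miniature. This is illustrative Python using a dict per page, not the kernel's actual vm_page structures.

```python
from collections import deque

STATES = ("wired", "active", "inactive", "cache", "free")
queues = {s: deque() for s in STATES if s != "wired"}  # model of the queues

def set_state(page: dict, new_state: str):
    old = page.get("state")
    if old in queues and page in queues[old]:
        queues[old].remove(page)          # leave the old state's queue
    page["state"] = new_state
    if new_state != "wired":              # wired pages sit on no queue
        queues[new_state].append(page)

p = {"addr": 0x1000}
set_state(p, "active")
assert p in queues["active"]
set_state(p, "wired")
assert all(p not in q for q in queues.values())
```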
@@ -1684,9 +1684,9 @@
in the page's flags.</para>
<para>In general terms, each of the paging queues operates in a LRU
fashion. A page is typically placed in a wired or active state
initially. When wired, the page is usually associated with a page
table somewhere. The VM system ages the page by scanning pages in a
more active paging queue (LRU) in order to move them to a
less-active paging queue. Pages that get moved into the cache are
still associated with a VM object but are candidates for immediate
@@ -1707,12 +1707,12 @@
maintain reasonable ratios of pages in the various queues as well as
attempts to maintain a reasonable breakdown of clean vs dirty pages.
The amount of rebalancing that occurs depends on the system's memory
load. This rebalancing is implemented by the pageout daemon and
involves laundering dirty pages (syncing them with their backing
store), noticing when pages are actively referenced (resetting their
position in the LRU queues or moving them between queues), migrating
pages between queues when the queues are out of balance, and so
forth. FreeBSD's VM system is willing to take a reasonable number of
reactivation page faults to determine how active or how idle a page
actually is. This leads to better decisions being made as to when
to launder or swap-out a page.</para>
@@ -1725,7 +1725,7 @@
<para>FreeBSD implements the idea of a generic &ldquo;VM
object&rdquo;. VM objects can be associated with backing store of
various types&mdash;unbacked, swap-backed, physical device-backed,
or file-backed storage. Since the filesystem uses the same VM
objects to manage in-core data relating to files, the result is a
unified buffer cache.</para>
@@ -1762,7 +1762,7 @@
the same manner, disk I/O is typically issued by mapping portions of
objects into buffer structures and then issuing the I/O on the
buffer structures. The underlying vm_page_t's are typically busied
for the duration of the I/O. Filesystem buffers also have their own
notion of being busy, which is useful to filesystem driver code
which would rather operate on filesystem buffers instead of hard VM
pages.</para>
@@ -1812,7 +1812,7 @@
mappings relating to <literal>struct buf</literal> entities.</para>
<para>Unlike Linux, FreeBSD does NOT map all of physical memory into
KVM. This means that FreeBSD can handle memory configurations up to
4G on 32 bit platforms. In fact, if the mmu were capable of it,
FreeBSD could theoretically handle memory configurations up to 8TB
on a 32 bit platform. However, since most 32 bit platforms are only
@@ -1837,7 +1837,7 @@
<filename>/usr/src/sys/i386/conf/<replaceable>CONFIG_FILE</replaceable></filename>. A description of all available kernel configuration options can be found in <filename>/usr/src/sys/i386/conf/LINT</filename>.</para>
<para>In a large system configuration you may wish to increase
<literal>maxusers</literal>. Values typically range from 10 to 128.
Note that raising <literal>maxusers</literal> too high can cause the
system to overflow available KVM resulting in unpredictable
operation. It is better to leave maxusers at some reasonable number
@@ -1849,7 +1849,7 @@
from 1024 to 4096.</para>
<para>The <literal>NBUF</literal> parameter is also traditionally used
to scale the system. This parameter determines the amount of KVA the
system can use to map filesystem buffers for I/O. Note that this
parameter has nothing whatsoever to do with the unified buffer
cache! This parameter is dynamically tuned in 3.0-CURRENT and