diff --git a/en_US.ISO8859-1/books/handbook/vinum/chapter.xml b/en_US.ISO8859-1/books/handbook/vinum/chapter.xml
index a631b66326..62de877c08 100644
--- a/en_US.ISO8859-1/books/handbook/vinum/chapter.xml
+++ b/en_US.ISO8859-1/books/handbook/vinum/chapter.xml
@@ -31,25 +31,25 @@
- They can be too small.
+ They can be too small.
- They can be too slow.
+ They can be too slow.
- They can be too unreliable.
+ They can be too unreliable.
Various solutions to these problems have been proposed and
- implemented. One way some users safeguard themselves against such
- issues is through the use of multiple, and sometimes redundant,
- disks. In addition to supporting various cards and controllers
- for hardware RAID systems, the base &os; system includes the
- Vinum Volume Manager, a block device driver that implements
- virtual disk drives. Vinum is a
+ implemented. One way some users safeguard themselves against
+ such issues is through the use of multiple, and sometimes
+ redundant, disks. In addition to supporting various cards and
+ controllers for hardware RAID systems, the base &os; system
+ includes the Vinum Volume Manager, a block device driver that
+ implements virtual disk drives. Vinum is a
so-called Volume Manager, a virtual disk
driver that addresses these three problems. Vinum provides more
flexibility, performance, and reliability than traditional disk
@@ -57,26 +57,27 @@
individually and in combination.

This chapter provides an overview of potential problems with
- traditional disk storage, and an introduction to the Vinum Volume
- Manager.
+ traditional disk storage, and an introduction to the Vinum
+ Volume Manager.
- Starting with &os; 5, Vinum has been rewritten in order
- to fit into the GEOM architecture (),
- retaining the original ideas, terminology, and on-disk
- metadata. This rewrite is called gvinum
- (for GEOM vinum). The following text
- usually refers to Vinum as an abstract
- name, regardless of the implementation variant. Any command
- invocations should now be done using
- the gvinum command, and the name of the
- kernel module has been changed
- from vinum.ko
- to geom_vinum.ko, and all device nodes
- reside under /dev/gvinum instead
- of /dev/vinum. As of &os; 6, the old
- Vinum implementation is no longer available in the code
- base.
+ Starting with &os; 5, Vinum has been rewritten in
+ order to fit into the GEOM architecture (), retaining the original ideas,
+ terminology, and on-disk metadata. This rewrite is called
+ gvinum (for GEOM
+ vinum). The following text usually refers to
+ Vinum as an abstract name, regardless of
+ the implementation variant. Any command invocations should
+ now be done using the gvinum command. The
+ name of the kernel module has been changed from
+ vinum.ko to
+ geom_vinum.ko, and all device nodes
+ reside under /dev/gvinum instead of
+ /dev/vinum. As of
+ &os; 6, the old Vinum implementation is no longer
+ available in the code base.
@@ -86,7 +87,7 @@
Vinum
RAID
- software
+ software
Disks are getting bigger, but so are data storage
requirements. Often you will find you want a file system that
@@ -137,8 +138,7 @@
it uses several smaller disks with the same aggregate storage
space. Each disk is capable of positioning and transferring
independently, so the effective throughput increases by a factor
- close to the number of disks used.
-
+ close to the number of disks used.
The exact throughput improvement is, of course, smaller than
the number of disks involved: although each drive is capable of
@@ -175,9 +175,9 @@
Concatenated Organization
+
-
-
+ disk striping
@@ -200,152 +200,150 @@
RAID stands for Redundant
- Array of Inexpensive Disks and offers various forms
- of fault tolerance, though the latter term is somewhat
- misleading: it provides no redundancy..
+ Array of Inexpensive Disks and offers various
+ forms of fault tolerance, though the latter term is somewhat
+ misleading: it provides no redundancy.
- Striping requires somewhat more effort to locate the data, and it
- can cause additional I/O load where a transfer is spread over
- multiple disks, but it can also provide a more constant load
- across the disks. illustrates the
- sequence in which storage units are allocated in a striped
- organization.
+ Striping requires somewhat more effort to locate the
+ data, and it can cause additional I/O load where a transfer is
+ spread over multiple disks, but it can also provide a more
+ constant load across the disks.
+ illustrates the sequence in which storage units are allocated in
+ a striped organization.
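As a sketch of the arithmetic involved (assuming a 512 kB stripe size across four disks, numbered 0 through 3; the numbers are purely illustrative):

byte offset 3072 kB -> stripe  3072 kB / 512 kB = 6
                       disk    6 mod 4          = 2  (the third disk)
                       offset  (6 / 4) * 512 kB = 512 kB into that disk

One division and one modulo per request is the extra effort to locate the data referred to above.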
- Striped Organization
+ Striped Organization
+
-
-
+
Data Integrity
- The final problem with current disks is that they are
- unreliable. Although disk drive reliability has increased
- tremendously over the last few years, they are still the most
- likely core component of a server to fail. When they do, the
- results can be catastrophic: replacing a failed disk drive and
- restoring data to it can take days.
+ The final problem with current disks is that they are
+ unreliable. Although disk drive reliability has increased
+ tremendously over the last few years, they are still the most
+ likely core component of a server to fail. When they do, the
+ results can be catastrophic: replacing a failed disk drive and
+ restoring data to it can take days.
-
- disk mirroring
-
-
- Vinum
- mirroring
-
-
- RAID-1
-
+
+ disk mirroring
+
+ Vinum
+ mirroring
+
+ RAID-1
+
- The traditional way to approach this problem has been
- mirroring, keeping two copies of the data
- on different physical hardware. Since the advent of the
- RAID levels, this technique has also been
- called RAID level 1 or
- RAID-1. Any write to the volume writes to
- both locations; a read can be satisfied from either, so if one
- drive fails, the data is still available on the other
- drive.
+ The traditional way to approach this problem has been
+ mirroring, keeping two copies of the data
+ on different physical hardware. Since the advent of the
+ RAID levels, this technique has also been
+ called RAID level 1 or
+ RAID-1. Any write to the volume writes to
+ both locations; a read can be satisfied from either, so if one
+ drive fails, the data is still available on the other
+ drive.
- Mirroring has two problems:
+ Mirroring has two problems:
-
-
- The price. It requires twice as much disk storage as
- a non-redundant solution.
-
+
+
+ The price. It requires twice as much disk storage as
+ a non-redundant solution.
+
-
- The performance impact. Writes must be performed to
- both drives, so they take up twice the bandwidth of a
- non-mirrored volume. Reads do not suffer from a
- performance penalty: it even looks as if they are
- faster.
-
-
+
+ The performance impact. Writes must be performed to
+ both drives, so they take up twice the bandwidth of a
+ non-mirrored volume. Reads do not suffer from a
+ performance penalty: they can even be
+ faster.
+
+
- RAID-5An
- alternative solution is parity,
- implemented in the RAID levels 2, 3, 4 and
- 5. Of these, RAID-5 is the most
- interesting. As implemented in Vinum, it is a variant on a
- striped organization which dedicates one block of each stripe
- to parity one of the other blocks. As implemented by Vinum, a
- RAID-5 plex is similar to a striped plex,
- except that it implements RAID-5 by
- including a parity block in each stripe. As required by
- RAID-5, the location of this parity block
- changes from one stripe to the next. The numbers in the data
- blocks indicate the relative block numbers.
+ RAID-5
+ An alternative solution is parity, implemented
+ in the RAID levels 2, 3, 4 and 5. Of these,
+ RAID-5 is the most interesting. As
+ implemented in Vinum, a RAID-5 plex is
+ a variant on a striped plex which dedicates one block of each
+ stripe to parity of the other blocks. As required by
+ RAID-5, the location of this parity block
+ changes from one stripe to the next. The numbers in the data
+ blocks indicate the relative block numbers.
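In the configuration-file syntax introduced later in this chapter, a RAID-5 plex is declared like a striped plex with a different organization keyword. A minimal sketch, with illustrative drive names and sizes:

drive a device /dev/da3h
drive b device /dev/da4h
drive c device /dev/da5h
volume raid5vol
plex org raid5 512k
sd length 102480k drive a
sd length 102480k drive b
sd length 102480k drive c

At least three subdisks are required, since each stripe must hold at least two data blocks alongside its parity block.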
-
-
- RAID-5 Organization
-
-
-
+
+
+ RAID-5 Organization
- Compared to mirroring, RAID-5 has the
- advantage of requiring significantly less storage space. Read
- access is similar to that of striped organizations, but write
- access is significantly slower, approximately 25% of the read
- performance. If one drive fails, the array can continue to
- operate in degraded mode: a read from one of the remaining
- accessible drives continues normally, but a read from the
- failed drive is recalculated from the corresponding block from
- all the remaining drives.
-
+
+
+
+ Compared to mirroring, RAID-5 has the
+ advantage of requiring significantly less storage space. Read
+ access is similar to that of striped organizations, but write
+ access is significantly slower, approximately 25% of the read
+ performance. If one drive fails, the array can continue to
+ operate in degraded mode: a read from one of the remaining
+ accessible drives continues normally, but a read from the
+ failed drive is recalculated from the corresponding block from
+ all the remaining drives.

Vinum Objects
- In order to address these problems, Vinum implements a four-level
- hierarchy of objects:
-
-
- The most visible object is the virtual disk, called a
- volume. Volumes have essentially the same
- properties as a &unix; disk drive, though there are some minor
- differences. They have no size limitations.
-
+ In order to address these problems, Vinum implements a
+ four-level hierarchy of objects:
-
- Volumes are composed of plexes,
- each of which represent the total address space of a
- volume. This level in the hierarchy thus provides
- redundancy. Think of plexes as individual disks in a
- mirrored array, each containing the same data.
-
+
+
+ The most visible object is the virtual disk, called a
+ volume. Volumes have essentially the
+ same properties as a &unix; disk drive, though there are
+ some minor differences. They have no size
+ limitations.
+
-
- Since Vinum exists within the &unix; disk storage
- framework, it would be possible to use &unix;
- partitions as the building block for multi-disk plexes,
- but in fact this turns out to be too inflexible:
- &unix; disks can have only a limited number of
- partitions. Instead, Vinum subdivides a single
- &unix; partition (the drive)
- into contiguous areas called
- subdisks, which it uses as building
- blocks for plexes.
-
+
+ Volumes are composed of plexes,
+ each of which represents the total address space of a
+ volume. This level in the hierarchy thus provides
+ redundancy. Think of plexes as individual disks in a
+ mirrored array, each containing the same data.
+
-
- Subdisks reside on Vinum drives,
- currently &unix; partitions. Vinum drives can
- contain any number of subdisks. With the exception of a
- small area at the beginning of the drive, which is used
- for storing configuration and state information, the
- entire drive is available for data storage.
-
-
+
+ Since Vinum exists within the &unix; disk storage
+ framework, it would be possible to use &unix; partitions
+ as the building block for multi-disk plexes, but in fact
+ this turns out to be too inflexible: &unix; disks can have
+ only a limited number of partitions. Instead, Vinum
+ subdivides a single &unix; partition (the
+ drive) into contiguous areas called
+ subdisks, which it uses as building
+ blocks for plexes.
+
- The following sections describe the way these objects provide the
- functionality required of Vinum.
+
+ Subdisks reside on Vinum drives,
+ currently &unix; partitions. Vinum drives can contain any
+ number of subdisks. With the exception of a small area at
+ the beginning of the drive, which is used for storing
+ configuration and state information, the entire drive is
+ available for data storage.
+
+
+
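Spelled out in the configuration-file syntax covered later in this chapter, the four levels stack like this (a minimal sketch; the drive name and size are illustrative):

drive d0 device /dev/da0h
volume vol0
plex org concat
sd length 256m drive d0

Reading from the bottom up: the subdisk takes 256 MB of the drive d0, the concatenated plex maps that subdisk into a single address space, and the volume vol0 presents that address space to the system as a disk device.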
+ The following sections describe the way these objects
+ provide the functionality required of Vinum.

Volume Size Considerations
@@ -358,6 +356,7 @@
Redundant Data Storage
+
Vinum implements mirroring by attaching multiple plexes to
a volume. Each plex is a representation of the data in a
volume. A volume may contain between one and eight
@@ -395,8 +394,9 @@
Which Plex Organization?
- The version of Vinum supplied with &os; &rel.current; implements
- two kinds of plex:
+
+ The version of Vinum supplied with &os; &rel.current;
+ implements two kinds of plex:
@@ -409,7 +409,7 @@
measurable. On the other hand, they are most susceptible
to hot spots, where one disk is very active and others are
idle.
-
+
The greatest advantage of striped
@@ -427,19 +427,20 @@
- summarizes the advantages
- and disadvantages of each plex organization.
+ summarizes the
+ advantages and disadvantages of each plex organization.
Vinum Plex Organizations
+
Plex type
- Minimum subdisks
- Can add subdisks
- Must be equal size
- Application
+ Minimum subdisks
+ Can add subdisks
+ Must be equal size
+ Application
@@ -449,8 +450,8 @@
1
yes
no
- Large data storage with maximum placement flexibility
- and moderate performance
+ Large data storage with maximum placement
+ flexibility and moderate performance
@@ -458,8 +459,8 @@
2
no
yes
- High performance in combination with highly concurrent
- access
+ High performance in combination with highly
+ concurrent access
@@ -471,7 +472,7 @@
Some Examples

Vinum maintains a configuration
- database which describes the objects known to an
+ database which describes the objects known to an
individual system. Initially, the user creates the
configuration database from one or more configuration files with
the aid of the &man.gvinum.8; utility program. Vinum stores a
@@ -482,11 +483,11 @@
The Configuration File
- The configuration file describes individual Vinum objects. The
- definition of a simple volume might be:
-
- drive a device /dev/da3h
+ The configuration file describes individual Vinum objects.
+ The definition of a simple volume might be:
+
+ drive a device /dev/da3h
volume myvol
plex org concat
sd length 512m drive a
@@ -505,9 +506,9 @@
- The volume line describes a volume.
- The only required attribute is the name, in this case
- myvol.
+ The volume line describes a
+ volume. The only required attribute is the name, in this
+ case myvol.
@@ -535,8 +536,8 @@
- After processing this file, &man.gvinum.8; produces the following
- output:
+ After processing this file, &man.gvinum.8; produces the
+ following output:
&prompt.root; gvinum -> create config1
@@ -554,15 +555,16 @@
S myvol.p0.s0 State: up PO: 0 B Size: 512 MB
- This output shows the brief listing format of &man.gvinum.8;. It
- is represented graphically in .
+ This output shows the brief listing format of
+ &man.gvinum.8;. It is represented graphically in .

A Simple Vinum Volume
+
-
-
+ This figure, and the ones which follow, represent a
volume, which contains the plexes, which in turn contain the
@@ -587,8 +589,7 @@
that a drive failure will not take down both plexes. The
following configuration mirrors a volume:
-
- drive b device /dev/da4h
+ drive b device /dev/da4h
volume mirror
plex org concat
sd length 512m drive a
@@ -628,9 +629,9 @@
A Mirrored Vinum Volume
+
-
-
+
In this example, each plex contains the full 512 MB
of address space. As in the previous example, each plex
@@ -650,8 +651,7 @@
shows a volume with a plex striped across four disk
drives:
-
- drive c device /dev/da5h
+ drive c device /dev/da5h
drive d device /dev/da6h
volume stripe
plex org striped 512k
@@ -660,9 +660,9 @@
sd length 128m drive c
sd length 128m drive d
- As before, it is not necessary to define the drives which are
- already known to Vinum. After processing this definition, the
- configuration looks like:
+ As before, it is not necessary to define the drives which
+ are already known to Vinum. After processing this definition,
+ the configuration looks like:
Drives: 4 (4 configured)
@@ -695,27 +695,26 @@
A Striped Vinum Volume
+
-
-
+
This volume is represented in
- . The darkness of the stripes
- indicates the position within the plex address space: the lightest stripes
- come first, the darkest last.
+ . The darkness of the
+ stripes indicates the position within the plex address space:
+ the lightest stripes come first, the darkest last.
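Once the volume is up, it appears under /dev/gvinum like any other and can be put to use directly; a sketch (the device name follows the volume name stripe from the configuration above):

&prompt.root; newfs /dev/gvinum/stripe
&prompt.root; mount /dev/gvinum/stripe /mnt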
Resilience and Performance
- With sufficient hardware, it
- is possible to build volumes which show both increased
+ With sufficient hardware,
+ it is possible to build volumes which show both increased
resilience and increased performance compared to standard
&unix; partitions. A typical configuration file might
be:
-
- volume raid10
+ volume raid10
plex org striped 512k
sd length 102480k drive a
sd length 102480k drive b
@@ -729,19 +728,20 @@
sd length 102480k drive a
sd length 102480k drive b
- The subdisks of the second plex are offset by two drives from those
- of the first plex: this helps ensure that writes do not go to the same
- subdisks even if a transfer goes over two drives.
+ The subdisks of the second plex are offset by two drives
+ from those of the first plex: this helps ensure that writes do
+ not go to the same subdisks even if a transfer goes over two
+ drives.
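Labeling the four drives 0 through 3, the placement can be sketched as:

stripe n: first plex  -> drive  n      mod 4
          second plex -> drive (n + 2) mod 4

so the two copies of any given stripe always reside on different drives, and the volume survives the failure of any single drive.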
- represents the structure
- of this volume.
+ represents the
+ structure of this volume.A Mirrored, Striped Vinum Volume
+
-
-
+
@@ -762,19 +762,21 @@
drives may be up to 32 characters long.
Vinum objects are assigned device nodes in the hierarchy
- /dev/gvinum. The configuration shown above
- would cause Vinum to create the following device nodes:
+ /dev/gvinum. The
+ configuration shown above would cause Vinum to create the
+ following device nodes:
Device entries for each volume.
- These are the main devices used by Vinum. Thus the configuration
- above would include the devices
+ These are the main devices used by Vinum. Thus the
+ configuration above would include the devices
/dev/gvinum/myvol,
/dev/gvinum/mirror,
/dev/gvinum/striped,
- /dev/gvinum/raid5 and
- /dev/gvinum/raid10.
+ /dev/gvinum/raid5
+ and /dev/gvinum/raid10.
@@ -785,15 +787,15 @@
The directories
/dev/gvinum/plex, and
- /dev/gvinum/sd, which contain
- device nodes for each plex and for each subdisk,
+ /dev/gvinum/sd, which
+ contain device nodes for each plex and for each subdisk,
respectively.
- For example, consider the following configuration file:
-
- drive drive1 device /dev/sd1h
+ For example, consider the following configuration
+ file:
+ drive drive1 device /dev/sd1h
drive drive2 device /dev/sd2h
drive drive3 device /dev/sd3h
drive drive4 device /dev/sd4h
@@ -804,11 +806,11 @@
sd length 100m drive drive3
sd length 100m drive drive4
- After processing this file, &man.gvinum.8; creates the following
- structure in /dev/gvinum:
+ After processing this file, &man.gvinum.8; creates the
+ following structure in /dev/gvinum:
-
- drwxr-xr-x 2 root wheel 512 Apr 13 16:46 plex
+ drwxr-xr-x 2 root wheel 512 Apr 13 16:46 plex
crwxr-xr-- 1 root wheel 91, 2 Apr 13 16:46 s64
drwxr-xr-x 2 root wheel 512 Apr 13 16:46 sd
@@ -839,15 +841,16 @@
utilities, notably &man.newfs.8;, which previously tried to
interpret the last letter of a Vinum volume name as a
partition identifier. For example, a disk drive may have a
- name like /dev/ad0a or
- /dev/da2h. These names represent
- the first partition (a) on the
- first (0) IDE disk (ad) and the
- eighth partition (h) on the third
- (2) SCSI disk (da) respectively.
- By contrast, a Vinum volume might be called
- /dev/gvinum/concat, a name which has
- no relationship with a partition name.
+ name like /dev/ad0a
+ or /dev/da2h. These
+ names represent the first partition
+ (a) on the first (0) IDE disk
+ (ad) and the eighth partition
+ (h) on the third (2) SCSI disk
+ (da) respectively. By contrast, a
+ Vinum volume might be called /dev/gvinum/concat, a name
+ which has no relationship with a partition name.
In order to create a file system on this volume, use
&man.newfs.8;:
@@ -864,8 +867,8 @@
Vinum, but this is not recommended. The standard way to start
Vinum is as a kernel module (kld). You do
not even need to use &man.kldload.8; for Vinum: when you start
- &man.gvinum.8;, it checks whether the module has been loaded, and
- if it is not, it loads it automatically.
+ &man.gvinum.8;, it checks whether the module has been loaded,
+ and if it is not, it loads it automatically.
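To verify by hand that the module is resident, something like the following can be used (a sketch; &man.kldstat.8; lists the currently loaded modules):

&prompt.root; kldstat | grep geom_vinum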
@@ -878,7 +881,7 @@
configuration files. For example, a disk configuration might
contain the following text:
- volume myvol state up
+ volume myvol state up
volume bigraid state down
plex name myvol.p0 state up org concat vol myvol
plex name myvol.p1 state up org concat vol myvol
@@ -909,96 +912,96 @@ sd name bigraid.p0.s4 drive e plex bigraid.p0 state initializing len 4194304b dr
if they have been assigned different &unix; drive
IDs.
-
- Automatic Startup
+
+ Automatic Startup
-
- Gvinum always
- features an automatic startup once the kernel module is
- loaded, via &man.loader.conf.5;. To load the
- Gvinum module at boot time, add
- geom_vinum_load="YES" to
- /boot/loader.conf.
+ Gvinum always features an
+ automatic startup once the kernel module is loaded, via
+ &man.loader.conf.5;. To load the
+ Gvinum module at boot time, add
+ geom_vinum_load="YES" to
+ /boot/loader.conf.
- When you start Vinum with the gvinum
- start command, Vinum reads the configuration
- database from one of the Vinum drives. Under normal
- circumstances, each drive contains an identical copy of the
- configuration database, so it does not matter which drive is
- read. After a crash, however, Vinum must determine which
- drive was updated most recently and read the configuration
- from this drive. It then updates the configuration if
- necessary from progressively older drives.
+ When you start Vinum with the gvinum
+ start command, Vinum reads the configuration
+ database from one of the Vinum drives. Under normal
+ circumstances, each drive contains an identical copy of
+ the configuration database, so it does not matter which
+ drive is read. After a crash, however, Vinum must
+ determine which drive was updated most recently and read
+ the configuration from this drive. It then updates the
+ configuration if necessary from progressively older
+ drives.
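A minimal sketch of this at the command line, using the subcommands named above and in the earlier examples:

&prompt.root; gvinum start
&prompt.root; gvinum l

Here gvinum l prints the objects read from the on-disk configuration in the brief listing format shown earlier.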
+
+
+
-
-
-
+
+ Using Vinum for the Root Filesystem
-
- Using Vinum for the Root Filesystem
-
- For a machine that has fully-mirrored filesystems using
- Vinum, it is desirable to also mirror the root filesystem.
- Setting up such a configuration is less trivial than mirroring
- an arbitrary filesystem because:
-
-
-
- The root filesystem must be available very early during
- the boot process, so the Vinum infrastructure must already be
- available at this time.
-
-
- The volume containing the root filesystem also contains
- the system bootstrap and the kernel, which must be read
- using the host system's native utilities (e. g. the BIOS on
- PC-class machines) which often cannot be taught about the
- details of Vinum.
-
-
-
- In the following sections, the term root
- volume is generally used to describe the Vinum volume
- that contains the root filesystem. It is probably a good idea
- to use the name "root" for this volume, but
- this is not technically required in any way. All command
- examples in the following sections assume this name though.
-
-
- Starting up Vinum Early Enough for the Root
- Filesystem
-
- There are several measures to take for this to
- happen:
+ For a machine that has fully-mirrored filesystems using
+ Vinum, it is desirable to also mirror the root filesystem.
+ Setting up such a configuration is less trivial than mirroring
+ an arbitrary filesystem because:
- Vinum must be available in the kernel at boot-time.
- Thus, the method to start Vinum automatically described in
- is not applicable to
- accomplish this task, and the
- start_vinum parameter must actually
- not be set when the following setup
- is being arranged. The first option would be to compile
- Vinum statically into the kernel, so it is available all
- the time, but this is usually not desirable. There is
- another option as well, to have
- /boot/loader () load the vinum kernel module
- early, before starting the kernel. This can be
- accomplished by putting the line:
+ The root filesystem must be available very early
+ during the boot process, so the Vinum infrastructure must
+ already be available at this time.
+
+
+ The volume containing the root filesystem also
+ contains the system bootstrap and the kernel, which must
+ be read using the host system's native utilities (e.g.,
+ the BIOS on PC-class machines) which often cannot be
+ taught about the details of Vinum.
+
+
- geom_vinum_load="YES"
+ In the following sections, the term root
+ volume is generally used to describe the Vinum
+ volume that contains the root filesystem. It is probably a
+ good idea to use the name "root" for this
+ volume, but this is not technically required in any way. All
+ command examples in the following sections assume this name,
+ though.
+
+
+ Starting up Vinum Early Enough for the Root
+ Filesystem
+
+ There are several measures to take for this to
+ happen:
+
+
+
+ Vinum must be available in the kernel at boot-time.
+ Thus, the method to start Vinum automatically described
+ in is not applicable
+ to accomplish this task, and the
+ start_vinum parameter must actually
+ not be set when the following setup
+ is being arranged. The first option would be to compile
+ Vinum statically into the kernel, so it is available all
+ the time, but this is usually not desirable. There is
+ another option as well, to have
+ /boot/loader () load the vinum kernel module
+ early, before starting the kernel. This can be
+ accomplished by putting the line:
+
+ geom_vinum_load="YES"

into the file
/boot/loader.conf.
- For Gvinum, all startup
- is done automatically once the kernel module has been
- loaded, so the procedure described above is all that is
- needed.
+ For Gvinum, all startup is done
+ automatically once the kernel module has been loaded, so
+ the procedure described above is all that is
+ needed.
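Should the module ever need to be loaded by hand instead, this can also be done from the loader prompt before booting (a sketch, matching the troubleshooting advice later in this chapter):

OK load geom_vinum
OK boot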
@@ -1012,7 +1015,7 @@ sd name bigraid.p0.s4 drive e plex bigraid.p0 state initializing len 4194304b dr
/boot/loader) from the UFS filesystem, it
is simply impossible to also teach it about internal Vinum
structures so it could parse the Vinum configuration data, and
- figure out about the elements of a boot volume itself. Thus,
+ figure out the elements of a boot volume itself. Thus,
some tricks are necessary to provide the bootstrap code with
the illusion of a standard "a" partition
that contains the root filesystem.
@@ -1036,19 +1039,19 @@ sd name bigraid.p0.s4 drive e plex bigraid.p0 state initializing len 4194304b dr
filesystem. The bootstrap process will, however, only use one
of these replicas for finding the bootstrap and all the files,
until the kernel eventually mounts the root filesystem
- itself. Each single subdisk within these plexes will then
+ itself. Each single subdisk within these plexes will then
need its own "a" partition illusion, for
the respective device to become bootable. It is not strictly
necessary that each of these faked "a"
partitions is located at the same offset within its device,
compared with other devices containing plexes of the root
- volume. However, it is probably a good idea to create the
+ volume. However, it is probably a good idea to create the
Vinum volumes that way so the resulting mirrored devices are
symmetric, to avoid confusion.
- In order to set up these "a" partitions,
- for each device containing part of the root volume, the
- following needs to be done:
+ In order to set up these "a"
+ partitions, for each device containing part of the root
+ volume, the following needs to be done:
@@ -1094,9 +1097,9 @@ sd name bigraid.p0.s4 drive e plex bigraid.p0 state initializing len 4194304b dr
"offset" value for the new
"a" partition. The
"size" value for this partition can be
- taken verbatim from the calculation above. The
+ taken verbatim from the calculation above. The
"fstype" should be
- 4.2BSD. The
+ 4.2BSD. The
"fsize", "bsize",
and "cpg" values should best be chosen
to match the actual filesystem, though they are fairly
@@ -1144,8 +1147,7 @@ sd name bigraid.p0.s4 drive e plex bigraid.p0 state initializing len 4194304b dr
After the Vinum root volume has been set up, the output of
gvinum l -rv root could look like:
-
-...
+ ...
Subdisk root.p0.s0:
Size: 125829120 bytes (120 MB)
State: up
@@ -1156,37 +1158,35 @@ Subdisk root.p1.s0:
Size: 125829120 bytes (120 MB)
State: up
Plex root.p1 at offset 0 (0 B)
- Drive disk1 (/dev/da1h) at offset 135680 (132 kB)
-
+ Drive disk1 (/dev/da1h) at offset 135680 (132 kB)

The values to note are 135680 for the
offset (relative to partition
- /dev/da0h). This translates to 265
- 512-byte disk blocks in bsdlabel's terms.
- Likewise, the size of this root volume is 245760 512-byte
- blocks. /dev/da1h, containing the
+ /dev/da0h). This
+ translates to 265 512-byte disk blocks in
+ bsdlabel's terms. Likewise, the size of
+ this root volume is 245760 512-byte blocks. /dev/da1h, containing the
second replica of this root volume, has a symmetric
setup.The bsdlabel for these devices might look like:
-
-...
+ ...
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
a: 245760 281 4.2BSD 2048 16384 0 # (Cyl. 0*- 15*)
c: 71771688 0 unused 0 0 # (Cyl. 0 - 4467*)
- h: 71771672 16 vinum # (Cyl. 0*- 4467*)
-
+ h: 71771672 16 vinum # (Cyl. 0*- 4467*)

It can be observed that the "size"
parameter for the faked "a" partition
matches the value outlined above, while the
"offset" parameter is the sum of the offset
within the Vinum partition "h", and the
- offset of this partition within the device (or slice). This
+ offset of this partition within the device (or slice). This
is a typical setup that is necessary to avoid the problem
- described in . It can also
+ described in . It can also
be seen that the entire "a" partition is
completely within the "h" partition
containing all the Vinum data for this device.
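Spelled out with the numbers above (values in 512-byte blocks unless noted):

135680 bytes / 512 = 265     subdisk offset within "h"
265 + 16           = 281     "a" offset within the device
281 + 245760       = 246041  end of "a", well inside "h" (16 + 71771672 blocks)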
@@ -1201,15 +1201,16 @@ Subdisk root.p1.s0:
Troubleshooting

If something goes wrong, a way is needed to recover from
- the situation. The following list contains few known pitfalls
+ the situation. The following list contains few known pitfalls
and solutions.
- System Bootstrap Loads, but System Does Not Boot
+ System Bootstrap Loads, but System Does Not
+ Boot

If for any reason the system does not continue to boot,
the bootstrap can be interrupted by pressing the
- space key at the 10-seconds warning. The
+ space key at the 10-second warning. The
loader variables (like vinum.autostart)
can be examined using the show, and
manipulated using set or
@@ -1220,7 +1221,7 @@ Subdisk root.p1.s0:
simple load geom_vinum will help.

When ready, the boot process can be continued with a
- boot -as. The options
+ boot -as. The options
will request the kernel to ask for the
root filesystem to mount (), and make the
boot process stop in single-user mode (),
@@ -1233,8 +1234,8 @@ Subdisk root.p1.s0:
device that contains a valid root filesystem can be entered.
If /etc/fstab had been set up
correctly, the default should be something like
- ufs:/dev/gvinum/root. A typical alternate
- choice would be something like
+ ufs:/dev/gvinum/root. A typical
+ alternate choice would be something like
ufs:da0d which could be a
hypothetical partition that contains the pre-Vinum root
filesystem. Care should be taken if one of the alias
@@ -1268,10 +1269,10 @@ Subdisk root.p1.s0:
Panics
This situation will happen if the bootstrap has been
- destroyed by the Vinum installation. Unfortunately, Vinum
+ destroyed by the Vinum installation. Unfortunately, Vinum
currently leaves only 4 KB at the beginning of
its partition free before starting to write its Vinum header
- information. However, the stage one and two bootstraps plus
+ information. However, the stage one and two bootstraps plus
the bsdlabel embedded between them currently require 8 KB.
So if a Vinum partition was started at offset 0 within a
slice or disk that was meant to be bootable, the Vinum setup