From 6aea3fc76d6f27c35be8f5ae782b77c985571f5b Mon Sep 17 00:00:00 2001 From: Dru Lavigne Date: Mon, 11 Feb 2013 14:58:34 +0000 Subject: [PATCH] This patch addresses the following: - removes you - fixes xref - modernizes the intro - modernizes the ZFS RAM section - updates the date in one sample output Approved by: gjb (mentor) --- .../books/handbook/filesystems/chapter.xml | 533 ++++++++---------- 1 file changed, 250 insertions(+), 283 deletions(-) diff --git a/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml b/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml index 3602d925ca..407b67d6e1 100644 --- a/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml +++ b/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml @@ -27,32 +27,30 @@ File systems are an integral part of any operating system. - They allow for users to upload and store files, provide access - to data, and of course, make hard drives useful. Different - operating systems usually have one major aspect in common, that - is their native file system. On &os; this file system is known - as the Fast File System or FFS which is built - on the original Unix™ File System, also known as - UFS. This is the native file system on &os; - which is placed on hard disks for access to data. - - &os; also supports a multitude of different file systems to - provide support for accessing data from other operating systems - locally, i.e., data stored on locally attached - USB storage devices, flash drives, and hard - disks. There is also support for some non-native file systems. - These are file systems developed on other - operating systems, like the &linux; Extended File System - (EXT), and the &sun; Z File System - (ZFS). - - There are different levels of support for the various file - systems in &os;. Some will require a kernel module to be - loaded, others may require a toolset to be installed. This - chapter is designed to help users of &os; access other file - systems on their systems, starting with the &sun; Z file + They allow users to upload and store files, provide access + to data, and make hard drives useful. Different operating + systems differ in their native file system. Traditionally, the + native &os; file system has been the Unix File System + UFS which has been recently modernized as + UFS2. Since &os; 7.0, the Z File + System ZFS is also available as a native file system. + In addition to its native file systems, &os; supports a + multitude of other file systems so that data from other + operating systems can be accessed locally, such as data stored + on locally attached USB storage devices, + flash drives, and hard disks. This includes support for the + &linux; Extended File System (EXT) and the + µsoft; New Technology File System + (NTFS). + + There are different levels of &os; support for the various + file systems. Some require a kernel module to be loaded and + others may require a toolset to be installed. Some non-native + file system support is full read-write while others are + read-only. + After reading this chapter, you will know: @@ -62,11 +60,11 @@ - What file systems are supported by &os;. + Which file systems are supported by &os;. - How to enable, configure, access and make use of + How to enable, configure, access, and make use of non-native file systems. @@ -75,24 +73,25 @@ - Understand &unix; and &os; basics - (). + Understand &unix; and &os; basics. - Be familiar with - the basics of kernel configuration/compilation - (). + Be familiar with the basics of kernel configuration and + compilation. 
- Feel comfortable installing third party software - in &os; (). + Feel comfortable installing + software in &os;. - Have some familiarity with disks, storage and - device names in &os; (). + Have some familiarity with disks, storage, and device names in + &os;. @@ -100,73 +99,67 @@ The Z File System (ZFS) - The Z file system, developed by &sun;, is a new - technology designed to use a pooled storage method. This means - that space is only used as it is needed for data storage. It - has also been designed for maximum data integrity, supporting - data snapshots, multiple copies, and data checksums. A new - data replication model, known as RAID-Z has - been added. The RAID-Z model is similar - to RAID5 but is designed to prevent data - write corruption. + The Z file system, originally developed by &sun;, + is designed to use a pooled storage method in that space is only + used as it is needed for data storage. It is also designed for + maximum data integrity, supporting data snapshots, multiple + copies, and data checksums. It uses a software data replication + model, known as RAID-Z. + RAID-Z provides redundancy similar to + hardware RAID, but is designed to prevent + data write corruption and to overcome some of the limitations + of hardware RAID. ZFS Tuning - The ZFS subsystem utilizes much of - the system resources, so some tuning may be required to - provide maximum efficiency during every-day use. As an - experimental feature in &os; this may change in the near - future; however, at this time, the following steps are - recommended. + Some of the features provided by ZFS + are RAM-intensive, so some tuning may be required to provide + maximum efficiency on systems with limited RAM. Memory - The total system memory should be at least one gigabyte, - with two gigabytes or more recommended. In all of the - examples here, the system has one gigabyte of memory with - several other tuning mechanisms in place. - - Some people have had luck using fewer than one gigabyte - of memory, but with such a limited amount of physical - memory, when the system is under heavy load, it is very - plausible that &os; will panic due to memory - exhaustion. + At a bare minimum, the total system memory should be at + least one gigabyte. The amount of recommended RAM depends + upon the size of the pool and the ZFS features which are + used. A general rule of thumb is 1GB of RAM for every 1TB + of storage. If the deduplication feature is used, a general + rule of thumb is 5GB of RAM per TB of storage to be + deduplicated. While some users successfully use ZFS with + less RAM, it is possible that when the system is under heavy + load, it may panic due to memory exhaustion. Further tuning + may be required for systems with less than the recommended + RAM requirements. Kernel Configuration - It is recommended that unused drivers and options - be removed from the kernel configuration file. Since most - devices are available as modules, they may be loaded - using the /boot/loader.conf - file. 
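      When ZFS is built as a module rather than
      compiled into the kernel, it can also be loaded automatically
      at boot time.  A minimal sketch of the relevant
      /boot/loader.conf line, assuming the stock
      zfs.ko module is used:

zfs_load="YES"

      After boot, kldstat can be used to confirm
      that the module is present.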
- - Users of the &i386; architecture should add the - following option to their kernel configuration file, - rebuild their kernel, and reboot: + Due to the RAM limitations of the &i386; platform, users + using ZFS on the &i386; architecture should add the + following option to a custom kernel configuration file, + rebuild the kernel, and reboot: options KVA_PAGES=512 - This option will expand the kernel address space, thus - allowing the vm.kvm_size tunable to be - pushed beyond the currently imposed limit of 1 GB - (2 GB for PAE). To find the most - suitable value for this option, divide the desired address - space in megabytes by four (4). In this case, it is - 512 for 2 GB. + This option expands the kernel address space, allowing + the vm.kvm_size tunable to be pushed + beyond the currently imposed limit of 1 GB, or the + limit of 2 GB for PAE. To find the + most suitable value for this option, divide the desired + address space in megabytes by four (4). In this example, it + is 512 for 2 GB. Loader Tunables - The kmem address space should - be increased on all &os; architectures. On the test system + The kmem address space can + be increased on all &os; architectures. On a test system with one gigabyte of physical memory, success was achieved - with the following options which should be placed in the - /boot/loader.conf file and the system + with the following options added to + /boot/loader.conf, and the system restarted: vm.kmem_size="330M" @@ -191,22 +184,21 @@ vfs.zfs.vdev.cache.size="5M" &prompt.root; echo 'zfs_enable="YES"' >> /etc/rc.conf &prompt.root; service zfs start - The remainder of this document assumes three - SCSI disks are available, and their - device names are + The examples in this section assume three + SCSI disks with the device names da0, - da1 + da1, and da2. - Users of IDE hardware may use the + Users of IDE hardware should instead use ad - devices in place of SCSI hardware. + device names. Single Disk Pool To create a simple, non-redundant ZFS - pool using a single disk device, use the - zpool command: + pool using a single disk device, use + zpool: &prompt.root; zpool create example /dev/da0 @@ -220,12 +212,11 @@ devfs 1 1 0 100% /dev /dev/ad0s1d 54098308 1032846 48737598 2% /usr example 17547136 0 17547136 0% /example - This output clearly shows the example - pool has not only been created but - mounted as well. It is also accessible - just like a normal file system, files may be created on it - and users are able to browse it as in the - following example: + This output shows that the example + pool has been created and mounted. It + is now accessible as a file system. Files may be created + on it and users can browse it, as seen in the following + example: &prompt.root; cd /example &prompt.root; ls @@ -236,25 +227,24 @@ drwxr-xr-x 2 root wheel 3 Aug 29 23:15 . drwxr-xr-x 21 root wheel 512 Aug 29 23:12 .. -rw-r--r-- 1 root wheel 0 Aug 29 23:15 testfile - Unfortunately this pool is not taking advantage of - any ZFS features. Create a file system - on this pool, and enable compression on it: + However, this pool is not taking advantage of any + ZFS features. To create a dataset on + this pool with compression enabled: &prompt.root; zfs create example/compressed &prompt.root; zfs set compression=gzip example/compressed - The example/compressed is now a - ZFS compressed file system. Try copying - some large files to it by copying them to The example/compressed dataset is now + a ZFS compressed file system. Try + copying some large files to /example/compressed. 
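      To gauge how effective the compression is, the
      compressratio property of the dataset can be
      queried.  A brief sketch, assuming some reasonably compressible
      files have already been copied into
      /example/compressed:

&prompt.root; zfs get compressratio example/compressed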
- The compression may now be disabled with: + Compression can be disabled with: &prompt.root; zfs set compression=off example/compressed - To unmount the file system, issue the following command - and then verify by using the df - utility: + To unmount a file system, issue the following command + and then verify by using df: &prompt.root; zfs umount example/compressed &prompt.root; df @@ -264,7 +254,7 @@ devfs 1 1 0 100% /dev /dev/ad0s1d 54098308 1032864 48737580 2% /usr example 17547008 0 17547008 0% /example - Re-mount the file system to make it accessible + To re-mount the file system to make it accessible again, and verify with df: &prompt.root; zfs mount example/compressed @@ -287,18 +277,19 @@ example on /example (zfs, local) example/data on /example/data (zfs, local) example/compressed on /example/compressed (zfs, local) - As observed, ZFS file systems, after - creation, may be used like ordinary file systems; however, - many other features are also available. In the following - example, a new file system, data is - created. Important files will be stored here, so the file - system is set to keep two copies of each data block: + ZFS datasets, after creation, may be + used like any file systems. However, many other features + are available which can be set on a per-dataset basis. In + the following example, a new file system, + data is created. Important files will be + stored here, the file system is set to keep two copies of + each data block: &prompt.root; zfs create example/data &prompt.root; zfs set copies=2 example/data It is now possible to see the data and space utilization - by issuing df again: + by issuing df: &prompt.root; df Filesystem 1K-blocks Used Avail Capacity Mounted on @@ -311,64 +302,56 @@ example/data 17547008 0 17547008 0% /example/data Notice that each file system on the pool has the same amount of available space. This is the reason for using - df through these examples, to show - that the file systems are using only the amount of space - they need and will all draw from the same pool. The - ZFS file system does away with concepts - such as volumes and partitions, and allows for several file - systems to occupy the same pool. Destroy the file systems, - and then destroy the pool as they are no longer - needed: + df in these examples, to show that the + file systems use only the amount of space they need and all + draw from the same pool. The ZFS file + system does away with concepts such as volumes and + partitions, and allows for several file systems to occupy + the same pool. + + To destroy the file systems and then destroy the pool as + they are no longer needed: &prompt.root; zfs destroy example/compressed &prompt.root; zfs destroy example/data &prompt.root; zpool destroy example - Disks go bad and fail, an unavoidable trait. When - this disk goes bad, the data will be lost. One method of - avoiding data loss due to a failed hard disk is to implement - a RAID. ZFS supports - this feature in its pool design which is covered in - the next section. <acronym>ZFS</acronym> RAID-Z - As previously noted, this section will assume that - three SCSI disks exist as devices - da0, da1 - and da2 (or - ad0 and beyond in case IDE disks - are being used). To create a RAID-Z - pool, issue the following command: + There is no way to prevent a disk from failing. One + method of avoiding data loss due to a failed hard disk is to + implement RAID. ZFS + supports this feature in its pool design. 
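      Before building a redundant pool, it can help to confirm
      which disks the system has detected.  One way to list them,
      assuming CAM-attached da devices as in
      these examples:

&prompt.root; camcontrol devlist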
+ + To create a RAID-Z pool, issue the + following command and specify the disks to add to the + pool: &prompt.root; zpool create storage raidz da0 da1 da2 - &sun; recommends that the amount of devices used - in a RAID-Z configuration is between - three and nine. If your needs call for a single pool to - consist of 10 disks or more, consider breaking it up into - smaller RAID-Z groups. If you only - have two disks and still require redundancy, consider - using a ZFS mirror instead. See the - &man.zpool.8; manual page for more details. + &sun; recommends that the amount of devices used in + a RAID-Z configuration is between + three and nine. For environments requiring a single pool + consisting of 10 disks or more, consider breaking it up + into smaller RAID-Z groups. If only + two disks are available and redundancy is a requirement, + consider using a ZFS mirror. Refer to + &man.zpool.8; for more details. - The storage zpool should have been - created. This may be verified by using the &man.mount.8; - and &man.df.1; commands as before. More disk devices may - have been allocated by adding them to the end of the list - above. Make a new file system in the pool, called - home, where user files will eventually - be placed: + This command creates the storage + zpool. This may be verified using &man.mount.8; and + &man.df.1;. This command makes a new file system in the + pool called home: &prompt.root; zfs create storage/home It is now possible to enable compression and keep extra - copies of the user's home directories and files. This may - be accomplished just as before using the following + copies of directories and files using the following commands: &prompt.root; zfs set copies=2 storage/home @@ -384,9 +367,9 @@ example/data 17547008 0 17547008 0% /example/data &prompt.root; ln -s /storage/home /usr/home Users should now have their data stored on the freshly - created /storage/home - file system. Test by adding a new user and logging in as - that user. + created /storage/home. Test by + adding a new user and logging in as that user. Try creating a snapshot which may be rolled back later: @@ -405,28 +388,27 @@ example/data 17547008 0 17547008 0% /example/data ls in the file system's .zfs/snapshot directory. For example, to see the previously taken - snapshot, perform the following command: + snapshot: &prompt.root; ls /storage/home/.zfs/snapshot - It is possible to write a script to perform monthly - snapshots on user data; however, over time, snapshots + It is possible to write a script to perform regular + snapshots on user data. However, over time, snapshots may consume a great deal of disk space. The previous snapshot may be removed using the following command: &prompt.root; zfs destroy storage/home@08-30-08 - After all of this testing, there is no reason we should - keep /storage/home - around in its present state. Make it the real - /home file - system: + After testing, /storage/home can be made the + real /home using + this command: &prompt.root; zfs set mountpoint=/home storage/home - Issuing the df and - mount commands will show that the system - now treats our file system as the real + Run df and + mount to confirm that the system now + treats the file system as the real /home: &prompt.root; mount @@ -455,8 +437,7 @@ storage/home 26320512 0 26320512 0% /home Recovering <acronym>RAID</acronym>-Z Every software RAID has a method of - monitoring their state. - ZFS is no exception. The status of + monitoring its state. 
The status of RAID-Z devices may be viewed with the following command: @@ -468,7 +449,7 @@ storage/home 26320512 0 26320512 0% /home all pools are healthy If there is an issue, perhaps a disk has gone offline, - the pool state will be returned and look similar to: + the pool state will look similar to: pool: storage state: DEGRADED @@ -489,14 +470,13 @@ config: errors: No known data errors - This states that the device was taken offline by the - administrator. This is true for this particular example. - To take the disk offline, the following command was - used: + This indicates that the device was previously taken + offline by the administrator using the following + command: &prompt.root; zpool offline storage da1 - It is now possible to replace the + It is now possible to replace da1 after the system has been powered down. When the system is back online, the following command may issued to replace the disk: @@ -529,37 +509,34 @@ errors: No known data errors Data Verification - As previously mentioned, ZFS uses + ZFS uses checksums to verify the integrity of - stored data. They are enabled automatically upon creation + stored data. These are enabled automatically upon creation of file systems and may be disabled using the following command: &prompt.root; zfs set checksum=off storage/home - This is not a wise idea, however, as checksums take - very little storage space and are more useful when enabled. - There also appears to be no noticeable costs in having them - enabled. While enabled, it is possible to have - ZFS check data integrity using checksum - verification. This process is known as - scrubbing. To verify the data integrity of - the storage pool, issue the following - command: + Doing so is not recommended as + checksums take very little storage space and are used to + check data integrity using checksum verification in a + process is known as scrubbing. To verify the + data integrity of the storage pool, issue + this command: &prompt.root; zpool scrub storage This process may take considerable time depending on the amount of data stored. It is also very - I/O intensive, so much that only one - of these operations may be run at any given time. After - the scrub has completed, the status is updated and may be - viewed by issuing a status request: + I/O intensive, so much so that only one + scrub may be run at any given time. After the scrub has + completed, the status is updated and may be viewed by + issuing a status request: &prompt.root; zpool status storage pool: storage state: ONLINE - scrub: scrub completed with 0 errors on Sat Aug 30 19:57:37 2008 + scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013 config: NAME STATE READ WRITE CKSUM @@ -571,43 +548,39 @@ config: errors: No known data errors - The completion time is in plain view in this example. - This feature helps to ensure data integrity over a long - period of time. + The completion time is displayed and helps to ensure + data integrity over a long period of time. - There are many more options for the Z file system, - see the &man.zfs.8; and &man.zpool.8; manual - pages. + Refer to &man.zfs.8; and &man.zpool.8; for other + ZFS options. ZFS Quotas - ZFS supports different types of quotas; the - refquota, the general quota, the user quota, and - the group quota. This section will explain the - basics of each one, and include some usage - instructions. + ZFS supports different types of quotas: the refquota, + the general quota, the user quota, and the group quota. 
+ This section explains the basics of each type and includes + some usage instructions. - Quotas limit the amount of space that a dataset - and its descendants can consume, and enforce a limit - on the amount of space used by filesystems and - snapshots for the descendants. In terms of users, - quotas are useful to limit the amount of space a - particular user can use. + Quotas limit the amount of space that a dataset and its + descendants can consume, and enforce a limit on the amount + of space used by filesystems and snapshots for the + descendants. Quotas are useful to limit the amount of space + a particular user can use. Quotas cannot be set on volumes, as the - volsize property acts as an - implicit quota. + volsize property acts as an implicit + quota. - The refquota, - refquota=size, - limits the amount of space a dataset can consume - by enforcing a hard limit on the space used. However, - this hard limit does not include space used by descendants, - such as file systems or snapshots. + The + refquota=size + limits the amount of space a dataset can consume by + enforcing a hard limit on the space used. However, this + hard limit does not include space used by descendants, such + as file systems or snapshots. To enforce a general quota of 10 GB for storage/home/bob, use the @@ -615,9 +588,8 @@ errors: No known data errors &prompt.root; zfs set quota=10G storage/home/bob - User quotas limit the amount of space that can - be used by the specified user. The general format - is + User quotas limit the amount of space that can be used + by the specified user. The general format is userquota@user=size, and the user's name must be in one of the following formats: @@ -626,28 +598,28 @@ errors: No known data errors POSIX compatible name - (e.g., joe). + Interface">POSIX compatible name such as + joe. POSIX - numeric ID (e.g., - 789). + numeric ID such as + 789. SID name - (e.g., - joe.bloggs@example.com). + such as + joe.bloggs@example.com. SID - numeric ID (e.g., - S-1-123-456-789). + numeric ID such as + S-1-123-456-789. @@ -670,7 +642,7 @@ errors: No known data errors privilege are able to view and set everyone's quota. The group quota limits the amount of space that a - specified user group can consume. The general format is + specified group can consume. The general format is groupquota@group=size. To set the quota for the group @@ -680,30 +652,29 @@ errors: No known data errors &prompt.root; zfs set groupquota@firstgroup=50G To remove the quota for the group - firstgroup, or make sure that one - is not set, instead use: + firstgroup, or to make sure that + one is not set, instead use: &prompt.root; zfs set groupquota@firstgroup=none As with the user quota property, non-root users can only see the quotas - associated with the user groups that they belong to, however - a root user or a user with the + associated with the groups that they belong to. However, + root or a user with the groupquota privilege can view and set all quotas for all groups. - The zfs userspace subcommand displays - the amount of space consumed by each user on the specified - filesystem or snapshot, along with any specified quotas. - The zfs groupspace subcommand does the - same for groups. For more information about supported - options, or only displaying specific options, see - &man.zfs.1;. + To display the amount of space consumed by each user on + the specified filesystem or snapshot, along with any + specified quotas, use zfs userspace. + For group information, use zfs + groupspace. 
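      As a short illustration of these commands, the following
      sketch sets a 5 GB quota for a hypothetical user
      joe and then reviews per-user consumption on
      the dataset from the earlier examples:

&prompt.root; zfs set userquota@joe=5G storage/home
&prompt.root; zfs userspace storage/home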
For more information about + supported options or how to display only specific options, + refer to &man.zfs.1;. - To list the quota for - storage/home/bob, if you have the - correct privileges or are root, use the - following: + Users with sufficient privileges and + root can list the quota for + storage/home/bob using: &prompt.root; zfs get quota storage/home/bob @@ -711,9 +682,9 @@ errors: No known data errors ZFS Reservations - ZFS supports two types of space reservations. - This section will explain the basics of each one, - and include some usage instructions. + ZFS supports two types of space reservations. This + section explains the basics of each and includes some usage + instructions. The reservation property makes it possible to reserve a minimum amount of space guaranteed @@ -732,23 +703,22 @@ errors: No known data errors not counted by the refreservation amount and so do not encroach on the space set. - Reservations of any sort are useful in many - situations, for example planning and testing the - suitability of disk space allocation in a new system, or - ensuring that enough space is available on file systems - for system recovery procedures and files. + Reservations of any sort are useful in many situations, + such as planning and testing the suitability of disk space + allocation in a new system, or ensuring that enough space is + available on file systems for system recovery procedures and + files. The general format of the reservation property is -reservation=size, + reservation=size, so to set a reservation of 10 GB on - storage/home/bobthe below command is - used: + storage/home/bob, use: &prompt.root; zfs set reservation=10G storage/home/bob To make sure that no reservation is set, or to remove a - reservation, instead use: + reservation, use: &prompt.root; zfs set reservation=none storage/home/bob @@ -770,24 +740,24 @@ errors: No known data errors &linux; Filesystems - This section will describe some of the &linux; filesystems + This section describes some of the &linux; filesystems supported by &os;. - Ext2FS + <acronym>ext2</acronym> - The &man.ext2fs.5; file system kernel implementation was - written by Godmar Back, and the driver first appeared in - &os; 2.2. In &os; 8 and earlier, the code is licensed under - the GNU Public License, however under &os; - 9, the code has been rewritten and it is now licensed under - the BSD license. + The &man.ext2fs.5; file system kernel implementation has + been available since &os; 2.2. In &os; 8.x and + earlier, the code is licensed under the + GPL. Since &os; 9.0, the code has + been rewritten and is now BSD + licensed. - The &man.ext2fs.5; driver will allow the &os; kernel - to both read and write to ext2 file - systems. + The &man.ext2fs.5; driver allows the &os; kernel to both + read and write to ext2 file systems. - First, load the kernel loadable module: + To access an ext2 file system, first + load the kernel loadable module: &prompt.root; kldload ext2fs @@ -800,11 +770,10 @@ errors: No known data errors XFS - The X file system, XFS, was originally - written by SGI for the - IRIX operating system, and they ported it - to &linux;. The source code has been released under the - GNU Public License. See + XFS was originally written by + SGI for the IRIX + operating system and was then ported to &linux; and + released under the GPL. See this page for more details. The &os; port was started by Russel Cattelan, &a.kan;, and &a.rodrigc;. 
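      The ext2 section above loads the driver but
      stops short of a mount example.  A minimal sketch, assuming the
      ext2 file system lives on
      /dev/ad1s1 as in the XFS example
      below:

&prompt.root; mount -t ext2fs /dev/ad1s1 /mnt
&prompt.root; umount /mnt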
@@ -814,21 +783,19 @@ errors: No known data errors &prompt.root; kldload xfs - The &man.xfs.5; driver lets the &os; kernel access - XFS filesystems. However, at present only read-only - access is supported. Writing to a volume is not - possible. + The &man.xfs.5; driver lets the &os; kernel access XFS + filesystems. However, only read-only access is supported and + writing to a volume is not possible. To mount a &man.xfs.5; volume located on - /dev/ad1s1, do the following: + /dev/ad1s1: &prompt.root; mount -t xfs /dev/ad1s1 /mnt - Also useful to note is that the - sysutils/xfsprogs port - contains the mkfs.xfs utility which enables - creation of XFS filesystems, plus utilities - for analysing and repairing them. + The sysutils/xfsprogs + port includes the mkfs.xfs which enables + the creation of XFS filesystems, plus + utilities for analyzing and repairing them. The -p flag to mkfs.xfs can be used to create an @@ -842,11 +809,11 @@ errors: No known data errors The Reiser file system, ReiserFS, was ported to &os; by &a.dumbbell;, and has been released under the - GNU Public License. + GPL . - The ReiserFS driver will permit the &os; kernel to - access ReiserFS file systems and read their contents, but not - write to them, currently. + The ReiserFS driver permits the &os; kernel to access + ReiserFS file systems and read their contents, but not + write to them. First, the kernel-loadable module needs to be loaded:
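      On &os;, loading the module and performing a read-only mount
      would normally look something like the following sketch,
      assuming the ReiserFS volume is on
      /dev/ad1s1 as in the earlier
      examples:

&prompt.root; kldload reiserfs
&prompt.root; mount -t reiserfs /dev/ad1s1 /mnt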