diff --git a/en_US.ISO8859-1/books/handbook/filesystems/Makefile b/en_US.ISO8859-1/books/handbook/filesystems/Makefile new file mode 100644 index 0000000000..499d815d22 --- /dev/null +++ b/en_US.ISO8859-1/books/handbook/filesystems/Makefile @@ -0,0 +1,15 @@ +# +# Build the Handbook with just the content from this chapter. +# +# $FreeBSD$ +# + +CHAPTERS= filesystems/chapter.sgml + +VPATH= .. + +MASTERDOC= ${.CURDIR}/../${DOC}.${DOCBOOKSUFFIX} + +DOC_PREFIX?= ${.CURDIR}/../../../.. + +.include "../Makefile" diff --git a/en_US.ISO8859-1/books/handbook/filesystems/chapter.sgml b/en_US.ISO8859-1/books/handbook/filesystems/chapter.sgml new file mode 100644 index 0000000000..7f53445139 --- /dev/null +++ b/en_US.ISO8859-1/books/handbook/filesystems/chapter.sgml @@ -0,0 +1,619 @@ + + + + + + + Tom + Rhodes + Written by + + + + + File Systems Support + + + Synopsis + + File Systems + + File Systems Support + File Systems + + + File systems are an integral part of any operating system. + They allow users to store files, provide access + to data, and, of course, make hard drives useful. One major + aspect that different operating systems have in common is that + each has its own native file system. On &os;, this file system is known + as the Fast File System, or FFS, which is built + on the original Unix™ File System, also known as + UFS. This native file system is what &os; + places on hard disks to store and access data. + + &os; also supports a multitude of other file systems, + providing local access to data from other operating systems, + i.e., data stored on locally attached + USB storage devices, flash drives, and hard + disks. There is also support for some non-native file systems. + These are file systems developed on other + operating systems, like the &linux; Extended File System + (EXT) and the &sun; Z File System + (ZFS). + + There are different levels of support for the various file + systems in &os;. Some will require a kernel module to be loaded, + while others may require a toolset to be installed. This chapter is + designed to help users of &os; access other file systems on their + systems, starting with the &sun; Z file + system. + + After reading this chapter, you will know: + + + + The difference between native and supported file systems. + + + + What file systems are supported by &os;. + + + + How to enable, configure, access, and make use of + non-native file systems. + + + + Before reading this chapter, you should: + + + + Understand &unix; and &os; basics + (). + + + + Be familiar with + the basics of kernel configuration/compilation + (). + + + + Feel comfortable installing third-party software + in &os; (). + + + + Have some familiarity with disks, storage and + device names in &os; (). + + + + + + The ZFS feature is considered + experimental. Some options may be lacking in functionality, + while other parts may not work at all. In time, this feature will + be considered production ready and this documentation will be + altered to reflect that situation. + + + + + The Z File System + + The Z file system, developed by &sun;, is a new + technology designed to use a pooled storage method. This means + that space is used only as it is needed for data storage. It + has also been designed for maximum data integrity, supporting + data snapshots, multiple copies, and data checksums. A new + data replication model, known as RAID-Z, has + been added. The RAID-Z model is similar + to RAID5 but is designed to prevent data + write corruption.
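+ As mentioned in the synopsis, ZFS support is
+ provided by a loadable kernel module rather than being built
+ into the default kernel. Before working through the sections
+ below, it may be worth confirming that the module loads on the
+ system at hand. The following is only a minimal sketch of such
+ a check, assuming the standard zfs module
+ shipped with &os;; &man.kldstat.8; simply reports whether the
+ module is resident:
+
+ &prompt.root; kldload zfs
+&prompt.root; kldstat -m zfs
+
+ If the zfs_enable line described later in
+ this chapter is placed in /etc/rc.conf, the
+ module will be loaded automatically at boot and this manual
+ step is unnecessary.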
+ + + ZFS Tuning + + The ZFS subsystem utilizes a significant
+ amount of system resources, so some tuning may be required to provide
+ maximum efficiency during everyday use. As an experimental
+ feature in &os;, this may change in the near future; however,
+ at this time, the following steps are recommended. + + + Memory
+ + The total system memory should be at least one gigabyte,
+ with two gigabytes or more recommended. In all of the
+ examples here, the system has one gigabyte of memory with
+ several other tuning mechanisms in place. + + Some people have
+ had luck with less than one gigabyte
+ of memory, but with such a limited amount of physical memory,
+ it is quite likely that &os; will panic due to memory exhaustion
+ when the system is under heavy load. + + + + Kernel Configuration
+ + It is recommended that unused drivers and options
+ be removed from the kernel configuration file. Since most
+ device drivers are available as modules, they may simply be loaded
+ using the /boot/loader.conf file. + + Users of
+ the i386 architecture should add the following
+ option to their kernel configuration file, rebuild their
+ kernel, and reboot: + + options KVA_PAGES=512 + + This option
+ will expand the kernel address space, thus
+ allowing the vm.kvm_size tunable to be
+ pushed beyond the currently imposed limit of 1 GB
+ (2 GB for PAE). To find the most
+ suitable value for this option, divide the desired address
+ space in megabytes by four (4). In this case, it is
+ 512 for 2 GB. + + + + Loader Tunables
+ + The kmem address space should be
+ increased on all &os; architectures. On the test system with
+ one gigabyte of physical memory, success was achieved with the
+ following options, which should be placed in
+ the /boot/loader.conf file, after which the system
+ should be restarted: + + vm.kmem_size="330M"
+vm.kmem_size_max="330M"
+vfs.zfs.arc_max="40M"
+vfs.zfs.vdev.cache.size="5M"
+ + For a more detailed list of recommendations for ZFS-related
+ tuning, see
+ . + + + + + Using <acronym>ZFS</acronym>
+ + There is a startup mechanism that allows &os; to
+ mount ZFS pools during system
+ initialization. To enable it, issue the following
+ commands: + + &prompt.root; echo 'zfs_enable="YES"' >> /etc/rc.conf
+&prompt.root; /etc/rc.d/zfs start
+ + The remainder of this document assumes two
+ SCSI disks are available, and their device names
+ are da0
+ and da1,
+ respectively. Users of IDE hardware may
+ use the ad
+ devices in place of SCSI hardware. + + + Single Disk Pool
+ + To create a ZFS pool on a single disk
+ device, use the zpool command: + + &prompt.root; zpool create example /dev/da0
+ + To view the new pool, review the output of
+ df: + + &prompt.root; df
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235230 1628718 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032846 48737598 2% /usr
+example 17547136 0 17547136 0% /example
+ + This output clearly shows the example
+ pool has not only been created but
+ mounted as well. It is also accessible
+ just like a normal file system; files may be created on it
+ and users are able to browse it, as in the
+ following example: + + &prompt.root; cd /example
+&prompt.root; ls
+&prompt.root; touch testfile
+&prompt.root; ls -al
+total 4
+drwxr-xr-x 2 root wheel 3 Aug 29 23:15 .
+drwxr-xr-x 21 root wheel 512 Aug 29 23:12 ..
+-rw-r--r-- 1 root wheel 0 Aug 29 23:15 testfile
+ + Unfortunately, this pool is not taking advantage of
+ any ZFS features.
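+ Before enabling any of those features, it can be
+ instructive to look at the current property settings with the
+ zfs get command. The output below is merely
+ illustrative of a freshly created pool, where compression is
+ still at its default of off:
+
+ &prompt.root; zfs get compression example
+NAME     PROPERTY     VALUE     SOURCE
+example  compression  off       default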
Create a file system
+ on this pool, and enable compression on it: + + &prompt.root; zfs create example/compressed
+&prompt.root; zfs set compression=gzip example/compressed
+ + The example/compressed file system is now a
+ compressed ZFS file system. Try copying
+ some large files to
+ /example/compressed. + + The compression
+ may now be disabled with: + + &prompt.root; zfs set compression=off example/compressed
+ + To unmount the file system, issue the following command
+ and then verify by using the df
+ utility: + + &prompt.root; zfs umount example/compressed
+&prompt.root; df
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235232 1628716 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example
+ + Re-mount the file system to make it accessible
+ again, and verify with df: + + &prompt.root; zfs mount example/compressed
+&prompt.root; df
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235234 1628714 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example
+example/compressed 17547008 0 17547008 0% /example/compressed
+ + The pool and file system may also be observed by viewing
+ the output from mount: + + &prompt.root; mount
+/dev/ad0s1a on / (ufs, local)
+devfs on /dev (devfs, local)
+/dev/ad0s1d on /usr (ufs, local, soft-updates)
+example on /example (zfs, local)
+example/compressed on /example/compressed (zfs, local)
+ + As observed, ZFS file systems, after
+ creation, may be used like ordinary file systems; however,
+ many other features are also available. In the following
+ example, a new file system, data, is
+ created. Important files will be stored here, so the file
+ system is set to keep two copies of each data block: + + &prompt.root; zfs create example/data
+&prompt.root; zfs set copies=2 example/data
+ + It is now possible to see the data and space utilization
+ by issuing df again: + + &prompt.root; df
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235234 1628714 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example
+example/compressed 17547008 0 17547008 0% /example/compressed
+example/data 17547008 0 17547008 0% /example/data
+ + Notice that each file system on the pool has the same
+ amount of available space. This is the reason for using
+ df throughout these examples: to show
+ that the file systems are using only the amount of space
+ they need and will all draw from the same pool.
+ The ZFS file system does away with concepts
+ such as volumes and partitions, and allows several file
+ systems to occupy the same pool. Destroy the file systems
+ and then destroy the pool, as they are no longer
+ needed: + + &prompt.root; zfs destroy example/compressed
+&prompt.root; zfs destroy example/data
+&prompt.root; zpool destroy example
+ + Disks go bad and fail; it is an unavoidable trait. When
+ a disk fails, the data on it will be lost. One method of
+ avoiding data loss due to a failed hard disk is to implement
+ a RAID. ZFS supports
+ this feature in its pool design, which is covered in
+ the next section. + + + + <acronym>ZFS</acronym> RAID-Z
+ + As previously noted, this section will assume that
+ two SCSI disks exist as the devices
+ da0 and
+ da1.
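+ If it is not clear which disk devices are present on a
+ given machine, &man.camcontrol.8; can list the attached
+ SCSI devices along with the
+ da names assigned to them. The output
+ below is a hypothetical example and will differ from system
+ to system:
+
+ &prompt.root; camcontrol devlist
+<ExampleVendor ExampleDisk 0001>  at scbus0 target 0 lun 0 (pass0,da0)
+<ExampleVendor ExampleDisk 0001>  at scbus0 target 1 lun 0 (pass1,da1)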
To create a
+ RAID-Z pool, issue the following
+ command: + + &prompt.root; zpool create storage raidz da0 da1
+ + The storage zpool should have been
+ created. This may be verified by using the &man.mount.8; and
+ &man.df.1; commands as before. More disk devices may
+ be allocated by adding them to the end of the list above.
+ Make a new file system in the pool, called
+ home, where user files will eventually be
+ placed: + + &prompt.root; zfs create storage/home
+ + It is now possible to enable compression and keep extra
+ copies of the users' home directories and files. This may
+ be accomplished just as before, using the following
+ commands: + + &prompt.root; zfs set copies=2 storage/home
+&prompt.root; zfs set compression=gzip storage/home
+ + To make this the new home directory for users, copy the
+ user data to this directory, and create the appropriate
+ symbolic links: + + &prompt.root; cp -rp /home/* /storage/home
+&prompt.root; rm -rf /home /usr/home
+&prompt.root; ln -s /storage/home /home
+&prompt.root; ln -s /storage/home /usr/home
+ + Users should now have their data stored on the freshly
+ created /storage/home
+ file system. Test by adding a new user and logging in as
+ that user. + + Try creating a snapshot, which may be rolled back
+ later: + + &prompt.root; zfs snapshot storage/home@08-30-08
+ + Note that the snapshot option will only capture a real
+ file system, not a home directory or a file. The
+ @ character is a delimiter between
+ the file system or volume name and the snapshot name. When a user's
+ home directory gets trashed, restore it with: + + &prompt.root; zfs rollback storage/home@08-30-08
+ + To get a list of all available snapshots, run
+ ls in the file system's
+ .zfs/snapshot
+ directory. For example, to see the previously taken
+ snapshot, perform the following command: + + &prompt.root; ls /storage/home/.zfs/snapshot
+ + It is possible to write a script to perform monthly
+ snapshots on user data; however, over time, snapshots
+ may consume a great deal of disk space. The previous
+ snapshot may be removed using the following command: + + &prompt.root; zfs destroy storage/home@08-30-08
+ + There is no reason, after all of this testing, to
+ keep /storage/home
+ around in its present state. Make it the real
+ /home file
+ system: + + &prompt.root; zfs set mountpoint=/home storage/home
+ + Issuing the df and
+ mount commands will show that the system
+ now treats the file system as the real
+ /home: + + &prompt.root; mount
+/dev/ad0s1a on / (ufs, local)
+devfs on /dev (devfs, local)
+/dev/ad0s1d on /usr (ufs, local, soft-updates)
+storage on /storage (zfs, local)
+storage/home on /home (zfs, local)
+&prompt.root; df
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235240 1628708 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032826 48737618 2% /usr
+storage 17547008 0 17547008 0% /storage
+storage/home 17547008 0 17547008 0% /home
+ + This completes the RAID-Z
+ configuration. To have status updates about the created file
+ systems reported during the nightly &man.periodic.8; runs, issue the
+ following command: + + &prompt.root; echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf
+ + + + Recovering <acronym>RAID</acronym>-Z
+ + Every software RAID has a method of
+ monitoring its state.
+ ZFS is no exception.
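+ A quick overview of every pool on the system, including
+ its size, usage, and health, may be obtained with
+ zpool list. The figures below are merely
+ illustrative of the two-disk test pool used in this
+ chapter:
+
+ &prompt.root; zpool list
+NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
+storage  33.8G   154K  33.7G     0%  ONLINE  -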
The status of
+ RAID-Z devices may be viewed with the
+ following command: + + &prompt.root; zpool status -x
+ + If all pools are healthy and everything is normal, the
+ following message will be returned: + + all pools are healthy
+ + If there is an issue, for example a disk has gone offline,
+ the pool state will be returned and will look similar to: + + pool: storage
+ state: DEGRADED
+status: One or more devices has been taken offline by the administrator.
+ Sufficient replicas exist for the pool to continue functioning in a
+ degraded state.
+action: Online the device using 'zpool online' or replace the device with
+ 'zpool replace'.
+ scrub: none requested
+config:
+
+ NAME STATE READ WRITE CKSUM
+ storage DEGRADED 0 0 0
+ raidz1 DEGRADED 0 0 0
+ da0 ONLINE 0 0 0
+ da1 OFFLINE 0 0 0
+
+errors: No known data errors
+ + This states that the device was taken offline by the
+ administrator, which is true for this particular example.
+ To take the disk offline, the following command was
+ used: + + &prompt.root; zpool offline storage da1
+ + It is now possible to replace
+ da1 after the system has been
+ powered down. When the system is back online, the following
+ command may be issued to replace the disk: + + &prompt.root; zpool replace storage da1
+ + From here, the status may be checked again, this time
+ without the -x flag, to get state
+ information: + + &prompt.root; zpool status storage
+ pool: storage
+ state: ONLINE
+ scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
+config:
+
+ NAME STATE READ WRITE CKSUM
+ storage ONLINE 0 0 0
+ raidz1 ONLINE 0 0 0
+ da0 ONLINE 0 0 0
+ da1 ONLINE 0 0 0
+
+errors: No known data errors
+ + As shown in this example, everything appears to be
+ normal. + + + + Data Verification
+ + As previously mentioned, ZFS uses
+ checksums to verify the integrity of
+ stored data. They are enabled automatically upon creation
+ of file systems and may be disabled using the following
+ command: + + &prompt.root; zfs set checksum=off storage/home
+ + Doing so is not a wise idea, however, as checksums take
+ very little storage space and are more useful when enabled. There
+ also appear to be no noticeable costs in having them enabled.
+ While enabled, it is possible to have ZFS
+ check data integrity using checksum verification. This
+ process is known as scrubbing. To verify the
+ data integrity of the storage pool, issue
+ the following command: + + &prompt.root; zpool scrub storage
+ + This process may take considerable time depending on
+ the amount of data stored. It is also very
+ I/O intensive, so much so that only one
+ of these operations may be run at any given time. After
+ the scrub has completed, the status is updated and may be
+ viewed by issuing a status request: + + &prompt.root; zpool status storage
+ pool: storage
+ state: ONLINE
+ scrub: scrub completed with 0 errors on Sat Aug 30 19:57:37 2008
+config:
+
+ NAME STATE READ WRITE CKSUM
+ storage ONLINE 0 0 0
+ raidz1 ONLINE 0 0 0
+ da0 ONLINE 0 0 0
+ da1 ONLINE 0 0 0
+
+errors: No known data errors
+ + The completion time is in plain view in this example.
+ This feature helps to ensure data integrity over a long
+ period of time. + + There are many more options for the Z file system;
+ see the &man.zfs.8; and &man.zpool.8; manual
+ pages. + + + + + + +