ZFS is a new file system in the Solaris 10 OS that provides excellent data
integrity and performance compared to other file systems, particularly in
enterprise storage scenarios. Unlike previous file systems, it is a 128-bit
file system, which means it can scale to accommodate extremely large amounts
of data; it is perhaps the world's first 128-bit file system. But why do we
need so much scalability? The reason is simple: in an enterprise, data is
continuously stored on servers and keeps growing, and enterprises want to keep
as much of this data live as possible, so that it can be retrieved quickly
when required.
In traditional file systems, data is stored on a single disk or on a large
volume consisting of multiple disks. ZFS instead uses a pooled storage model:
every storage device is part of a single expandable storage pool, irrespective
of where the data is being written. The pool can host multiple file systems,
which helps administrators scale the system easily and efficiently; you no
longer need to take care of the file system, just add a storage device to the
pool. With this new architecture, each file system that resides under the pool
can share the same space and I/O resources as the pool itself.
ZFS also detects and, where possible, corrects data corruption. The first case
is when you do an I/O operation and the disk returns an error message, say,
'Can't read the specified block.' The second case is silent data corruption,
wherein an I/O operation completes but the system returns corrupted results.
ZFS identifies and, if possible, even corrects both kinds of corruption,
something existing file systems can't do. Managing existing file systems is
also difficult. For example, you upgrade your system, only to find that the
file system doesn't support the new machine and you have to copy all the data
across, which consumes a lot of time; ZFS helps alleviate this. Moreover,
existing file systems have limitations in terms of volume and file sizes.
Direct Hit!
Applies To: IT managers
Price: Free
USP: Learn how to implement ZFS
Primary Link: www.sun.com/software/solaris/
Keywords: ZFS in Solaris 10
We will implement ZFS inside a Solaris container. The benefit of using ZFS in
a container is that the storage pool inside the container can be given a fixed
amount of storage from the global storage pool, which makes the global pool
easier to manage.
The steps required are:
- Creating a zone.
- Creating a zpool, the actual storage pool.
- Allocating a ZFS file system to the zone.
To create a new zone, execute the following commands in sequence:
# zonecfg -z zfs-zone
zonecfg:zfs-zone> create
zonecfg:zfs-zone> set zonepath=/export/home/zones/zfs-zone
zonecfg:zfs-zone> set autoboot=true
zonecfg:zfs-zone> verify
zonecfg:zfs-zone> commit
zonecfg:zfs-zone> exit
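Before installing the zone, it is worth confirming what zonecfg has recorded. A quick, read-only check (using the zone name from the commands above):

```shell
# Display the stored configuration for the zone; makes no changes
zonecfg -z zfs-zone info

# List all zones with their states; zfs-zone should show as
# 'configured' until it is installed in the next step
zoneadm list -cv
```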
Traditionally, a file system sits on a single storage device, and a volume manager is used to manage one or more storage devices, whereas ZFS draws its storage from a pool made up of a group of storage devices
Now you need to install the new zone by using the zoneadm command.
# zoneadm -z zfs-zone install
# zoneadm -z zfs-zone boot
# zlogin -C zfs-zone
The details about Zones (or Containers) were discussed in the PC Quest Feb
2008 issue. Now create the zpool, ie the storage pool for the ZFS file
system. Since a mirrored pool requires at least two devices or partitions,
here we mirror two disks.
# zpool create mypool mirror c2t5d0 c2t6d0
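After creating the pool, zpool itself can confirm that the mirror was assembled as intended. A brief check (the device names c2t5d0 and c2t6d0 are the ones from the command above and will differ on your hardware):

```shell
# Show pool health, the mirror vdev and both member disks
zpool status mypool

# Show capacity, usage and overall health in one line
zpool list mypool
```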
Now we allocate the ZFS file system to the zone which we have created. For
this, execute the following commands:
# zfs create mypool/myzonefs1
# zfs set quota=5G mypool/myzonefs1
# zfs create mypool/myzonefs2
# zfs list
# zonecfg -z zfs-zone
zonecfg:zfs-zone> add dataset
zonecfg:zfs-zone:dataset> set name=mypool/myzonefs1
zonecfg:zfs-zone:dataset> end
zonecfg:zfs-zone> commit
zonecfg:zfs-zone> exit
# zoneadm -z zfs-zone reboot
# zlogin -C zfs-zone
# zfs list
In the above steps we have created the global pool, ie the zpool, and the
zone, ie the zfs-zone. With the first command we created a ZFS file system
named myzonefs1 and then set a quota on it. Next we created another ZFS file
system, myzonefs2, and listed all the ZFS file systems in the pool. To make
the newly created file system available to the zone, ie the zfs-zone, we
updated the zone's configuration with the zonecfg command before finally
rebooting it.
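The quota set earlier is not fixed forever; the global administrator can inspect or change it at any time. A minimal sketch (the 10 GB figure is just an example):

```shell
# Check the quota and current usage of the delegated file system
zfs get quota,used mypool/myzonefs1

# Raise the quota if the zone needs more space
zfs set quota=10G mypool/myzonefs1
```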
There is more you can do with this new file system. Initially, the mount
point of a newly created file system is where the file system was created, in
our case /mypool/myzonefs1. But if a non-global zone administrator would like
to change it for convenience, he can issue the following command:
# zfs set mountpoint=/export/home/myzonefs1 mypool/myzonefs1
Another option in ZFS is the compression property. With compression enabled,
ZFS compresses files before writing them to disk, which saves space. To use
this property, issue the following command:
# zfs set compression=on mypool/myzonefs1
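To see whether compression is actually paying off, you can query the compressratio property, which ZFS maintains automatically, shown here for the mypool/myzonefs1 file system created earlier:

```shell
# compression shows the current setting; compressratio reports the
# space savings achieved on data written since it was enabled
zfs get compression,compressratio mypool/myzonefs1
```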
After the zone has been created and the ZFS file system allocated, you can
perform other tasks such as taking snapshots or creating clones for backups.
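Snapshots and clones are a one-command affair in ZFS. A brief sketch using the file system created earlier (the snapshot and clone names are our own choices):

```shell
# Take a read-only, point-in-time snapshot of the file system
zfs snapshot mypool/myzonefs1@backup1

# List existing snapshots
zfs list -t snapshot

# Create a writable clone from the snapshot
zfs clone mypool/myzonefs1@backup1 mypool/myclone

# If required, roll the file system back to the snapshot:
# zfs rollback mypool/myzonefs1@backup1
```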