Tuesday, June 9, 2015

How to Create ZFS File Systems and Volumes?

The ZFS file system works a little differently from traditional file systems: ZFS is a "COW" (copy-on-write) file system.

ZFS is a revolutionary file system that fundamentally changes the way file systems are administered. It is robust, scalable, and easy to administer.

To create a ZFS file system we first need a ZFS pool (zpool). On top of the pool we can create two types of datasets: file systems and volumes.

We will see, step by step, how to create the two types of ZFS datasets: file systems and volumes.

In my scenario, I have a pool called "db_pool", and on top of it I am creating both types of datasets: a file system and a volume.
TID{root}# zpool list db_pool
NAME      SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
db_pool  3.97G   110K  3.97G     0%  ONLINE  -
TID{root}# df -h /db_pool
Filesystem             size   used  avail capacity  Mounted on
db_pool                3.9G    31K   3.9G     1%    /db_pool
TID{root}#
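For reference, a simple single-disk pool like "db_pool" can be created along these lines (the device name c1t2d0 here is only a placeholder; substitute a free disk on your system):
TID{root}# zpool create db_pool c1t2d0    # single-disk pool, no redundancy; c1t2d0 is a placeholder
TID{root}# zpool status db_pool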
Create a file system dataset:
TID{root}# zfs create db_pool/oracle
TID{root}# df -h db_pool/oracle
Filesystem             size   used  avail capacity  Mounted on
db_pool/oracle         3.9G    31K   3.9G     1%    /db_pool/oracle
TID{root}#
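ZFS mounts the new dataset automatically under the pool's mount point, so no vfstab entry is needed. If you want the file system mounted elsewhere, you can change the mountpoint property; the /u01/oracle path below is only an example:
TID{root}# zfs get mountpoint db_pool/oracle
TID{root}# zfs set mountpoint=/u01/oracle db_pool/oracle    # example path, pick your own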
Create a ZFS volume (zvol):
Unlike a file system dataset, you must specify the size of the volume when you create it. Creating a volume also creates two device paths, /dev/zvol/dsk/db_pool/oravol (block device) and /dev/zvol/rdsk/db_pool/oravol (raw device), which behave like a traditional disk device. Here we will build a UFS file system on the volume.
TID{root}# zfs create -V 500m db_pool/oravol
TID{root}# newfs /dev/zvol/rdsk/db_pool/oravol
newfs: construct a new file system /dev/zvol/rdsk/db_pool/oravol:(y/n)? y
Warning: 2082 sector(s) in last cylinder unallocated
/dev/zvol/rdsk/db_pool/oravol:  1023966 sectors in 167 cylinders of 48 tracks, 128 sectors
        500.0MB in 12 cyl groups (14 c/g, 42.00MB/g, 20160 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 86176, 172320, 258464, 344608, 430752, 516896, 603040, 689184, 775328, 861472, 947616
TID{root}# fstyp /dev/zvol/rdsk/db_pool/oravol
ufs
TID{root}# mkdir /ora_vol
TID{root}# mount /dev/zvol/dsk/db_pool/oravol /ora_vol
TID{root}# df -h /ora_vol
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/db_pool/oravol
                       470M   1.0M   422M     1%    /ora_vol
TID{root}#
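Note that this mount will not persist across a reboot by itself. To mount the UFS file system on the volume at boot time, an /etc/vfstab entry along these lines should work (a sketch; adjust the mount point for your environment):
#device to mount              device to fsck                 mount point  FS type  fsck pass  mount at boot  options
/dev/zvol/dsk/db_pool/oravol  /dev/zvol/rdsk/db_pool/oravol  /ora_vol     ufs      2          yes            -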
To list all the datasets in the pool "db_pool":
TID{root}# zfs list -r db_pool
NAME             USED  AVAIL  REFER  MOUNTPOINT
db_pool          516M  3.40G    32K  /db_pool
db_pool/oracle    31K  3.40G    31K  /db_pool/oracle
db_pool/oravol   516M  3.88G  30.1M  -
TID{root}#
Notice that "db_pool/oravol" shows 516 MB used even though it refers to only about 30 MB of data: by default a volume reserves its full size in the pool as soon as it is created, which is why the pool's AVAIL dropped by roughly 500 MB.
Create a simple mirrored pool and datasets:
TID{root}# zpool create app_pool mirror c1t6d0 c1t8d0
TID{root}# df -h /app_pool
Filesystem             size   used  avail capacity  Mounted on
app_pool               2.0G    31K   2.0G     1%    /app_pool
TID{root}# zfs create app_pool/sap
TID{root}# zfs create app_pool/sapdata
TID{root}# zfs create app_pool/saporg
TID{root}# zfs create app_pool/saplog
TID{root}#
To list the datasets in the pool:
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool           254K  1.95G    35K  /app_pool
app_pool/sap        31K  1.95G    31K  /app_pool/sap
app_pool/sapdata    31K  1.95G    31K  /app_pool/sapdata
app_pool/saplog     31K  1.95G    31K  /app_pool/saplog
app_pool/saporg     31K  1.95G    31K  /app_pool/saporg
TID{root}#
In the output above, the 1.95 GB of free space in the pool is shared by all of the datasets. If these were traditional file systems, you might conclude there was around 10 GB available in total (roughly 2 GB each for app_pool and its four datasets).
How does it work?
TID{root}# mkfile 500m /app_pool/saplog/slog.log
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool           501M  1.46G    35K  /app_pool
app_pool/sap        31K  1.46G    31K  /app_pool/sap
app_pool/sapdata    31K  1.46G    31K  /app_pool/sapdata
app_pool/saplog    500M  1.46G   500M  /app_pool/saplog
app_pool/saporg     31K  1.46G    31K  /app_pool/saporg
TID{root}#
Note: the USED column for app_pool/saplog shows 500 MB in use. The other datasets still show only their metadata size (31K), but their available space has dropped to 1.46 GB. Each dataset consumes space from the shared pool on a first-come, first-served basis, so unless you set a quota or reservation, no individual dataset has a fixed size of its own.
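To give a dataset a guaranteed minimum or an upper bound, set the reservation and quota properties. A minimal sketch (the 500m and 1g values are arbitrary examples):
TID{root}# zfs set reservation=500m app_pool/sapdata   # guarantees 500 MB to this dataset
TID{root}# zfs set quota=1g app_pool/saplog            # caps this dataset at 1 GB
TID{root}# zfs get quota,reservation app_pool/saplog
After setting these, "zfs list -r app_pool" will show the reserved space deducted from the other datasets' AVAIL, and the quota reflected in the capped dataset's own AVAIL.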
