Tuesday, June 16, 2015

How to Manage and Maintain ZFS File Systems?

ZFS Chapters:
1). How to create ZFS file system and Volumes
2). How to set ZFS Dataset Properties on Solaris

Compared to traditional file systems, ZFS file systems (datasets and volumes) are very easy to manage and maintain. All ZFS-related tasks can be done through the zfs command: for example creating, mounting, renaming, setting properties, and so on.

You can also refer to the man page for the zfs command to find the exact options and syntax for each purpose.

Continuing from chapters one and two, we will look at some of the common management and maintenance activities.

How to rename a dataset?
File systems can be renamed using the zfs rename command.
TID{root}# zfs list /app_pool/sapdata
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool/sapdata    31K   463M    31K  /app_pool/sapdata
TID{root}# df -h /app_pool/sapdata
Filesystem             size   used  avail capacity  Mounted on
app_pool/sapdata       2.0G    31K   463M     1%    /app_pool/sapdata
TID{root}# zfs rename app_pool/sapdata app_pool/saporadata
TID{root}# df -h /app_pool/saporadata
Filesystem             size   used  avail capacity  Mounted on
app_pool/saporadata    2.0G    31K   464M     1%    /app_pool/saporadata
TID{root}#
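A rename fails if the target name is already taken, so it can be worth checking first. The sketch below wraps the step shown above; rename_dataset is a hypothetical helper, not a standard command, and with DRYRUN=1 set it only prints the zfs command so the logic can be exercised without a live pool.

```shell
#!/bin/sh
# Hypothetical helper: rename a dataset only when the target name is free.
# With DRYRUN=1 set, print the zfs command instead of running it.
rename_dataset() {
    src=$1
    dst=$2
    if [ -n "$DRYRUN" ]; then
        echo "zfs rename $src $dst"
        return 0
    fi
    # zfs list exits 0 when the dataset already exists -- refuse to clobber it.
    if zfs list "$dst" >/dev/null 2>&1; then
        echo "rename_dataset: target $dst already exists" >&2
        return 1
    fi
    zfs rename "$src" "$dst"
}

DRYRUN=1
rename_dataset app_pool/sapdata app_pool/saporadata
```

Note that zfs rename works only within a pool; to move a dataset between pools you would use zfs send and receive instead.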
How to unmount a dataset?
To unmount a particular dataset:
TID{root}# df -h | grep -i db_pool
db_pool                3.9G    32K   1.6G     1%    /db_pool
db_pool/oracle         3.9G   1.8G   1.6G    54%    /db_pool/oracle
TID{root}# zfs umount /db_pool/oracle
TID{root}# df -h | grep -i /db_pool/oracle
TID{root}#
How to mount a dataset?
To mount a particular dataset:
TID{root}# zfs mount db_pool/oracle
TID{root}# df -h | grep -i db_pool
db_pool                3.9G    32K   1.6G     1%    /db_pool
db_pool/oracle         3.9G   1.8G   1.6G    54%    /db_pool/oracle
TID{root}#
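Before mounting or unmounting, it can help to check the current state. Running zfs mount with no arguments lists the currently mounted ZFS file systems, one per line, as dataset and mountpoint. The sketch below is illustrative; is_mounted and fake_zfs_mount are hypothetical helpers, and the optional second argument exists only so the logic can be exercised against canned output instead of a live pool.

```shell
#!/bin/sh
# Hypothetical helper: exit 0 when the dataset appears in the mounted list.
# By default it runs `zfs mount`, which prints mounted ZFS file systems
# one per line as "dataset  mountpoint"; the optional second argument
# overrides the listing command for testing.
is_mounted() {
    ds=$1
    list_cmd=${2:-"zfs mount"}
    $list_cmd | awk -v d="$ds" '$1 == d { found = 1 } END { exit !found }'
}

# Canned output standing in for a live system:
fake_zfs_mount() {
    printf '%s\n' "db_pool  /db_pool" "db_pool/oracle  /db_pool/oracle"
}

is_mounted db_pool/oracle fake_zfs_mount && echo "db_pool/oracle is mounted"
```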
How to unmount all the ZFS datasets?
Unmounting all the ZFS datasets:
TID{root}# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10x_u10wos_17b
                        16G   5.4G   8.4G    40%    /
app_pool               2.0G    34K   463M     1%    /app_pool
app_pool/sap           2.0G    31K   463M     1%    /app_pool/sap
app_pool/saplog        2.0G    31K   463M     1%    /app_pool/saplog
app_pool/saporadata    2.0G    31K   463M     1%    /app_pool/saporadata
rpool/export            16G    32K   8.4G     1%    /export
rpool/export/home       16G    31K   8.4G     1%    /export/home
rpool                   16G    42K   8.4G     1%    /rpool
app_pool/saporg        2.0G   1.5G   499M    76%    /saporg
test2                  3.9G    12M   3.9G     1%    /test2
zonepool                16G    36K   4.6G     1%    /zonepool
zonepool/back           16G   1.6G   4.6G    27%    /zonepool/back
zonepool/iv10           16G   4.6G   4.6G    51%    /zonepool/iv10
zonepool/iv10new        16G   4.6G   4.6G    51%    /zonepool/iv10new
zonepool/zone2          16G   4.6G   4.6G    51%    /zonepool/zone2
db_pool                3.9G    32K   1.6G     1%    /db_pool
db_pool/oracle         3.9G   1.8G   1.6G    54%    /db_pool/oracle
TID{root}# zfs umount -a
TID{root}# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10x_u10wos_17b
                        16G   5.4G   8.4G    40%    /
TID{root}#
In the above output, all datasets except the root file system (/) are unmounted.
To mount all the ZFS datasets:
TID{root}# zfs mount -a
TID{root}# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10x_u10wos_17b
                        16G   5.4G   8.4G    40%    /
app_pool               2.0G    34K   464M     1%    /app_pool
app_pool/sap           2.0G    31K   464M     1%    /app_pool/sap
app_pool/saplog        2.0G    31K   464M     1%    /app_pool/saplog
app_pool/saporadata    2.0G    31K   464M     1%    /app_pool/saporadata
db_pool                3.9G    32K   1.6G     1%    /db_pool
db_pool/oracle         3.9G   1.8G   1.6G    54%    /db_pool/oracle
rpool/export            16G    32K   8.4G     1%    /export
rpool/export/home       16G    31K   8.4G     1%    /export/home
rpool                   16G    42K   8.4G     1%    /rpool
app_pool/saporg        2.0G   1.5G   499M    76%    /saporg
test2                  3.9G    12M   3.9G     1%    /test2
zonepool                16G    36K   4.6G     1%    /zonepool
zonepool/back           16G   1.6G   4.6G    27%    /zonepool/back
zonepool/iv10           16G   4.6G   4.6G    51%    /zonepool/iv10
zonepool/iv10new        16G   4.6G   4.6G    51%    /zonepool/iv10new
zonepool/zone2          16G   4.6G   4.6G    51%    /zonepool/zone2
TID{root}#
How to destroy a particular dataset?
To destroy a particular dataset:
TID{root}# zfs list -r app_pool
NAME                  USED  AVAIL  REFER  MOUNTPOINT
app_pool             1.50G   463M    34K  /app_pool
app_pool/sap           31K   463M    31K  /app_pool/sap
app_pool/saplog        31K   463M    31K  /app_pool/saplog
app_pool/saporadata    31K   463M    31K  /app_pool/saporadata
app_pool/saporg      1.47G   499M  1.47G  /saporg
TID{root}# zfs destroy app_pool/saporadata
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool         1.50G   463M    33K  /app_pool
app_pool/sap       31K   463M    31K  /app_pool/sap
app_pool/saplog    31K   463M    31K  /app_pool/saplog
app_pool/saporg  1.47G   499M  1.47G  /saporg
TID{root}#
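Keep in mind that zfs destroy is immediate and irreversible, and there is no confirmation prompt. A cautious pattern is to preview what would go with zfs list -r -t all first, or to wrap the command so it can be dry-run. The wrapper below is a sketch; destroy_recursive is a hypothetical helper, and passing echo as the second argument prints the command instead of executing it.

```shell
#!/bin/sh
# Hypothetical wrapper around recursive destroy. Pass "echo" as the second
# argument for a dry run that prints the command instead of executing it.
destroy_recursive() {
    ds=$1
    run=${2:-}        # empty => really destroy; "echo" => dry run
    $run zfs destroy -r "$ds"
}

destroy_recursive app_pool/saporadata echo
```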
How to destroy a particular dataset and its children, such as snapshots and child datasets?
Below I get an error since the dataset has a snapshot, so we have to use the -r option to destroy the snapshots and child datasets along with it. In some cases you may need to use both options together (-rf) to force-unmount busy datasets as well.
TID{root}# zfs list -r app_pool
NAME                         USED  AVAIL  REFER  MOUNTPOINT
app_pool                    1.50G   463M    33K  /app_pool
app_pool/sap                  31K   463M    31K  /app_pool/sap
app_pool/saplog               31K   463M    31K  /app_pool/saplog
app_pool/saporg             1.47G   496M  1.47G  /saporg
app_pool/saporg@16.06.2015    19K      -  1.47G  -
TID{root}# zfs destroy app_pool/saporg
cannot destroy 'app_pool/saporg': filesystem has children
use '-r' to destroy the following datasets:
app_pool/saporg@16.06.2015
TID{root}# zfs destroy -r app_pool/saporg
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool          714K  1.95G    33K  /app_pool
app_pool/sap       31K  1.95G    31K  /app_pool/sap
app_pool/saplog    31K  1.95G    31K  /app_pool/saplog
TID{root}#
How to destroy all the child datasets in one pool?
Destroying the datasets which are under the "app_pool" pool. Note that the pool's top-level dataset remains; it can only be removed with zpool destroy.
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool          714K  1.95G    33K  /app_pool
app_pool/sap       31K  1.95G    31K  /app_pool/sap
app_pool/saplog    31K  1.95G    31K  /app_pool/saplog
TID{root}# zfs destroy -r app_pool
TID{root}# zfs list -r app_pool
NAME       USED  AVAIL  REFER  MOUNTPOINT
app_pool   386K  1.95G    32K  /app_pool
TID{root}#
How to create a zfs dataset with a particular version?
Using the "-o version" option we can create a zfs dataset at a particular (older) version.
Here, my pool's dataset version is 5; in the same pool I am creating a dataset at version 3.
TID{root}# zfs get version app_pool
NAME      PROPERTY  VALUE    SOURCE
app_pool  version   5        -
TID{root}# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

TID{root}# zfs create -o version=3 app_pool/applog
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   3        -
TID{root}#
How to upgrade a zfs dataset's version?
Using the zfs upgrade command we can upgrade a dataset's version.
To upgrade to a particular version, use the "-V" option; without -V, the dataset will be upgraded to the latest available version. Note that upgrades are one-way: a dataset cannot be downgraded afterwards.
Upgrading a dataset to a particular version:
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   3        -
TID{root}# zfs upgrade -V 4 app_pool/applog
1 filesystems upgraded
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   4        -
TID{root}#
Upgrading to latest available version:
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   4        -
TID{root}# zfs upgrade app_pool/applog
1 filesystems upgraded
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   5        -
TID{root}#
How to set a legacy mountpoint for a dataset?
If you set the mountpoint to legacy, the dataset is managed like a traditional file system: you have to add an entry to /etc/vfstab in order to mount it.
This option is mostly used in zones environments: we can set a pool to legacy and dedicate it to a particular zone so that its datasets can be managed from the zone level; otherwise the pool would always be mounted in the global zone and shared with the non-global zones, which can be confusing. It can also be used for several other purposes.
Set legacy to dataset pool:
TID{root}# zfs set mountpoint=legacy app_pool
TID{root}# zfs get mountpoint app_pool
NAME      PROPERTY    VALUE       SOURCE
app_pool  mountpoint  legacy      local
TID{root}# df -h /app_pool
df: (/app_pool ) not a block device, directory or mounted resource
TID{root}# df -h | grep -i /app_pool
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool          436K  1.95G    32K  legacy
app_pool/applog    22K  1.95G    22K  /app_pool/applog
TID{root}# cat /etc/vfstab | grep -i app_pool
app_pool        -               /mnt               zfs          -               yes             -
TID{root}# mount /mnt
TID{root}# df -h /mnt
Filesystem             size   used  avail capacity  Mounted on
app_pool               2.0G    32K   2.0G     1%    /mnt
TID{root}#
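The vfstab entry used above follows the standard seven-field format. As a reference (the values here mirror the example above):

```
# /etc/vfstab fields, left to right:
# device to mount  device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
app_pool           -               /mnt         zfs      -          yes            -
```

To return the dataset to ZFS-managed mounting later, set the mountpoint back to a path, for example: zfs set mountpoint=/app_pool app_pool.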
Thanks for reading this post.
