Wednesday, June 24, 2015

How to grow aka extend ZFS datasets and Volumes?

ZFS Chapters:
1). How to Manage and Maintain ZFS file systems?
2). How to create ZFS file system and Volumes?
3). How to set ZFS Dataset Properties on Solaris?

ZFS has two kinds of file systems that can be grown: datasets and volumes. A ZFS dataset is grown by adjusting its quota and reservation properties.
A volume is grown by setting its volsize property to the new size and then running growfs so the file system inside it picks up the new size. Be careful when shrinking a volume, as data can be lost.
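As a quick reference, the two paths look like this (a sketch only; the dataset, volume, sizes, and mountpoint are taken from the examples that follow):
zfs set quota=1g app_pool/applog          # grow a dataset: raise the quota...
zfs set reservation=1g app_pool/applog    # ...and the reservation if the space must be guaranteed
zfs set volsize=1g db_pool/oravol                      # grow a volume: raise volsize...
growfs -M /oravol /dev/zvol/rdsk/db_pool/oravol        # ...then grow the file system inside it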
Extend / Grow ZFS dataset:
Compared to other file systems this is very simple: just set the properties to increase the dataset size.

Verify zpool has free space:
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool          500M  1.46G    32K  legacy
app_pool/applog    22K   500M    22K  /app_pool/applog
app_pool/sap       31K  1.46G    31K  /app_pool/sap
TID{root}#
Check current size for app_pool/applog dataset:
TID{root}# df -h /app_pool/applog
Filesystem             size   used  avail capacity  Mounted on
app_pool/applog        500M    22K   500M     1%    /app_pool/applog
TID{root}# zfs get quota app_pool/applog
NAME             PROPERTY  VALUE  SOURCE
app_pool/applog  quota     500M   local
TID{root}# zfs get reservation app_pool/applog
NAME             PROPERTY     VALUE   SOURCE
app_pool/applog  reservation  500M    local
TID{root}#
Growing the app_pool/applog dataset from 500 MB to 1 GB:
Note: simply setting the quota does not guarantee that you will get 1 GB; you will appear to have 1 GB only until other datasets consume the free space in the pool.
The space is guaranteed only once you reserve it, so always set a reservation if you want the dataset to behave like a traditional file system.
TID{root}# zfs set quota=1g app_pool/applog
TID{root}# df -h /app_pool/applog
Filesystem             size   used  avail capacity  Mounted on
app_pool/applog        1.0G    22K   1.0G     1%    /app_pool/applog
TID{root}# zfs set reservation=1GB app_pool/applog
TID{root}# df -h /app_pool/applog
Filesystem             size   used  avail capacity  Mounted on
app_pool/applog        1.0G    22K   1.0G     1%    /app_pool/applog
TID{root}# zfs get reservation app_pool/applog
NAME             PROPERTY     VALUE   SOURCE
app_pool/applog  reservation  1G      local
TID{root}# zfs get quota app_pool/applog
NAME             PROPERTY  VALUE  SOURCE
app_pool/applog  quota     1G     local
TID{root}#
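As a side note, zfs get accepts a comma-separated list of properties, so both values can be checked with one command (shown here on the dataset from this example):
zfs get quota,reservation app_pool/applog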
Extend / Grow ZFS Volume (zvol):
How to identify volumes under a pool?
Only volumes show the "volsize" property; file system datasets do not.
TID{root}# zfs list -r db_pool
NAME             USED  AVAIL  REFER  MOUNTPOINT
db_pool         2.32G  1.59G    32K  /db_pool
db_pool/oracle  1.81G  1.59G  1.81G  /db_pool/oracle
db_pool/oravol   516M  2.06G  30.1M  -
TID{root}# zfs get all db_pool/oravol | grep -i volsize
db_pool/oravol  volsize             500M                   local
TID{root}#
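If your zfs release supports the -t option, another way to find the volumes in a pool is to list only that type:
zfs list -t volume -r db_pool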
Another way to identify volumes:
Device nodes under "/dev/zvol/" (both "dsk" and "rdsk") exist only for volumes.
TID{root}# df -h /oravol
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/db_pool/oravol
                       470M   1.0M   422M     1%    /oravol
TID{root}# fstyp /dev/zvol/rdsk/db_pool/oravol
ufs
TID{root}#
Growing a volume is as simple as setting the "volsize" property to the new value and then running growfs so the file system takes on the new size.
TID{root}# zfs get volsize db_pool/oravol
NAME            PROPERTY  VALUE    SOURCE
db_pool/oravol  volsize   500M     local
TID{root}# zfs set volsize=1GB db_pool/oravol
TID{root}# df -h /oravol
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/db_pool/oravol
                       470M   1.0M   422M     1%    /oravol
TID{root}# growfs -M /oravol /dev/zvol/rdsk/db_pool/oravol
Warning: 4130 sector(s) in last cylinder unallocated
/dev/zvol/rdsk/db_pool/oravol:  2097118 sectors in 342 cylinders of 48 tracks, 128 sectors
        1024.0MB in 25 cyl groups (14 c/g, 42.00MB/g, 20160 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 86176, 172320, 258464, 344608, 430752, 516896, 603040,689184,775328,
 1292192, 1378336, 1464480, 1550624, 1636768, 1722912, 1809056, 1895200,
 1981344, 2067488
TID{root}# df -h /oravol
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/db_pool/oravol
                       962M   1.0M   914M     1%    /oravol
TID{root}# zfs get volsize db_pool/oravol
NAME            PROPERTY  VALUE    SOURCE
db_pool/oravol  volsize   1G       local
TID{root}#
Cool... we have expanded the volume successfully.

Tuesday, June 16, 2015

How to Manage and Maintain ZFS file systems?

ZFS Chapters:
1). How to create ZFS file system and Volumes
2). How to set ZFS Dataset Properties on Solaris

Compared to traditional file systems, ZFS file systems (datasets and volumes) are very easy to manage and maintain. Everything ZFS related can be done with the zfs command, for example creating, mounting, renaming, and setting properties.

You can also refer to the zfs man page for the exact options and syntax for each purpose.

Continuing from chapters one and two, we will look at some of the common management and maintenance activities.

How to rename a dataset?
File systems can be renamed using the zfs rename command.
TID{root}# zfs list /app_pool/sapdata
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool/sapdata    31K   463M    31K  /app_pool/sapdata
TID{root}# df -h /app_pool/sapdata
Filesystem             size   used  avail capacity  Mounted on
app_pool/sapdata       2.0G    31K   463M     1%    /app_pool/sapdata
TID{root}# zfs rename app_pool/sapdata app_pool/saporadata
TID{root}# df -h /app_pool/saporadata
Filesystem             size   used  avail capacity  Mounted on
app_pool/saporadata    2.0G    31K   464M     1%    /app_pool/saporadata
TID{root}#
How to unmount a dataset?
To unmount a particular dataset:
TID{root}# df -h | grep -i db_pool
db_pool                3.9G    32K   1.6G     1%    /db_pool
db_pool/oracle         3.9G   1.8G   1.6G    54%    /db_pool/oracle
TID{root}# zfs umount /db_pool/oracle
TID{root}# df -h | grep -i /db_pool/oracle
TID{root}#
How to mount a dataset?
To mount a particular dataset:
TID{root}# zfs mount db_pool/oracle
TID{root}# df -h | grep -i db_pool
db_pool                3.9G    32K   1.6G     1%    /db_pool
db_pool/oracle         3.9G   1.8G   1.6G    54%    /db_pool/oracle
TID{root}#
How to unmount all the ZFS datasets?
Unmounting all the ZFS datasets:
TID{root}# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10x_u10wos_17b
                        16G   5.4G   8.4G    40%    /
app_pool               2.0G    34K   463M     1%    /app_pool
app_pool/sap           2.0G    31K   463M     1%    /app_pool/sap
app_pool/saplog        2.0G    31K   463M     1%    /app_pool/saplog
app_pool/saporadata    2.0G    31K   463M     1%    /app_pool/saporadata
rpool/export            16G    32K   8.4G     1%    /export
rpool/export/home       16G    31K   8.4G     1%    /export/home
rpool                   16G    42K   8.4G     1%    /rpool
app_pool/saporg        2.0G   1.5G   499M    76%    /saporg
test2                  3.9G    12M   3.9G     1%    /test2
zonepool                16G    36K   4.6G     1%    /zonepool
zonepool/back           16G   1.6G   4.6G    27%    /zonepool/back
zonepool/iv10           16G   4.6G   4.6G    51%    /zonepool/iv10
zonepool/iv10new        16G   4.6G   4.6G    51%    /zonepool/iv10new
zonepool/zone2          16G   4.6G   4.6G    51%    /zonepool/zone2
db_pool                3.9G    32K   1.6G     1%    /db_pool
db_pool/oracle         3.9G   1.8G   1.6G    54%    /db_pool/oracle
TID{root}# zfs umount -a
TID{root}# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10x_u10wos_17b
                        16G   5.4G   8.4G    40%    /
TID{root}#
In the output above, every ZFS dataset except the root file system (rpool/ROOT/s10x_u10wos_17b, mounted on /) has been unmounted.
To mount all the ZFS datasets:
TID{root}# zfs mount -a
TID{root}# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10x_u10wos_17b
                        16G   5.4G   8.4G    40%    /
app_pool               2.0G    34K   464M     1%    /app_pool
app_pool/sap           2.0G    31K   464M     1%    /app_pool/sap
app_pool/saplog        2.0G    31K   464M     1%    /app_pool/saplog
app_pool/saporadata    2.0G    31K   464M     1%    /app_pool/saporadata
db_pool                3.9G    32K   1.6G     1%    /db_pool
db_pool/oracle         3.9G   1.8G   1.6G    54%    /db_pool/oracle
rpool/export            16G    32K   8.4G     1%    /export
rpool/export/home       16G    31K   8.4G     1%    /export/home
rpool                   16G    42K   8.4G     1%    /rpool
app_pool/saporg        2.0G   1.5G   499M    76%    /saporg
test2                  3.9G    12M   3.9G     1%    /test2
zonepool                16G    36K   4.6G     1%    /zonepool
zonepool/back           16G   1.6G   4.6G    27%    /zonepool/back
zonepool/iv10           16G   4.6G   4.6G    51%    /zonepool/iv10
zonepool/iv10new        16G   4.6G   4.6G    51%    /zonepool/iv10new
zonepool/zone2          16G   4.6G   4.6G    51%    /zonepool/zone2
TID{root}#
How to destroy a particular dataset?
To destroy a particular dataset:
TID{root}# zfs list -r app_pool
NAME                  USED  AVAIL  REFER  MOUNTPOINT
app_pool             1.50G   463M    34K  /app_pool
app_pool/sap           31K   463M    31K  /app_pool/sap
app_pool/saplog        31K   463M    31K  /app_pool/saplog
app_pool/saporadata    31K   463M    31K  /app_pool/saporadata
app_pool/saporg      1.47G   499M  1.47G  /saporg
TID{root}# zfs destroy app_pool/saporadata
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool         1.50G   463M    33K  /app_pool
app_pool/sap       31K   463M    31K  /app_pool/sap
app_pool/saplog    31K   463M    31K  /app_pool/saplog
app_pool/saporg  1.47G   499M  1.47G  /saporg
TID{root}#
How to destroy a dataset together with its children, such as snapshots and child datasets? The plain destroy below failed because the dataset has a snapshot, so we have to use the -r option to destroy the snapshot and any child datasets as well. In some cases you may need both options, -rf.
TID{root}# zfs list -r app_pool
NAME                         USED  AVAIL  REFER  MOUNTPOINT
app_pool                    1.50G   463M    33K  /app_pool
app_pool/sap                  31K   463M    31K  /app_pool/sap
app_pool/saplog               31K   463M    31K  /app_pool/saplog
app_pool/saporg             1.47G   496M  1.47G  /saporg
app_pool/saporg@16.06.2015    19K      -  1.47G  -
TID{root}# zfs destroy app_pool/saporg
cannot destroy 'app_pool/saporg': filesystem has children
use '-r' to destroy the following datasets:
app_pool/saporg@16.06.2015
TID{root}# zfs destroy -r app_pool/saporg
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool          714K  1.95G    33K  /app_pool
app_pool/sap       31K  1.95G    31K  /app_pool/sap
app_pool/saplog    31K  1.95G    31K  /app_pool/saplog
TID{root}#
How to destroy all the child datasets in a pool?
Destroying the datasets under the "app_pool" pool:
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool          714K  1.95G    33K  /app_pool
app_pool/sap       31K  1.95G    31K  /app_pool/sap
app_pool/saplog    31K  1.95G    31K  /app_pool/saplog
TID{root}# zfs destroy -r app_pool
TID{root}# zfs list -r app_pool
NAME       USED  AVAIL  REFER  MOUNTPOINT
app_pool   386K  1.95G    32K  /app_pool
TID{root}#
How to create a ZFS dataset with a particular version?
Using the "-o version" option we can create a ZFS dataset at a specific version.
Here my pool's dataset version is 5, and in the same pool I am creating a dataset at version 3.
TID{root}# zfs get version app_pool
NAME      PROPERTY  VALUE    SOURCE
app_pool  version   5        -
TID{root}# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

TID{root}# zfs create -o version=3 app_pool/applog
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   3        -
TID{root}#
How to upgrade the ZFS dataset version?
Using the zfs upgrade command we can upgrade a dataset's version.
To upgrade to a particular version use the "-V" option; without -V the dataset is upgraded to the latest available version.
Upgrading a dataset to a particular version:
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   3        -
TID{root}# zfs upgrade -V 4 app_pool/applog
1 filesystems upgraded
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   4        -
TID{root}#
Upgrading to latest available version:
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   4        -
TID{root}# zfs upgrade app_pool/applog
1 filesystems upgraded
TID{root}# zfs get version app_pool/applog
NAME             PROPERTY  VALUE    SOURCE
app_pool/applog  version   5        -
TID{root}#
How to set a legacy mountpoint for a dataset? With a legacy mountpoint the dataset is managed like an old-style file system: you must add an entry to /etc/vfstab in order to mount it.
This option is mostly used in zones environments: we can set a pool to legacy and dedicate it to a particular zone so that its datasets are managed from inside the zone (a zonecfg sketch follows the example below); otherwise the pool stays mounted in the global zone and is shared to the non-global zones, which can be confusing. It can also be used for several other purposes.
Setting the pool's mountpoint to legacy:
TID{root}# zfs set mountpoint=legacy app_pool
TID{root}# zfs get mountpoint app_pool
NAME      PROPERTY    VALUE       SOURCE
app_pool  mountpoint  legacy      local
TID{root}# df -h /app_pool
df: (/app_pool ) not a block device, directory or mounted resource
TID{root}# df -h | grep -i /app_pool
TID{root}# zfs list -r app_pool
NAME              USED  AVAIL  REFER  MOUNTPOINT
app_pool          436K  1.95G    32K  legacy
app_pool/applog    22K  1.95G    22K  /app_pool/applog
TID{root}# cat /etc/vfstab | grep -i app_pool
app_pool        -               /mnt               zfs          -               yes             -
TID{root}# mount /mnt
TID{root}# df -h /mnt
Filesystem             size   used  avail capacity  Mounted on
app_pool               2.0G    32K   2.0G     1%    /mnt
TID{root}#
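For the zones use case mentioned above, a dataset or pool is typically delegated to a non-global zone with zonecfg. A rough sketch, assuming a zone named "dbzone" (the zone name is only an example):
zonecfg -z dbzone
zonecfg:dbzone> add dataset
zonecfg:dbzone:dataset> set name=app_pool
zonecfg:dbzone:dataset> end
zonecfg:dbzone> verify
zonecfg:dbzone> commit
zonecfg:dbzone> exit
After the zone is rebooted, the delegated dataset can be managed with the zfs command from inside the zone.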
Thanks for reading this post .....

Thursday, June 11, 2015

How to set ZFS Dataset Properties on Solaris?

ZFS Chapters:
1). How to create ZFS file system and Volumes

ZFS has many parameters and properties that let you use datasets and file systems effectively. Oracle, the vendor, also recommends a specific set of properties to get good performance, particularly for database file systems and datasets.

Most often we find ourselves needing to set a quota, a reservation, a dedicated mountpoint, and so on.

We will look at some of the most common properties used in a ZFS environment.

To list all the properties of a particular pool, "app_pool":
Several of these properties should be changed according to the client environment and vendor recommendations.
Refer to the zfs man page for more detail on the purpose of each property.
TID{root}# zfs get all app_pool
NAME      PROPERTY              VALUE                  SOURCE
app_pool  type                  filesystem             -
app_pool  creation              Tue Jun  9 23:11 2015  -
app_pool  used                  500M                   -
app_pool  available             1.46G                  -
app_pool  referenced            35K                    -
app_pool  compressratio         1.00x                  -
app_pool  mounted               yes                    -
app_pool  quota                 none                   default
app_pool  reservation           none                   default
app_pool  recordsize            128K                   default
app_pool  mountpoint            /app_pool              default
app_pool  sharenfs              off                    default
app_pool  checksum              on                     default
app_pool  compression           off                    default
app_pool  atime                 on                     default
app_pool  devices               on                     default
app_pool  exec                  on                     default
app_pool  setuid                on                     default
app_pool  readonly              off                    default
app_pool  zoned                 off                    default
app_pool  snapdir               hidden                 default
app_pool  aclinherit            restricted             default
app_pool  canmount              on                     default
app_pool  shareiscsi            off                    default
app_pool  xattr                 on                     default
app_pool  copies                1                      default
app_pool  version               5                      -
app_pool  utf8only              off                    -
app_pool  normalization         none                   -
app_pool  casesensitivity       sensitive              -
app_pool  vscan                 off                    default
app_pool  nbmand                off                    default
app_pool  sharesmb              off                    default
app_pool  refquota              none                   default
app_pool  refreservation        none                   default
app_pool  primarycache          all                    default
app_pool  secondarycache        all                    default
app_pool  usedbysnapshots       0                      -
app_pool  usedbydataset         35K                    -
app_pool  usedbychildren        500M                   -
app_pool  usedbyrefreservation  0                      -
app_pool  logbias               latency                default
app_pool  sync                  standard               default
app_pool  rstchown              on                     default
TID{root}#
How to set a quota on a ZFS dataset?
A quota limits the amount of space a particular dataset can consume; the dataset cannot use more than the quota set on it.
To set a quota on a particular dataset:
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool           500M  1.46G    35K  /app_pool
app_pool/sap        31K  1.46G    31K  /app_pool/sap
app_pool/sapdata    31K  1.46G    31K  /app_pool/sapdata
app_pool/saplog    500M  1.46G   500M  /app_pool/saplog
app_pool/saporg     31K  1.46G    31K  /app_pool/saporg
TID{root}# zfs get quota app_pool/sap
NAME          PROPERTY  VALUE  SOURCE
app_pool/sap  quota     none   default
TID{root}# zfs set quota=500m app_pool/sap
TID{root}# zfs get quota app_pool/sap
NAME          PROPERTY  VALUE  SOURCE
app_pool/sap  quota     500M   local
TID{root}#
Above we can see there is 1.46 GB of free space, but we cannot use more than 500 MB in the "app_pool/sap" dataset because the quota limits its usage.
TID{root}# mkfile 501m /app_pool/sap/appsfile.txt
/app_pool/sap/appsfile.txt: initialized 524156928 of 525336576 bytes:Discquota exceeded
TID{root}# ls -ltr /app_pool/sap/appsfile.txt
-rw-------   1 root     root     525336576 Jun 10 19:41 /app_pool/sap/appsfile.txt
TID{root}# du -sh /app_pool/sap/appsfile.txt
 500M   /app_pool/sap/appsfile.txt
TID{root}# rm /app_pool/sap/appsfile.txt
TID{root}# mkfile 200m /app_pool/sap/dbfile.txt
TID{root}# du -sh /app_pool/sap/dbfile.txt
 200M   /app_pool/sap/dbfile.txt
TID{root}#
The output above shows that we could not create the file "appsfile.txt" at more than 500 MB: the command reported an error and the file ended up at 500 MB. When I then created a 200 MB file there was no error and it was created successfully.
We can set a quota on every dataset, but a quota does not guarantee that the dataset will actually get that much space once other datasets have used it. That is how quotas work.
For example, I have set a quota on all the datasets, but if one or more of them consume 999 MB, the others cannot get the space. In that case we need a reservation; we will see below how reservations come into the picture.
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool          1001M   999M    35K  /app_pool
app_pool/sap       500M      0   500M  /app_pool/sap
app_pool/sapdata    31K   999M    31K  /app_pool/sapdata
app_pool/saplog    500M   999M   500M  /app_pool/saplog
app_pool/saporg     31K   999M    31K  /app_pool/saporg
TID{root}# zfs set quota=999m app_pool/sapdata
TID{root}# zfs set quota=999m app_pool/saplog
TID{root}# zfs set quota=999m app_pool/saporg
TID{root}# mkfile 999m app_pool/saporg/test.txt
app_pool/saporg/test.txt: initialized 1047265280 of 1047527424 bytes:Discquota exceeded
TID{root}# du -sh app_pool/saporg/test.txt
 999M   app_pool/saporg/test.txt
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool          1.95G   101K    35K  /app_pool
app_pool/sap       500M      0   500M  /app_pool/sap
app_pool/sapdata    31K   101K    31K  /app_pool/sapdata
app_pool/saplog    500M   101K   500M  /app_pool/saplog
app_pool/saporg    999M      0   999M  /app_pool/saporg
TID{root}# mkfile 5m app_pool/saporg/oralog.txt
Could not open app_pool/saporg/oralog.txt: Disc quota exceeded
TID{root}#
Above there is no space left: the "app_pool/saporg" dataset has consumed everything up to its quota and the pool itself is nearly full, so the remaining datasets have almost nothing left to use.
I then tried to create a 5 MB file, "app_pool/saporg/oralog.txt", but could not.
How to set a reservation?
A reservation guarantees that a dataset can consume a specified amount of space from the zpool; the reserved amount is deducted from the pool's available free space.
Setting a reservation on a dataset:
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool           752K  1.95G    35K  /app_pool
app_pool/sap        31K  1.95G    31K  /app_pool/sap
app_pool/sapdata    31K  1.95G    31K  /app_pool/sapdata
app_pool/saplog     31K  1.95G    31K  /app_pool/saplog
app_pool/saporg     40K  1.95G    40K  /app_pool/saporg
TID{root}# zfs set reservation=1.5G app_pool/saporg
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool          1.50G   463M    35K  /app_pool
app_pool/sap        31K   463M    31K  /app_pool/sap
app_pool/sapdata    31K   463M    31K  /app_pool/sapdata
app_pool/saplog     31K   463M    31K  /app_pool/saplog
app_pool/saporg     40K  1.95G    40K  /app_pool/saporg
TID{root}#
After setting the reservation, the pool's used space increased and its available free space decreased, as shown above.
Creating a file on a non-reserved dataset:
TID{root}# mkfile 500M /app_pool/sap/log.sap
/app_pool/sap/log.sap: initialized 485621760 of 524288000 bytes: No space left on device
TID{root}# du -sh /app_pool/sap/log.sap
 463M   /app_pool/sap/log.sap
TID{root}# mkfile 1500M /app_pool/saporg/org.log
/app_pool/saporg/org.log: initialized 145752064 of 1572864000 bytes: No space left on device
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool          1.95G   227K    35K  /app_pool
app_pool/sap       463M   227K   463M  /app_pool/sap
app_pool/sapdata    31K   227K    31K  /app_pool/sapdata
app_pool/saplog     31K   227K    31K  /app_pool/saplog
app_pool/saporg   95.7M  1.41G  95.7M  /app_pool/saporg
TID{root}# du -sh /app_pool/saporg/org.log
  96M   /app_pool/saporg/org.log
TID{root}#
Above I tried to create a 500 MB file, but the actual free space left in the pool was only 463 MB (because 1.5 GB is already reserved for the "app_pool/saporg" dataset), so the file was created at only 463 MB. Next I tried to create a 1.5 GB file on the reserved dataset "app_pool/saporg" and got the error above. Why?
TID{root}# rm /app_pool/saporg/org.log
TID{root}# rm /app_pool/sap/log.sap
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool          1.50G   463M    35K  /app_pool
app_pool/sap        31K   463M    31K  /app_pool/sap
app_pool/sapdata    31K   463M    31K  /app_pool/sapdata
app_pool/saplog     31K   463M    31K  /app_pool/saplog
app_pool/saporg     40K  1.95G    40K  /app_pool/saporg
TID{root}# mkfile 1500M /app_pool/saporg/org.log
TID{root}# du -sh /app_pool/saporg/org.log
 1.5G   /app_pool/saporg/org.log
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool          1.50G   463M    35K  /app_pool
app_pool/sap        31K   463M    31K  /app_pool/sap
app_pool/sapdata    31K   463M    31K  /app_pool/sapdata
app_pool/saplog     31K   463M    31K  /app_pool/saplog
app_pool/saporg   1.47G   499M  1.47G  /app_pool/saporg
TID{root}#
Cool, it got created. The reason is that some free space generally has to remain at the pool level; after removing both files ("org.log" and "log.sap") I was able to create the 1.5 GB file successfully.
In general, if you want a dataset to behave like a normal file system (e.g. UFS or VxFS), set both a quota and a reservation so the space available to it stays constant.
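For example, to make a dataset behave like a fixed-size file system, set the quota and the reservation to the same value (a sketch; the 1 GB size is only illustrative):
zfs set quota=1g app_pool/sapdata
zfs set reservation=1g app_pool/sapdata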
How to change the mountpoint of a dataset?
In a traditional file system, changing a mountpoint requires several steps, e.g.:
Unmount the file system
Make a new directory
Edit the /etc/vfstab file
Mount the file system at the new mountpoint
In ZFS, however, changing a dataset's mountpoint is as simple as setting a property.
To set the mountpoint property for a dataset:
Here I am changing the mountpoint of the "app_pool/saporg" dataset.
TID{root}# df -h /app_pool/saporg
Filesystem             size   used  avail capacity  Mounted on
app_pool/saporg        2.0G   1.5G   499M    76%    /app_pool/saporg
TID{root}# zfs get mountpoint app_pool/saporg
NAME             PROPERTY    VALUE             SOURCE
app_pool/saporg  mountpoint  /app_pool/saporg  default
TID{root}# zfs set mountpoint=/saporg app_pool/saporg
TID{root}# df -h /saporg
Filesystem             size   used  avail capacity  Mounted on
app_pool/saporg        2.0G   1.5G   499M    76%    /saporg
TID{root}#
Thanks for reading this post; more articles about ZFS to follow...

Tuesday, June 9, 2015

How to create ZFS file system and Volumes ?

The ZFS file system works a little differently from native file systems; it is a "COW" (copy-on-write) file system.

ZFS is a revolutionary file system that fundamentally changes the way file systems are administered, with many features and benefits. It is robust, scalable, and easy to administer.

To create a file system we first need a ZFS pool (zpool); on top of the pool we can create two types of file systems: datasets and volumes.

We will see, step by step, how to create the two different types: file system datasets and volumes.

In my scenario I have a pool called "db_pool", and on top of it I am creating both types: a file system dataset and a volume.
TID{root}# zpool list db_pool
NAME      SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
db_pool  3.97G   110K  3.97G     0%  ONLINE  -
TID{root}# df -h /db_pool
Filesystem             size   used  avail capacity  Mounted on
db_pool                3.9G    31K   3.9G     1%    /db_pool
TID{root}#
Create dataset:
TID{root}# zfs create db_pool/oracle
TID{root}# df -h db_pool/oracle
Filesystem             size   used  avail capacity  Mounted on
db_pool/oracle         3.9G    31K   3.9G     1%    /db_pool/oracle
TID{root}#
Create a ZFS volume (zvol):
Unlike a file system dataset, a volume must be given a size when you create it. Creating it produces two device paths, /dev/zvol/dsk/db_pool/oravol (block device) and /dev/zvol/rdsk/db_pool/oravol (raw device), which can be used like a traditional disk device. In this example we will put a UFS file system on the volume.
TID{root}# zfs create -V 500m db_pool/oravol
TID{root}# newfs /dev/zvol/rdsk/db_pool/oravol
newfs: construct a new file system /dev/zvol/rdsk/db_pool/oravol:(y/n)? y
Warning: 2082 sector(s) in last cylinder unallocated
/dev/zvol/rdsk/db_pool/oravol:  1023966 sectors in 167 cylinders of 48 tracks, 128 sectors
        500.0MB in 12 cyl groups (14 c/g, 42.00MB/g, 20160 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 86176, 172320, 258464, 344608, 430752, 516896, 603040, 689184, 775328, 861472, 947616
TID{root}# fstyp /dev/zvol/rdsk/db_pool/oravol
ufs
TID{root}# mkdir /ora_vol
TID{root}# mount /dev/zvol/dsk/db_pool/oravol /ora_vol
TID{root}# df -h /ora_vol
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/db_pool/oravol
                       470M   1.0M   422M     1%    /ora_vol
TID{root}#
To list all the datasets in the pool "db_pool":
TID{root}# zfs list -r db_pool
NAME             USED  AVAIL  REFER  MOUNTPOINT
db_pool          516M  3.40G    32K  /db_pool
db_pool/oracle    31K  3.40G    31K  /db_pool/oracle
db_pool/oravol   516M  3.88G  30.1M  -
TID{root}#
Create a simple pool and datasets:
TID{root}# zpool create app_pool mirror c1t6d0 c1t8d0
TID{root}# df -h /app_pool
Filesystem             size   used  avail capacity  Mounted on
app_pool               2.0G    31K   2.0G     1%    /app_pool
TID{root}# zfs create app_pool/sap
TID{root}# zfs create app_pool/sapdata
TID{root}# zfs create app_pool/saporg
TID{root}# zfs create app_pool/saplog
TID{root}#
To list the datasets in the pool:
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool           254K  1.95G    35K  /app_pool
app_pool/sap        31K  1.95G    31K  /app_pool/sap
app_pool/sapdata    31K  1.95G    31K  /app_pool/sapdata
app_pool/saplog     31K  1.95G    31K  /app_pool/saplog
app_pool/saporg     31K  1.95G    31K  /app_pool/saporg
TID{root}#
In the output above, the pool's 1.95 GB is shared by all of the datasets. If this were a traditional file system, you might conclude there was roughly 10 GB available in total for app_pool and its four datasets.
How does this work?
TID{root}# mkfile 500m /app_pool/saplog/slog.log
TID{root}# zfs list -r app_pool
NAME               USED  AVAIL  REFER  MOUNTPOINT
app_pool           501M  1.46G    35K  /app_pool
app_pool/sap        31K  1.46G    31K  /app_pool/sap
app_pool/sapdata    31K  1.46G    31K  /app_pool/sapdata
app_pool/saplog    500M  1.46G   500M  /app_pool/saplog
app_pool/saporg     31K  1.46G    31K  /app_pool/saporg
TID{root}#
Note: the USED column for /app_pool/saplog shows 500 MB in use. The other datasets show only their metadata size (31K), yet their available space has dropped to 1.46 GB. That is the free space left to them after /app_pool/saplog consumed 500 MB: every dataset draws from the total pool, so unless you set a quota and reservation no particular dataset has a guaranteed size. It is first come, first served.
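If one of these datasets must keep its share of the pool no matter what the others consume, reserve the space up front (the value here is only illustrative):
zfs set reservation=500m app_pool/sapdata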

Monday, June 8, 2015

ZFS pool aka Zpool maintenance and performance

Continuing "Zpool" chapters...
There are several useful commands for maintaining and checking the performance of a ZFS pool (zpool) on the Solaris operating system.

Here I am showing some of these commands, with examples that will help when you perform this kind of activity in your environment.

I).Create type of ZFS Pools aka "ZPOOL" on Solaris
II).How to Import and Export ZFS Pool aka ZPOOL
III).How to upgrade ZPOOL version on Solaris OS

Mainly we use them for disk replacement, pool scrubs, and I/O performance checks.

Run a scrub on the apps_pool pool:
TID{root}# zpool scrub apps_pool
TID{root}# zpool status apps_pool
  pool: apps_pool
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Mon Jun  8 23:27:11 2015
config:

        NAME        STATE     READ WRITE CKSUM
        apps_pool   ONLINE       0     0     0
          c1t9d0    ONLINE       0     0     0

errors: No known data errors
TID{root}#
To offline a disk temporarily (until next reboot):
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
TID{root}# zpool offline -t ora_pool c1t2d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  OFFLINE      0     0     0

errors: No known data errors
TID{root}#
Bring the disk back online:
TID{root}# zpool online ora_pool c1t2d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 19.5K in 0h0m with 0 errors on Mon Jun  8 23:33:06 2015
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
Clear error count:
TID{root}# zpool clear ora_pool
TID{root}#
Replacing a failed disk with a new disk:
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scan: resilvered 4.50K in 0h0m with 0 errors on Mon Jun  8 23:36:18 2015
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  OFFLINE      0     0     0

errors: No known data errors
TID{root}# zpool replace ora_pool c1t2d0 c1t14d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 82.5K in 0h0m with 0 errors on Mon Jun  8 23:40:47 2015
config:

        NAME         STATE     READ WRITE CKSUM
        ora_pool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
Breaking the mirror in pool "ora_pool":
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 82.5K in 0h0m with 0 errors on Mon Jun  8 23:40:47 2015
config:

        NAME         STATE     READ WRITE CKSUM
        ora_pool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0

errors: No known data errors
TID{root}# zpool detach ora_pool c1t14d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 82.5K in 0h0m with 0 errors on Mon Jun  8 23:40:47 2015
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    ONLINE       0     0     0
          c1t6d0    ONLINE       0     0     0

errors: No known data errors
TID{root}# 
Re-mirroring the "ora_pool" pool:
TID{root}# zpool attach ora_pool c1t6d0 c1t14d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 82.5K in 0h0m with 0 errors on Mon Jun  8 23:44:49 2015
config:

        NAME         STATE     READ WRITE CKSUM
        ora_pool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
To get all the supported pool properties, e.g.:
TID{root}# zpool get all ora_pool
NAME      PROPERTY       VALUE       SOURCE
ora_pool  size           1.98G       -
ora_pool  capacity       0%          -
ora_pool  altroot        -           default
ora_pool  health         ONLINE      -
ora_pool  guid           12018882338300853730  default
ora_pool  version        29          default
ora_pool  bootfs         -           default
ora_pool  delegation     on          default
ora_pool  autoreplace    off         default
ora_pool  cachefile      -           default
ora_pool  failmode       wait        default
ora_pool  listsnapshots  on          default
ora_pool  autoexpand     off         default
ora_pool  free           1.98G       -
ora_pool  allocated      90K         -
ora_pool  readonly       off         -
TID{root}#
zpool iostat displays I/O statistics for the given pools. When given an interval, the statistics are printed every interval seconds until Ctrl-C is pressed. If no pools are specified, statistics for every pool in the system are shown. If a count is specified, the command exits after count reports are printed.

Note: always ignore the very first report, which shows statistics accumulated since system boot.

To check I/O performance for all pools, 1-second interval, 2 reports:
TID{root}# zpool iostat 1 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0     55    836
db_pool     73.5K  1.98G      0      0     53    818
ora_pool    73.5K  1.98G      0      0      3    100
pool_strip    85K  1.98G      0      0     12    238
rpool       6.33G  9.54G      0      0  28.4K  5.15K
test2       12.4M  3.96G      0      0    107     10
zonepool    11.0G  4.85G      0      0     71    193
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0      0      0
db_pool     73.5K  1.98G      0      0      0      0
ora_pool    73.5K  1.98G      0      0      0      0
pool_strip    85K  1.98G      0      0      0      0
rpool       6.33G  9.54G      0      0      0      0
test2       12.4M  3.96G      0      0      0      0
zonepool    11.0G  4.85G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
TID{root}#
To check I/O performance for all pools, 1-second interval, 2 reports, verbose:
TID{root}# zpool iostat -v 1 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0     54    825
  c1t9d0      85K  1.98G      0      0     54    825
----------  -----  -----  -----  -----  -----  -----
db_pool     73.5K  1.98G      0      0     52    807
  c1t8d0    73.5K  1.98G      0      0     52    807
----------  -----  -----  -----  -----  -----  -----
ora_pool    73.5K  1.98G      0      0      3     98
  mirror    73.5K  1.98G      0      0      3     98
    c1t6d0      -      -      0      0     54    826
    c1t2d0      -      -      0      0     51    826
----------  -----  -----  -----  -----  -----  -----
pool_strip    85K  1.98G      0      0     12    238
  c1t10d0     85K  1.98G      0      0     12    238
----------  -----  -----  -----  -----  -----  -----
rpool       6.33G  9.54G      0      0  28.3K  5.13K
  c1t0d0s0  6.33G  9.54G      0      0  28.3K  5.13K
----------  -----  -----  -----  -----  -----  -----
test2       12.4M  3.96G      0      0    107     10
  c1t4d0    6.10M  1.98G      0      0     52      4
  c1t3d0    6.31M  1.98G      0      0     55      6
----------  -----  -----  -----  -----  -----  -----
zonepool    11.0G  4.85G      0      0     71    192
  c1t1d0    11.0G  4.85G      0      0     71    192
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0      0      0
  c1t9d0      85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
db_pool     73.5K  1.98G      0      0      0      0
  c1t8d0    73.5K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
ora_pool    73.5K  1.98G      0      0      0      0
  mirror    73.5K  1.98G      0      0      0      0
    c1t6d0      -      -      0      0      0      0
    c1t2d0      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
pool_strip    85K  1.98G      0      0      0      0
  c1t10d0     85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
rpool       6.33G  9.54G      0      0      0      0
  c1t0d0s0  6.33G  9.54G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
test2       12.4M  3.96G      0      0      0      0
  c1t4d0    6.10M  1.98G      0      0      0      0
  c1t3d0    6.31M  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
zonepool    11.0G  4.85G      0      0      0      0
  c1t1d0    11.0G  4.85G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

TID{root}#
To check I/O performance for all pools every 2 seconds, verbose:
TID{root}# zpool iostat -v 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0     52    792
  c1t9d0      85K  1.98G      0      0     52    792
----------  -----  -----  -----  -----  -----  -----
db_pool     73.5K  1.98G      0      0     50    774
  c1t8d0    73.5K  1.98G      0      0     50    774
----------  -----  -----  -----  -----  -----  -----
ora_pool    73.5K  1.98G      0      0      2     94
  mirror    73.5K  1.98G      0      0      2     94
    c1t6d0      -      -      0      0     52    792
    c1t2d0      -      -      0      0     49    792
----------  -----  -----  -----  -----  -----  -----
pool_strip    85K  1.98G      0      0     12    236
  c1t10d0     85K  1.98G      0      0     12    236
----------  -----  -----  -----  -----  -----  -----
rpool       6.33G  9.54G      0      0  28.0K  5.09K
  c1t0d0s0  6.33G  9.54G      0      0  28.0K  5.09K
----------  -----  -----  -----  -----  -----  -----
test2       12.4M  3.96G      0      0    106     10
  c1t4d0    6.10M  1.98G      0      0     52      4
  c1t3d0    6.31M  1.98G      0      0     54      6
----------  -----  -----  -----  -----  -----  -----
zonepool    11.0G  4.85G      0      0     70    190
  c1t1d0    11.0G  4.85G      0      0     70    190
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0      0      0
  c1t9d0      85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
db_pool     73.5K  1.98G      0      0      0      0
  c1t8d0    73.5K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
ora_pool    73.5K  1.98G      0      0      0      0
  mirror    73.5K  1.98G      0      0      0      0
    c1t6d0      -      -      0      0      0      0
    c1t2d0      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
pool_strip    85K  1.98G      0      0      0      0
  c1t10d0     85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
rpool       6.33G  9.54G      0      0      0      0
  c1t0d0s0  6.33G  9.54G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
test2       12.4M  3.96G      0      0      0      0
  c1t4d0    6.10M  1.98G      0      0      0      0
  c1t3d0    6.31M  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
zonepool    11.0G  4.85G      0      0      0      0
  c1t1d0    11.0G  4.85G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
TID{root}#
To check a particular pool's I/O performance every 2 seconds, verbose:
TID{root}# zpool iostat -v apps_pool 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0     50    316
  c1t9d0      97K  1.98G      0      0     50    316
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

^C
TID{root}#
To check a particular pool's I/O performance, 1-second interval, 3 reports, verbose:
TID{root}# zpool iostat -v apps_pool  1 3
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0     50    314
  c1t9d0      97K  1.98G      0      0     50    314
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

TID{root}#
To check a particular pool's I/O performance, 1-second interval, 3 reports:
TID{root}# zpool iostat apps_pool  1 3
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0     50    313
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
TID{root}#
To check a particular pool's I/O performance every 5 seconds:
TID{root}# zpool iostat apps_pool  5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0     50    313
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
apps_pool     97K  1.98G      0      0      0      0
^C
TID{root}#
To check I/O performance for all pools every 5 seconds:
TID{root}# zpool iostat 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0     48    304
db_pool       81K  2.97G      0      0     17    552
ora_pool      90K  1.98G      0      0      3     32
pool_strip    85K  1.98G      0      0      7    152
rpool       6.34G  9.54G      0      0  18.5K  5.38K
test2       12.4M  3.96G      0      0     69      6
zonepool    11.0G  4.85G      0      0     46    125
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
db_pool       81K  2.97G      0      0      0      0
ora_pool      90K  1.98G      0      0      0      0
pool_strip    85K  1.98G      0      0      0      0
rpool       6.34G  9.54G      0      0      0      0
test2       12.4M  3.96G      0      0      0      0
zonepool    11.0G  4.85G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
db_pool       81K  2.97G      0      0      0      0
ora_pool      90K  1.98G      0      0      0      0
pool_strip    85K  1.98G      0      0      0      0
rpool       6.34G  9.54G      0      0      0      0
test2       12.4M  3.96G      0      0      0      0
zonepool    11.0G  4.85G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
^C
TID{root}#

How to upgrade ZPOOL version on Solaris OS

General information:
We upgrade the zpool version according to vendor advice in order to get the best support and performance.

Whether the pool version can be raised also depends on the client environment: some systems may run a lower ZFS version, and if the pool has to be imported on other systems for any reason, its version must be supported on every system where it will be imported.
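If the pool must stay importable on an older system, one option (assuming your release allows setting the version property at creation time) is to create the pool at the older version explicitly; the pool name and disk below are placeholders:
zpool create -o version=20 compat_pool c1t5d0
zpool get version compat_pool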

Upgrading zpools:
The pool's version property shows the current version; in this case my zpool is at version 25. To find out the highest version this system supports, use the "zpool upgrade -v" command.
TID{root}# zpool list pool_strip
NAME         SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
pool_strip  1.98G    65K  1.98G     0%  ONLINE  -
TID{root}# zpool get all pool_strip | grep -i version
pool_strip  version        25          local
TID{root}#

TID{root}# zpool upgrade -v
This system is currently running ZFS pool version 29.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Reserved
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements
 29  RAID-Z/mirror hybrid allocator

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

TID{root}#
To upgrade a pool to a particular version:
TID{root}# zpool upgrade -V 27 pool_strip
This system is currently running ZFS pool version 29.

Successfully upgraded 'pool_strip' from version 25 to version 27

TID{root}#

TID{root}# zpool get all pool_strip | grep -i version
pool_strip  version        27          local
TID{root}#
Upgrade the pool to the latest version currently supported and available on the server:
TID{root}# zpool upgrade pool_strip
This system is currently running ZFS pool version 29.

Successfully upgraded 'pool_strip' from version 27 to version 29

TID{root}#
Upgrading all the pools on the system: in this scenario we have three pools running lower versions that can be upgraded to the latest version.
TID{root}# zpool list
NAME         SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
apps_pool   1.98G    79K  1.98G     0%  ONLINE  -
db_pool     1.98G    69K  1.98G     0%  ONLINE  -
ora_pool    1.98G    82K  1.98G     0%  ONLINE  -
pool_strip  1.98G    85K  1.98G     0%  ONLINE  -
rpool       15.9G  6.33G  9.54G    39%  ONLINE  -
TID{root}#

TID{root}# zpool get version apps_pool
NAME       PROPERTY  VALUE    SOURCE
apps_pool  version   25       local
TID{root}# zpool get version db_pool
NAME     PROPERTY  VALUE    SOURCE
db_pool  version   20       local
TID{root}# zpool get version ora_pool
NAME      PROPERTY  VALUE    SOURCE
ora_pool  version   23       local
TID{root}# zpool get version pool_strip
NAME        PROPERTY  VALUE    SOURCE
pool_strip  version   29       default
TID{root}# zpool get version rpool
NAME   PROPERTY  VALUE    SOURCE
rpool  version   29       default
TID{root}#

TID{root}# zpool upgrade -a
This system is currently running ZFS pool version 29.
Successfully upgraded 'apps_pool'
Successfully upgraded 'db_pool'
Successfully upgraded 'ora_pool'
TID{root}# zpool get version apps_pool
NAME       PROPERTY  VALUE    SOURCE
apps_pool  version   29       default
TID{root}# zpool get version db_pool
NAME     PROPERTY  VALUE    SOURCE
db_pool  version   29       default
TID{root}# zpool get version ora_pool
NAME      PROPERTY  VALUE    SOURCE
ora_pool  version   29       default
TID{root}#
Note: once upgraded, these pools cannot be imported on any system running a zpool version lower than 29.

Friday, June 5, 2015

How to Import and Export ZFS Pool aka ZPOOL

There are multiple ways to import and export a zpool. These commands are useful for any Solaris administrator supporting a ZFS environment.

Here I show several ways to import and export zpools, with a number of scenarios and examples.

Please refer to this link for basic pool creation: Create POOLS

To list available pools which can be import:
TID{root}# zpool import
  pool: test2
    id: 14839829789894250168
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        test2       ONLINE
          c1t4d0    ONLINE
          c1t3d0    ONLINE

  pool: zonepool
    id: 9851977220631207786
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        zonepool    ONLINE
          c1t1d0    ONLINE
TID{root}#
To import all the pools found by the "zpool import" command above (the pools it lists are ones that were previously exported):
TID{root}# zpool list
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool  15.9G  6.34G  9.54G    39%  ONLINE  -
TID{root}# zpool import -a
TID{root}# zpool list
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool     15.9G  6.34G  9.54G    39%  ONLINE  -
test2     3.97G  12.5M  3.96G     0%  ONLINE  -
zonepool  15.9G  11.0G  4.85G    69%  ONLINE  -
TID{root}#
To import "DESTROYED POOL":
Those disk are not used anywhere or else the data will be lost.
TID{root}# zpool import -D
  pool: strip_pool
    id: 10696204917074183490
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        strip_pool    ONLINE
          c1t2d0      ONLINE
        spares
          c1t5d0
TID{root}# zpool import -D strip_pool
TID{root}# zpool list strip_pool
NAME        SIZE    ALLOC   FREE      CAP  HEALTH  ALTROOT
strip_pool  1.98G   112K    1.98G     0%   ONLINE  -
TID{root}# zpool status strip_pool
  pool: strip_pool
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        strip_pool    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
        spares
            c1t5d0    AVAIL

errors: No known data errors
TID{root}#
Method 2: importing a destroyed pool using its pool ID:
TID{root}# zpool import -D
  pool: apps_pool
    id: 5646360073422230766
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        apps_pool   ONLINE
          c1t2d0    ONLINE
          c1t5d0    ONLINE

  pool: poolm
    id: 11932504340727542212
 state: FAULTED (DESTROYED)
TID{root}# zpool import -D 5646360073422230766
TID{root}# zpool list
NAME        SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
apps_pool  3.97G    94K  3.97G     0%  ONLINE  -
rpool      15.9G  6.33G  9.54G    39%  ONLINE  -
test2      3.97G  12.4M  3.96G     0%  ONLINE  -
zonepool   15.9G  11.0G  4.85G    69%  ONLINE  -
TID{root}#
Rename a ZPOOL using the import command:
TID{root}# zpool list test_pool
NAME        SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
test_pool  3.97G   146K  3.97G     0%  ONLINE  -
TID{root}# zpool export test_pool
TID{root}# zpool import test_pool apps_pool
TID{root}# zpool list apps_pool
NAME        SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
apps_pool  3.97G   146K  3.97G     0%  ONLINE  -
TID{root}#
To export a pool:
TID{root}# zpool export apps_pool
TID{root}# zpool list apps_pool
cannot open 'apps_pool': no such pool
TID{root}#
To export a pool forcefully:
TID{root}# zpool export -f apps_pool
TID{root}# zpool list apps_pool
cannot open 'apps_pool': no such pool
TID{root}#
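When importing a pool on a recovery or staging system, you may not want its datasets mounted over the live paths. zpool import supports an alternate root for this. A minimal sketch, where apps_pool and /a are only placeholders:

# import the pool with all of its mountpoints relocated under /a
zpool import -R /a apps_pool
# export it again when finished
zpool export apps_pool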

Thursday, June 4, 2015

Create type of ZFS Pools aka "ZPOOL" on Solaris

Here I explain how to create multiple types of ZFS pools aka ZPOOLs.

Generally, with the traditional method a file system resides on a single DISK / LUN, so we cannot create a file system larger than that DISK or LUN.

In that case we need volume manager software to combine multiple disks / LUNs into one group and use that group to create multiple, larger file systems.

ZPOOL is a simplified storage / volume manager, one of the good features of the Solaris 10 Operating System.
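To illustrate the point, with ZFS the pooling and file system creation collapse into two commands; no separate volume manager, newfs, or /etc/vfstab entry is needed. A minimal sketch (the disk names and dataset name are placeholders):

# combine two disks into one pool, then carve a file system out of it
zpool create app_pool c1t2d0 c1t3d0
zfs create app_pool/data    # mounted automatically at /app_pool/data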


Capacity:
ZFS is a 128-bit file system,[35][36] so it can address 1.84 × 10^19 times more data than 64-bit systems such as Btrfs. The limitations of ZFS are designed to be so large that they should not be encountered in the foreseeable future.
Some theoretical limits in ZFS are:
  • 2^48: number of entries in any individual directory[37]
  • 16 exbibytes (2^64 bytes): maximum size of a single file
  • 16 exbibytes: maximum size of any attribute
  • 256 zebibytes (2^78 bytes): maximum size of any zpool
  • 2^56: number of attributes of a file (actually constrained to 2^48 for the number of files in a ZFS file system)
  • 2^64: number of devices in any zpool
  • 2^64: number of zpools in a system
  • 2^64: number of file systems in a zpool
Type of Layouts:

Stripe:
A stripe has no redundancy, but you get the full capacity of the drives.
The stripe layout is useful when speed and capacity are the only concerns.
You can compare it with "raid-0".

In a stripe, if one disk fails, all data is lost.

Mirror:
Generally there are two commonly used mirror types.

A two-way mirror loses its redundancy after one drive loss.
A three-way mirror loses its redundancy after two drive losses.
You can compare it with "raid-1".

RaidZ and RaidZ-1:
RAIDZ and RAIDZ1 are both similar.
RAIDZ and RAIDZ-1 lose their redundancy after one drive loss.
You can roughly compare them with "raid-5", except there's no "write-hole" problem.

RaidZ-2:
Loses its redundancy if two disks are lost at the same time.
You can roughly compare it with "raid-6", except there's no "write-hole" problem.

RaidZ-3:
Loses its redundancy if three disks are lost at the same time.
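As a quick reference for the layouts above, the vdev keyword on the zpool create command decides the type; the sections below walk through each one with real output. A hedged sketch (all pool and disk names are placeholders):

# stripe (no redundancy)
zpool create pool0 c1t2d0 c1t3d0
# two-way mirror
zpool create pool1 mirror c1t2d0 c1t3d0
# raidz / raidz1 (single parity)
zpool create pool2 raidz c1t2d0 c1t3d0 c1t4d0
# raidz2 (double parity)
zpool create pool3 raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# raidz3 (triple parity)
zpool create pool4 raidz3 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0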

We will now see, step by step, how to create multiple types of ZPOOLs with multiple options.

Dry run for pool creation:

TID{root}# zpool create -n drypool c1t2d0
would create 'drypool' with the following layout:

        drypool
          c1t2d0
TID{root}# zpool list drypool
cannot open 'drypool': no such pool
TID{root}#
Creating a basic pool "strip_pool" and checking the pool status:
TID{root}# zpool create strip_pool c1t2d0
TID{root}# zpool status strip_pool
  pool: strip_pool
 state: ONLINE
 scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        strip_pool       ONLINE       0     0     0
            c1t2d0       ONLINE       0     0     0

errors: No known data errors
TID{root}# zpool list strip_pool
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
strip_pool  1.98G  78.5K  1.98G     0%  ONLINE  -
TID{root}# df -h /strip_pool
Filesystem             size   used  avail capacity  Mounted on
strip_pool                  2.0G    31K   2.0G     1%    /strip_pool
TID{root}#
You may sometimes see an error when you use a disk that was already used by another pool or elsewhere. In that case you have to use "-f" to force the pool creation, but first make sure the disk does not currently hold any data and is not in use anywhere, i.e. it is a free disk that can safely be reused.
e.g.
TID{root}# zpool create strip_pool c1t2d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t2d0s0 contains a ufs filesystem.
TID{root}# zpool create -f strip_pool c1t2d0
TID{root}# zpool list strip_pool
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
strip_pool  1.98G  78.5K  1.98G     0%  ONLINE  -
TID{root}#
Create a pool with a different mount point:
TID{root}# zpool create -m /strip_poolmount strip_pool c1t2d0
TID{root}# df -h /strip_poolmount
Filesystem             size   used  avail capacity  Mounted on
strip_pool                  2.0G    31K   2.0G     1%    /strip_poolmount
TID{root}#
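The mountpoint can also be changed after the pool exists, since it is just a ZFS property on the pool's top-level dataset. A minimal sketch (the new path is only an example):

# move the pool's top-level dataset to a different mountpoint
zfs set mountpoint=/apps/strip strip_pool
zfs get mountpoint strip_pool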
Create Mirror Pool:
TID{root}# zpool create poolm mirror c1t5d0 c1t6d0
TID{root}# df -h /poolm
Filesystem             size   used  avail capacity  Mounted on
poolm                  2.0G    31K   2.0G     1%    /poolm
TID{root}# zpool list poolm
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
poolm  1.98G  78.5K  1.98G     0%  ONLINE  -
TID{root}# zpool status poolm
  pool: poolm
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        poolm       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
Create a pool with two two-way mirror vdevs (a striped mirror) and check the status:
TID{root}# zpool create mirr2 mirror c1t2d0 c1t5d0 mirror c1t6d0 c1t8d0
TID{root}# df -h /mirr2
Filesystem             size   used  avail capacity  Mounted on
mirr2                  3.9G    31K   3.9G     1%    /mirr2
TID{root}# zpool list mirr2
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
mirr2  3.97G  81.5K  3.97G     0%  ONLINE  -
TID{root}# zpool status mirr2
  pool: mirr2
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mirr2       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
Create RAID-Z pool:
TID{root}# zpool create raidz raidz c1t5d0 c1t6d0 c1t8d0
cannot create 'raidz': name is reserved
pool name may have been omitted
TID{root}# zpool create poolz raidz c1t5d0 c1t6d0 c1t8d0
TID{root}# zpool list poolz
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
poolz  5.94G   166K  5.94G     0%  ONLINE  -
TID{root}# df -h /poolz
Filesystem             size   used  avail capacity  Mounted on
poolz                  3.9G    34K   3.9G     1%    /poolz
TID{root}# zpool status poolz
  pool: poolz
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        poolz       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
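Notice that zpool list reports the raw size of the three 2 GB disks (5.94G), while df shows only about 3.9G, because one disk's worth of space goes to parity in raidz1. A rough back-of-the-envelope check, assuming a POSIX shell such as ksh or bash:

# usable raidz1 space ≈ (number of disks - parity disks) x disk size
echo $(( (3 - 1) * 2 ))   # ≈ 4 (GB), matching the ~3.9G shown by df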
Create RAID-Z1 pool:
TID{root}# zpool create poolz1 raidz1 c1t2d0 c1t5d0 c1t6d0 c1t8d0 c1t9d0 c1t10d0
TID{root}# zpool status poolz1
  pool: poolz1
 state: ONLINE
 scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        poolz1       ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c1t2d0   ONLINE       0     0     0
            c1t5d0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
Create RAID-Z2 pool:
TID{root}# zpool create poolz2 raidz2 c1t2d0 c1t5d0 c1t6d0 c1t8d0 c1t9d0 c1t10d0
TID{root}# zpool status poolz2
  pool: poolz2
 state: ONLINE
 scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        poolz2       ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            c1t2d0   ONLINE       0     0     0
            c1t5d0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
Create RAID-Z3 pool:
TID{root}# zpool create poolz3 raidz3 c1t2d0 c1t5d0 c1t6d0 c1t8d0 c1t9d0 c1t10d0
TID{root}# zpool status poolz3
  pool: poolz3
 state: ONLINE
 scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        poolz3       ONLINE       0     0     0
          raidz3-0   ONLINE       0     0     0
            c1t2d0   ONLINE       0     0     0
            c1t5d0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
Adding a disk and expanding an existing pool's size:
TID{root}# zpool status strip_pool
  pool: strip_pool
 state: ONLINE
 scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        strip_pool    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0

errors: No known data errors
TID{root}# df -h /strip_pool
Filesystem             size   used  avail capacity  Mounted on
strip_pool               2.0G    31K   2.0G     1%    /strip_pool
TID{root}# zpool add strip_pool c1t5d0
TID{root}# zpool status strip_pool
  pool: strip_pool
 state: ONLINE
 scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        strip_pool    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
            c1t5d0    ONLINE       0     0     0

errors: No known data errors
TID{root}# df -h /strip_pool
Filesystem             size   used  avail capacity  Mounted on
strip_pool               3.9G    31K   3.9G     1%    /strip_pool
TID{root}#
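Note that zpool add stripes a new top-level vdev into the pool (and cannot be undone), while zpool attach mirrors a new disk onto an existing one. If the goal were redundancy rather than capacity, the attach form would be used instead. A hedged sketch, reusing the disk names from above:

# mirror c1t5d0 onto the existing c1t2d0 instead of striping it
zpool attach strip_pool c1t2d0 c1t5d0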
Adding a spare disk to an existing pool:
TID{root}# zpool list strip_pool
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
strip_pool  1.98G  78.5K  1.98G     0%  ONLINE  -
TID{root}# zpool status strip_pool
  pool: strip_pool
 state: ONLINE
 scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        strip_pool    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0

errors: No known data errors
TID{root}# zpool add strip_pool spare c1t5d0
TID{root}# zpool status strip_pool
  pool: strip_pool
 state: ONLINE
 scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        strip_pool    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
        spares
            c1t5d0    AVAIL

errors: No known data errors
TID{root}# zpool list strip_pool
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
strip_pool  1.98G   132K  1.98G     0%  ONLINE  -
TID{root}#
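On Solaris a hot spare is normally pulled in automatically when a device faults, but it can also be brought in by hand with zpool replace. A minimal sketch, assuming c1t2d0 were the failing disk:

# replace the failed data disk with the configured spare
zpool replace strip_pool c1t2d0 c1t5d0
zpool status strip_pool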
Multiple ways to check pool information:
TID{root}# zpool status -x strip_pool
pool 'strip_pool' is healthy
TID{root}# zpool status -v strip_pool
  pool: strip_pool
 state: ONLINE
 scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        strip_pool    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
        spares
            c1t5d0    AVAIL

errors: No known data errors
TID{root}# zpool list strip_pool
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
strip_pool  1.98G  95.5K  1.98G     0%  ONLINE  -
TID{root}#
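A few more read-only commands that are handy when checking a pool; a hedged sketch (the refresh interval on iostat is just an example):

# per-vdev I/O statistics, refreshed every 5 seconds
zpool iostat -v strip_pool 5
# full list of pool properties
zpool get all strip_pool
# log of the zpool/zfs commands run against the pool
zpool history strip_pool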
We will see more useful commands related to ZPOOL in upcoming posts.