Continuing the "Zpool" chapters...

There are several useful commands for maintaining ZFS pools (aka "ZPOOL") and checking their performance on the Solaris operating system. Here I am showing some of these commands with examples, which will help you when you perform this kind of activity in your environment.

I). Creating the different types of ZFS pools aka "ZPOOL" on Solaris
II). How to import and export a ZFS pool aka ZPOOL
III). How to upgrade the ZPOOL version on Solaris OS

The most common activities are disk replacement, pool scrubs, and I/O performance checks.

Run a scrub on all file systems under apps_pool:
TID{root}# zpool scrub apps_pool
TID{root}# zpool status apps_pool
  pool: apps_pool
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Mon Jun 8 23:27:11 2015
config:

        NAME        STATE     READ WRITE CKSUM
        apps_pool   ONLINE       0     0     0
          c1t9d0    ONLINE       0     0     0

errors: No known data errors
TID{root}#
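A scrub runs in the background, so scripts that schedule scrubs usually need to poll for completion. Below is a minimal sketch; the function name `scrub_state` and the pasted scan lines are my own illustration, not from the original post. On a live system the argument would come from `zpool status <pool>`:

```shell
#!/bin/sh
# Classify the "scan:" line of `zpool status` output to tell whether
# a scrub is still running. scrub_state is a hypothetical helper name.
scrub_state() {
    case "$1" in
        *"scrub in progress"*) echo "running"  ;;
        *"scrub repaired"*)    echo "finished" ;;
        *)                     echo "none"     ;;
    esac
}

# Sample scan line, copied from the status output above:
scrub_state "scan: scrub repaired 0 in 0h0m with 0 errors on Mon Jun 8 23:27:11 2015"
```

A wrapper script could sleep and re-check until the state is no longer "running".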
To take a disk offline temporarily (until the next reboot):
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
TID{root}# zpool offline -t ora_pool c1t2d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  OFFLINE      0     0     0

errors: No known data errors
TID{root}#
Bring the disk back online:
TID{root}# zpool online ora_pool c1t2d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 19.5K in 0h0m with 0 errors on Mon Jun 8 23:33:06 2015
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
Clear the error counters:
TID{root}# zpool clear ora_pool
TID{root}#
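Before clearing, it can help to see which devices actually accumulated errors. A small sketch that parses the READ/WRITE/CKSUM columns of the config section; the pasted sample (including the invented write-error count on c1t6d0) is hypothetical data for illustration, and on a live system you would pipe `zpool status <pool>` through the same awk:

```shell
#!/bin/sh
# Print devices whose READ, WRITE or CKSUM counters are non-zero,
# from a pasted `zpool status` config section (sample data below).
status_config='        NAME        STATE     READ WRITE CKSUM
        ora_pool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     2     0
            c1t2d0  ONLINE       0     0     0'

# Skip the header row, then sum columns 3-5 per row.
echo "$status_config" | awk 'NR > 1 && ($3 + $4 + $5) > 0 { print $1 }'
```

Note that `zpool clear` also accepts a device argument, e.g. `zpool clear ora_pool c1t2d0`, to reset the counters of a single disk rather than the whole pool.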
Replacing a failed disk with a new disk:
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scan: resilvered 4.50K in 0h0m with 0 errors on Mon Jun 8 23:36:18 2015
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  OFFLINE      0     0     0

errors: No known data errors
TID{root}# zpool replace ora_pool c1t2d0 c1t14d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 82.5K in 0h0m with 0 errors on Mon Jun 8 23:40:47 2015
config:

        NAME         STATE     READ WRITE CKSUM
        ora_pool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
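After a replacement (or during a resilver), it is handy to confirm that no device is left in a non-ONLINE state. A minimal sketch that filters the STATE column of the config section; the pasted sample is the DEGRADED output from before the replacement, and on a live system you would pipe `zpool status <pool>` into the same awk:

```shell
#!/bin/sh
# List rows from a `zpool status` config section whose STATE column
# is not ONLINE (sample data pasted from the earlier DEGRADED output).
status_config='        NAME        STATE     READ WRITE CKSUM
        ora_pool    DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t2d0  OFFLINE      0     0     0'

# Skip the header row; print NAME and STATE for non-ONLINE rows.
echo "$status_config" | awk 'NR > 1 && $2 != "ONLINE" { print $1, $2 }'
```

An empty result means every vdev in the section is ONLINE.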
Breaking the mirror in the pool "ora_pool":
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 82.5K in 0h0m with 0 errors on Mon Jun 8 23:40:47 2015
config:

        NAME         STATE     READ WRITE CKSUM
        ora_pool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0

errors: No known data errors
TID{root}# zpool detach ora_pool c1t14d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 82.5K in 0h0m with 0 errors on Mon Jun 8 23:40:47 2015
config:

        NAME        STATE     READ WRITE CKSUM
        ora_pool    ONLINE       0     0     0
          c1t6d0    ONLINE       0     0     0

errors: No known data errors
TID{root}#
Attaching the mirror back to the pool "ora_pool":
TID{root}# zpool attach ora_pool c1t6d0 c1t14d0
TID{root}# zpool status ora_pool
  pool: ora_pool
 state: ONLINE
 scan: resilvered 82.5K in 0h0m with 0 errors on Mon Jun 8 23:44:49 2015
config:

        NAME         STATE     READ WRITE CKSUM
        ora_pool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0

errors: No known data errors
TID{root}#
To list all the supported properties of a pool:
TID{root}# zpool get all ora_pool
NAME      PROPERTY       VALUE                 SOURCE
ora_pool  size           1.98G                 -
ora_pool  capacity       0%                    -
ora_pool  altroot        -                     default
ora_pool  health         ONLINE                -
ora_pool  guid           12018882338300853730  default
ora_pool  version        29                    default
ora_pool  bootfs         -                     default
ora_pool  delegation     on                    default
ora_pool  autoreplace    off                   default
ora_pool  cachefile      -                     default
ora_pool  failmode       wait                  default
ora_pool  listsnapshots  on                    default
ora_pool  autoexpand     off                   default
ora_pool  free           1.98G                 -
ora_pool  allocated      90K                   -
ora_pool  readonly       off                   -
TID{root}#
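Individual properties can also be read with `zpool get <property> <pool>` and changed with `zpool set <property>=<value> <pool>`. For scripting, the VALUE column is easy to pull out with awk; a small sketch, using lines pasted from the listing above instead of live output:

```shell
#!/bin/sh
# Extract one property's VALUE column from `zpool get all` output.
# The variable below holds sample data copied from the listing above.
props='NAME      PROPERTY       VALUE   SOURCE
ora_pool  failmode       wait    default
ora_pool  autoexpand     off     default'

# Match the requested property in column 2 and print column 3.
echo "$props" | awk -v p=autoexpand '$2 == p { print $3 }'
```

On a live system, `zpool get autoexpand ora_pool` prints the same columns for just that property.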
The zpool iostat command displays I/O statistics for the given pools. When given an interval, the statistics are printed every interval seconds until Ctrl-C is pressed. If no pools are specified, statistics for every pool in the system are shown. If a count is specified, the command exits after count reports are printed.
Note: always ignore the very first report, as it shows averages since system boot rather than current activity.
To check pool I/O performance at a 1-second interval, two reports:
TID{root}# zpool iostat 1 2
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0     55    836
db_pool     73.5K  1.98G      0      0     53    818
ora_pool    73.5K  1.98G      0      0      3    100
pool_strip    85K  1.98G      0      0     12    238
rpool       6.33G  9.54G      0      0  28.4K  5.15K
test2       12.4M  3.96G      0      0    107     10
zonepool    11.0G  4.85G      0      0     71    193
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0      0      0
db_pool     73.5K  1.98G      0      0      0      0
ora_pool    73.5K  1.98G      0      0      0      0
pool_strip    85K  1.98G      0      0      0      0
rpool       6.33G  9.54G      0      0      0      0
test2       12.4M  3.96G      0      0      0      0
zonepool    11.0G  4.85G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
TID{root}#
To check pool I/O performance at a 1-second interval, two reports, verbosely (per-device statistics):
TID{root}# zpool iostat -v 1 2
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0     54    825
  c1t9d0      85K  1.98G      0      0     54    825
----------  -----  -----  -----  -----  -----  -----
db_pool     73.5K  1.98G      0      0     52    807
  c1t8d0    73.5K  1.98G      0      0     52    807
----------  -----  -----  -----  -----  -----  -----
ora_pool    73.5K  1.98G      0      0      3     98
  mirror    73.5K  1.98G      0      0      3     98
    c1t6d0      -      -      0      0     54    826
    c1t2d0      -      -      0      0     51    826
----------  -----  -----  -----  -----  -----  -----
pool_strip    85K  1.98G      0      0     12    238
  c1t10d0     85K  1.98G      0      0     12    238
----------  -----  -----  -----  -----  -----  -----
rpool       6.33G  9.54G      0      0  28.3K  5.13K
  c1t0d0s0  6.33G  9.54G      0      0  28.3K  5.13K
----------  -----  -----  -----  -----  -----  -----
test2       12.4M  3.96G      0      0    107     10
  c1t4d0    6.10M  1.98G      0      0     52      4
  c1t3d0    6.31M  1.98G      0      0     55      6
----------  -----  -----  -----  -----  -----  -----
zonepool    11.0G  4.85G      0      0     71    192
  c1t1d0    11.0G  4.85G      0      0     71    192
----------  -----  -----  -----  -----  -----  -----
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0      0      0
  c1t9d0      85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
db_pool     73.5K  1.98G      0      0      0      0
  c1t8d0    73.5K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
ora_pool    73.5K  1.98G      0      0      0      0
  mirror    73.5K  1.98G      0      0      0      0
    c1t6d0      -      -      0      0      0      0
    c1t2d0      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
pool_strip    85K  1.98G      0      0      0      0
  c1t10d0     85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
rpool       6.33G  9.54G      0      0      0      0
  c1t0d0s0  6.33G  9.54G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
test2       12.4M  3.96G      0      0      0      0
  c1t4d0    6.10M  1.98G      0      0      0      0
  c1t3d0    6.31M  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
zonepool    11.0G  4.85G      0      0      0      0
  c1t1d0    11.0G  4.85G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
TID{root}#
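Since only the reports after the first reflect current activity, scripts usually discard the first one. A sketch, assuming the verbose (-v) layout shown above, where each report repeats the "capacity" header line; the sample text here stands in for live `zpool iostat -v` output:

```shell
#!/bin/sh
# Drop the first (since-boot) report from `zpool iostat -v <int> <cnt>`
# output: start printing at the second occurrence of the "capacity"
# header line. The sample below is trimmed to one pool per report.
sample='              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0     55    836
----------  -----  -----  -----  -----  -----  -----
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----'

# Count header lines; print everything from the second header onward.
echo "$sample" | awk '/capacity/ { n++ } n >= 2'
```

On a live system the same awk filter would follow the pipe: `zpool iostat -v 1 3 | awk '/capacity/ { n++ } n >= 2'`.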
To check pool I/O performance every 2 seconds, verbosely (runs until interrupted):
TID{root}# zpool iostat -v 2
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0     52    792
  c1t9d0      85K  1.98G      0      0     52    792
----------  -----  -----  -----  -----  -----  -----
db_pool     73.5K  1.98G      0      0     50    774
  c1t8d0    73.5K  1.98G      0      0     50    774
----------  -----  -----  -----  -----  -----  -----
ora_pool    73.5K  1.98G      0      0      2     94
  mirror    73.5K  1.98G      0      0      2     94
    c1t6d0      -      -      0      0     52    792
    c1t2d0      -      -      0      0     49    792
----------  -----  -----  -----  -----  -----  -----
pool_strip    85K  1.98G      0      0     12    236
  c1t10d0     85K  1.98G      0      0     12    236
----------  -----  -----  -----  -----  -----  -----
rpool       6.33G  9.54G      0      0  28.0K  5.09K
  c1t0d0s0  6.33G  9.54G      0      0  28.0K  5.09K
----------  -----  -----  -----  -----  -----  -----
test2       12.4M  3.96G      0      0    106     10
  c1t4d0    6.10M  1.98G      0      0     52      4
  c1t3d0    6.31M  1.98G      0      0     54      6
----------  -----  -----  -----  -----  -----  -----
zonepool    11.0G  4.85G      0      0     70    190
  c1t1d0    11.0G  4.85G      0      0     70    190
----------  -----  -----  -----  -----  -----  -----
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     85K  1.98G      0      0      0      0
  c1t9d0      85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
db_pool     73.5K  1.98G      0      0      0      0
  c1t8d0    73.5K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
ora_pool    73.5K  1.98G      0      0      0      0
  mirror    73.5K  1.98G      0      0      0      0
    c1t6d0      -      -      0      0      0      0
    c1t2d0      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
pool_strip    85K  1.98G      0      0      0      0
  c1t10d0     85K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
rpool       6.33G  9.54G      0      0      0      0
  c1t0d0s0  6.33G  9.54G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
test2       12.4M  3.96G      0      0      0      0
  c1t4d0    6.10M  1.98G      0      0      0      0
  c1t3d0    6.31M  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
zonepool    11.0G  4.85G      0      0      0      0
  c1t1d0    11.0G  4.85G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
^C
TID{root}#
To check a particular pool's I/O performance every 2 seconds, verbosely:
TID{root}# zpool iostat -v apps_pool 2
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0     50    316
  c1t9d0      97K  1.98G      0      0     50    316
----------  -----  -----  -----  -----  -----  -----
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
^C
TID{root}#
To check a particular pool's I/O performance at a 1-second interval, three reports, verbosely:
TID{root}# zpool iostat -v apps_pool 1 3
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0     50    314
  c1t9d0      97K  1.98G      0      0     50    314
----------  -----  -----  -----  -----  -----  -----
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
apps_pool     97K  1.98G      0      0      0      0
  c1t9d0      97K  1.98G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
TID{root}#