Friday, February 14, 2014

Fail-back plan for Solaris OS patching in a VCS/SVM environment


If the patching activity fails for some unexpected reason (unfortunately), then we have to perform the tasks below to bring the server back to its normal operational status.

If the patching was not successful, reboot the server from the secondary disk and roll back the change.

Note: from this point on, treat the secondary disk as the primary disk.

Boot the server from the secondary disk into multi-user mode:
OK>boot disk1
Check whether any old metadb configuration is present:
[root@tpt01]#metadb -i
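For reference, output along these lines would flag stale replicas on the failed disk (the flags and block counts here are illustrative, not captured from this server; the M flag marks a replica with problems):
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c1t1d0s7
      M     p           16              unknown         /dev/dsk/c1t0d0s7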
The three steps below (clear DB, copy VTOC & create DB) are optional, but it is good practice to do them.
Clear old metadb configuration:
[root@tpt01]#metadb -d /dev/dsk/c1t1d0s7 /dev/dsk/c1t0d0s7
Copy the VTOC from the secondary disk to the primary disk (the old/corrupted disk):
[root@tpt01]#prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
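To confirm the copy worked, the target disk's VTOC can be printed and compared against the source (a quick sanity check of my own, not part of the original procedure):
[root@tpt01]#prtvtoc /dev/rdsk/c1t0d0s2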
SVM root mirroring steps. Create new metadbs with three replicas on each disk:
[root@tpt01]#metadb -afc3 /dev/dsk/c1t1d0s7 /dev/dsk/c1t0d0s7
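A quick count verifies that all six replicas (three per disk) were created; six is what I would expect here:
[root@tpt01]#metadb | grep -c s7
6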
Create the MD devices for the root (/) file system:
[root@tpt01]#metainit -f d11 1 1 c1t1d0s0
[root@tpt01]#metainit -f d12 1 1 c1t0d0s0
[root@tpt01]#metainit d10 -m d11
[root@tpt01]#metaroot d10
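metaroot edits both /etc/vfstab and /etc/system. A grep of /etc/system confirms the change; the md minor number below is what I would expect for d10, so treat it as illustrative:
[root@tpt01]#grep rootdev /etc/system
rootdev:/pseudo/md@0:0,10,blk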
Shut the server down to the OK prompt and boot it from the secondary disk (init 0 stops at the OK prompt; init 5 would power the box off):
[root@tpt01]#init 0
OK>boot disk1
Attach the second submirror to the main mirror:
[root@tpt01]#metattach d10 d12
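The attach starts a resync from d11 to d12; progress can be watched with metastat (the percentage shown is illustrative):
[root@tpt01]#metastat d10 | grep -i resync
      State: Resyncing
    Resync in progress: 15 % done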
If we have dedicated file systems such as /var, /opt, or /export/home, then we have to perform the same steps as above for each of them. As per my current setup, I have only / and swap. The mirroring for the root file system is already complete, so let me do the same for the swap device. Create the MD devices for swap:
[root@tpt01]#metainit -f d21 1 1 c1t1d0s1
[root@tpt01]#metainit -f d22 1 1 c1t0d0s1
[root@tpt01]#metainit d20 -m d21
[root@tpt01]#metattach d20 d22
Check whether the resync is in progress (watch the percentage until it completes):
[root@tpt01]# metastat | grep -i sync
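To wait for the sync to finish unattended, a small shell loop like this works (my own sketch, not part of the original procedure):
[root@tpt01]# while metastat | grep -i "resync in progress"; do sleep 60; done; echo "all mirrors in sync"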
Edit vfstab and add the MD entries. Note: the metaroot command automatically adds the MD device name for the / FS to /etc/vfstab; entries for all other file systems have to be added manually.
Here I am adding the entry for swap:
[root@tpt01]# vi /etc/vfstab
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -
/dev/md/dsk/d20 - - swap - no -
[root@tpt01]#
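A quick grep confirms both metadevice entries landed in vfstab. Note that swap will only move onto /dev/md/dsk/d20 after the next reboot, when swap -l should list the metadevice:
[root@tpt01]# grep /dev/md /etc/vfstab
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -
/dev/md/dsk/d20 - - swap - no -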
Now move the VCS configuration files back to their original names:
[root@tpt01]# mv /etc/rc3.d/back_S99vcs /etc/rc3.d/S99vcs
[root@tpt01]# mv /etc/back_llthosts /etc/llthosts
[root@tpt01]# mv /etc/back_gabtab /etc/gabtab
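Before starting VCS, confirm the files are back in place:
[root@tpt01]# ls -l /etc/llthosts /etc/gabtab /etc/rc3.d/S99vcs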
Start VCS cluster service:
[root@tpt01]# hastart
Check cluster status:
[root@tpt01]# hastatus -sum | grep -i online
B ClusterService tpt01 Y N ONLINE
B nbu_sg tpt02 Y N ONLINE
[root@tpt01]#
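If hastatus looks odd, GAB membership is worth checking as well. With both nodes joined, the output looks something like the below (the gen numbers are illustrative):
[root@tpt01]# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   a36e0003 membership 01
Port h gen   a36e0006 membership 01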
Update the eeprom values so that the server boots from the secondary disk persistently.
Check the current values:
[root@tpt01]# eeprom | grep -i boot-device
boot-device=/pci@9,600000/SUNW,qlc@2/fp@0,0/disk@w2100001862f7d137,0:a disk net
[root@tpt01]# eeprom | grep -i nvramrc
use-nvramrc?=true
nvramrc=devalias net /pci@9,700000/network@2
[root@tpt01]#
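To double-check which OBP device path belongs to which disk, the /dev/dsk links can be listed; Solaris shows the ssd driver node where OBP shows disk. The WWN below is copied from the alias values used later, and the permission/date fields are trimmed, so confirm the mapping on the host itself:
[root@tpt01]# ls -l /dev/dsk/c1t1d0s0
lrwxrwxrwx ... /dev/dsk/c1t1d0s0 -> ../../devices/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w2100001862f7f113,0:a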
As per the values above, use-nvramrc? is true, which means the OBP evaluates the nvramrc script (and the devaliases it defines) at boot, so boot-device can refer to aliases defined there. To change the eeprom values, here I am creating the boot device aliases and updating boot-device and nvramrc:
[root@tpt01]#eeprom "boot-device=rootdisk rootmirror"
eeprom "nvramrc=devalias rootdisk /pci@9,600000/SUNW,qlc@2/fp@0,0/disk@w2100001862f7f113,0:a devalias rootmirror /pci@9,600000/SUNW,qlc@2/fp@0,0/disk@w2100001862f7d137,0:a"
[root@tpt01]#
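One caveat: setting nvramrc this way replaces its previous contents, so the old net devalias is gone; append it to the string if it is still needed. Verify the new values before the next reboot:
[root@tpt01]# eeprom boot-device
boot-device=rootdisk rootmirror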
