Thursday, February 4, 2016

Replacing a failed disk in a ZFS root pool (rpool)

First, identify the faulted disk by matching the OBP bootpath against the /dev/rdsk device links:

 # prtconf -vp | grep -i bootpath
        bootpath:  '/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/disk@0,0:a'(faulted disk)

 # ls -l /dev/rdsk/c1t0d0s0
lrwxrwxrwx   1 root     root          65 Mar 11  2010 /dev/rdsk/c1t0d0s0 -> ../../devices/pci@10,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw

 # ls -ld /dev/rdsk/c0t0d0s0
lrwxrwxrwx   1 root     root          64 Mar 11  2010 /dev/rdsk/c0t0d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw

The bootpath matches c0t0d0s0, so c0t0d0 is the faulted disk and c1t0d0 is the healthy half of the mirror. Unconfigure the faulted disk so it can be physically pulled:

# cfgadm -c unconfigure c0::dsk/c0t0d0

<Physically remove failed disk c0t0d0>
<Physically insert replacement disk c0t0d0>

Once the replacement is in place, configure it:
# cfgadm -c configure c0::dsk/c0t0d0

Verify the new disk is visible:

 # echo | format | grep -i c0t0d0

Copy the partition table from the surviving disk to the replacement:

 # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2

Note: the new disk must carry an SMI (VTOC) label rather than EFI, since this is a ZFS root pool; if it shipped with an EFI label, relabel it before the fmthard step.
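One way to relabel is format's expert mode (a sketch; the exact menu text varies by Solaris release):

 # format -e c0t0d0
 format> label
 [0] SMI Label
 [1] EFI Label
 Specify Label type[0]: 0
 format> quit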

# zpool replace rpool c0t0d0s0
Make sure to wait until resilver is done before rebooting

# zpool online rpool c0t0d0s0

 # zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Thu Feb  4 12:16:01 2016
    24.5G scanned out of 114G at 56.1M/s, 0h27m to go
    24.5G resilvered, 21.41% done
config:

        NAME                STATE     READ WRITE CKSUM
        rpool               DEGRADED     0     0     0
          mirror-0          DEGRADED     0     0     0
            replacing-0     DEGRADED     0     0     0
              c0t0d0s0/old  FAULTED      0   951     0  too many errors
              c0t0d0s0      ONLINE       0     0     0  (resilvering)
            c1t0d0s0        ONLINE       0     0     0

errors: No known data errors
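
Rather than re-running zpool status by hand, the resilver can be polled with a small shell loop (a minimal sketch):

 # while zpool status rpool | grep 'resilver in progress' >/dev/null; do zpool status rpool | grep done; sleep 60; done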


Once resilvering completes, c0t0d0s0/old is removed from the pool configuration automatically.

# zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scan: resilvered 114G in 0h51m with 0 errors on Thu Feb  4 13:07:55 2016
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0

errors: No known data errors

<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0

Monday, February 1, 2016

Creating a new NFS resource group in Sun Cluster 3.2


1. Make sure the new LUNs are visible and available to be configured.

     # echo | format > format.b4

     # scdidadm -L > scdidadm.b4

     # cfgadm -c configure <controller(s)>

     # devfsadm

     # scdidadm -r

     # scgdevs   (run on one node only)

     # scdidadm -L > scdidadm.after

     # diff scdidadm.b4 scdidadm.after

Note down the new DID devices; these will be used to create the file systems.
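
To map a new DID instance back to its physical paths on each node (d34 is the first of the example devices used below):

     # scdidadm -L | grep d34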

2. Create the new metaset

        # metaset -s sap-set -a -h phys-host1 phys-host2

3. Add disks to metaset

    # metaset -s sap-set -a /dev/did/rdsk/d34

    # metaset -s sap-set -a /dev/did/rdsk/d35

    # metaset -s sap-set -a /dev/did/rdsk/d36

    # metaset -s sap-set -a /dev/did/rdsk/d37
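
Set membership can be verified at any point:

    # metaset -s sap-set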

4. Take ownership of the metaset on phys-host1

          # cldg switch -n phys-host1 sap-set

5. Create new volumes for sap-set

    # metainit -s sap-set d134 -p /dev/did/dsk/d34s0 1g
    # metainit -s sap-set d135 -p /dev/did/dsk/d34s0 all
    # metainit -s sap-set d136 -p /dev/did/dsk/d35s0 all
    # metainit -s sap-set d137 -p /dev/did/dsk/d36s0 all
    # metainit -s sap-set d138 -p /dev/did/dsk/d37s0 all
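
Verify the new volumes and their sizes:

    # metastat -s sap-set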

6. Create new filesystems

    # umask 022
    # newfs /dev/md/sap-set/rdsk/d134
    # newfs /dev/md/sap-set/rdsk/d135
    # newfs /dev/md/sap-set/rdsk/d136
    # newfs /dev/md/sap-set/rdsk/d137
    # newfs /dev/md/sap-set/rdsk/d138

7. Create the new mount points on both nodes.

    # mkdir -p /sap ; chown sap:sap /sap

    # mkdir -p /sapdata/sap11 ; chown sap:sap /sapdata/sap11
    # mkdir -p /sapdata/sap12 ; chown sap:sap /sapdata/sap12
    # mkdir -p /sapdata/sap13 ; chown sap:sap /sapdata/sap13
    # mkdir -p /sapdata/sap14 ; chown sap:sap /sapdata/sap14

8. Edit the /etc/vfstab file on both nodes and add the new file systems. Set the "mount at boot" field to "no"; a sample entry follows.
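
A sample vfstab entry, assuming d135 is the volume intended for /sapdata/sap11 (the mount options are site-specific):

    /dev/md/sap-set/dsk/d135  /dev/md/sap-set/rdsk/d135  /sapdata/sap11  ufs  2  no  logging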

9. Create the Resource group SAP-RG.

      # clrg create -n phys-host1,phys-host2 SAP-RG

10. Create the logical hostname resource. With no -h option, clrslh uses the resource name as the logical hostname, so saplh-rs must resolve in /etc/hosts on both nodes (or pass the hostname explicitly with -h).

      # clrslh create -g SAP-RG saplh-rs

11. Create the HAstoragePlus Resource
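If the SUNW.HAStoragePlus resource type has not been registered on this cluster yet, register it once first:

       # clrt register SUNW.HAStoragePlus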

       # clrs create -t HAStoragePlus -g SAP-RG -p AffinityOn=true -p FilesystemMountPoints="/sap,/sapdata/sap11,/sapdata/sap12,/sapdata/sap13,/sapdata/sap14" sap-data-res

12. Bring the Resource Group Online

    # clrg online -M -n phys-host1 SAP-RG

13. Test the failover of the Resource Group

     # clrg switch -n phys-host2 SAP-RG

14. Failover Back

     # clrg switch -n phys-host1 SAP-RG

15. Create the SUNW.nfs Config Directory on the /sap file system.

     # mkdir -p /sap/nfs/SUNW.nfs

16. Create the dfstab file to share the file systems. The SUNW.nfs agent expects the file to be named dfstab.<resource-name> under <Pathprefix>/SUNW.nfs:

    # vi /sap/nfs/SUNW.nfs/dfstab.sap-nfs-res

Add one share command per file system (these lines are the file's contents, not shell prompts):

    share -F nfs -o rw /sapdata/sap11
    share -F nfs -o rw /sapdata/sap12
    share -F nfs -o rw /sapdata/sap13
    share -F nfs -o rw /sapdata/sap14

17. Offline the SAP-RG resource group.

      # clrg offline SAP-RG

18. Set the Pathprefix property so that NFS knows the path to the cluster dfstab. Note that Pathprefix is a resource-group property, so it is set with clrg, not clrs:

    # clrg set -p Pathprefix=/sap/nfs SAP-RG
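
Verify the property:

    # clrg show -p Pathprefix SAP-RG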

19. Bring the Resource Group online

    # clrg online -n phys-host1 SAP-RG

20. Create the NFS resource in SAP-RG resource group.
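As with HAStoragePlus, the SUNW.nfs resource type must be registered once before the resource can be created:

    # clrt register SUNW.nfs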

    # clrs create -g SAP-RG -t nfs -p Resource_dependencies=sap-data-res sap-nfs-res

21. The resource should now be created and enabled as part of SAP-RG. Verify with:

    # clrs status

22. Check that the server is exporting the file systems:

    # dfshares
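
As a final check, a quick mount from an NFS client outside the cluster confirms end-to-end service (the client mount point /mnt is an assumption, and saplh-rs is the logical hostname created above):

    client# mount -F nfs saplh-rs:/sapdata/sap11 /mnt
    client# df -h /mnt
    client# umount /mnt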