Thursday, February 4, 2016

How to replace a disk in a ZFS rpool


 # prtconf -vp |grep -i bootpath
        bootpath:  '/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/disk@0,0:a'(faulted disk)

 # ls -l /dev/rdsk/c1t0d0s0
lrwxrwxrwx   1 root     root          65 Mar 11  2010 /dev/rdsk/c1t0d0s0 -> ../../devices/pci@10,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw

 # ls -ld /dev/rdsk/c0t0d0s0
lrwxrwxrwx   1 root     root          64 Mar 11  2010 /dev/rdsk/c0t0d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw


# cfgadm -c unconfigure c0::dsk/c0t0d0

<Physically remove failed disk c0t0d0>
<Physically insert replacement disk c0t0d0>

Once the replacement is done, configure the disk:
# cfgadm -c configure c0::dsk/c0t0d0

# echo | format | grep -i c0t0d0

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2

Label the new disk with an SMI label, as it is a ZFS rpool (rpool disks must use an SMI label, not EFI).

# zpool replace rpool c0t0d0s0
Make sure to wait until the resilver is done before rebooting.
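
A minimal way to wait from the shell, assuming the default "resilver in progress" wording that zpool status prints (shown in the output below):

# while zpool status rpool | grep -q 'resilver in progress'; do sleep 60; done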

# zpool online rpool c0t0d0s0

 # zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Thu Feb  4 12:16:01 2016
    24.5G scanned out of 114G at 56.1M/s, 0h27m to go
    24.5G resilvered, 21.41% done
config:

        NAME                STATE     READ WRITE CKSUM
        rpool               DEGRADED     0     0     0
          mirror-0          DEGRADED     0     0     0
            replacing-0     DEGRADED     0     0     0
              c0t0d0s0/old  FAULTED      0   951     0  too many errors
              c0t0d0s0      ONLINE       0     0     0  (resilvering)
            c1t0d0s0        ONLINE       0     0     0

errors: No known data errors


Once resilvering is done, c0t0d0s0/old will be removed automatically.

# zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scan: resilvered 114G in 0h51m with 0 errors on Thu Feb  4 13:07:55 2016
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0

errors: No known data errors

<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0
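
On x86 systems the equivalent step installs the GRUB boot blocks with installgrub instead of installboot:
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0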

Monday, February 1, 2016

Creating a new NFS resource group in Sun Cluster 3.2


1. Make sure the new LUNs are visible and available to be configured.

     # echo | format > format.b4

     # scdidadm -L > scdidadm.b4

     # cfgadm -c configure <controller(s)>

     # devfsadm

     # scdidadm -r

     # scgdevs ( on one node )

     # scdidadm -L > scdidadm.after

     # diff scdidadm.b4 scdidadm.after

Note down the new DID devices; these will be used to create the file systems.
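
The new DIDs are the lines present only in the "after" file, for example:

     # diff scdidadm.b4 scdidadm.after | grep '^>'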

2. Create the new metaset

        # metaset -s sap-set -a -h phys-host1 phys-host2

3. Add disks to metaset

    # metaset -s sap-set -a /dev/did/rdsk/d34

    # metaset -s sap-set -a /dev/did/rdsk/d35

    # metaset -s sap-set -a /dev/did/rdsk/d36

    # metaset -s sap-set -a /dev/did/rdsk/d37

4.  Take ownership of metaset on phys-host1

          # cldg switch -n phys-host1 sap-set

5. Create new volumes for sap-set

    # metainit -s sap-set d134 -p /dev/did/dsk/d34s0 1g
    # metainit -s sap-set d135 -p /dev/did/dsk/d34s0 all
    # metainit -s sap-set d136 -p /dev/did/dsk/d35s0 all
    # metainit -s sap-set d137 -p /dev/did/dsk/d36s0 all
    # metainit -s sap-set d138 -p /dev/did/dsk/d37s0 all

6. Create new filesystems

    # umask 022
    # newfs /dev/md/sap-set/rdsk/d134
    # newfs /dev/md/sap-set/rdsk/d135
    # newfs /dev/md/sap-set/rdsk/d136
    # newfs /dev/md/sap-set/rdsk/d137
    # newfs /dev/md/sap-set/rdsk/d138

7. Create the new mount points on both nodes.

    #mkdir -p /sap; chown sap:sap /sap

    #mkdir -p /sapdata/sap11 ; chown sap:sap /sapdata/sap11
    #mkdir -p /sapdata/sap12 ; chown sap:sap /sapdata/sap12
    #mkdir -p /sapdata/sap13 ; chown sap:sap /sapdata/sap13
    #mkdir -p /sapdata/sap14 ; chown sap:sap /sapdata/sap14

8. Edit the /etc/vfstab file on each node and add the new file systems. Set the mount-at-boot option to "no".
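
An illustrative vfstab entry for the d134 file system created above (the logging mount option is an assumption, not from the original steps):

    /dev/md/sap-set/dsk/d134  /dev/md/sap-set/rdsk/d134  /sap  ufs  2  no  logging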

9. Create the Resource group SAP-RG.

      # clrg create -n phys-host1 phys-host2 SAP-RG

10. Create logical hostname resource.

      # clrslh create -g SAP-RG saplh-rs

11. Create the HAstoragePlus Resource

       # clrs create -t HAStoragePlus -g SAP-RG -p AffinityOn=true -p FileSystemMountPoints="/sap,/sapdata/sap11,/sapdata/sap12,/sapdata/sap13,/sapdata/sap14" sap-data-res

12. Bring the Resource Group Online

    # clrg online -M phys-host1 SAP-RG

13. Test the failover of the Resource Group

     # clrg switch -n phys-host2 SAP-RG

14. Failover Back

     # clrg switch -n phys-host1 SAP-RG

15. Create the SUNW.nfs Config Directory on the /sap file system.

     # mkdir -p /sap/nfs/SUNW.nfs

16. Create the dfstab file to share the file systems. The share lines below are the contents of the file, not shell commands:

    # vi /sap/nfs/SUNW.nfs/dfstab-sap-nfs-res

    share -F nfs -o rw /sapdata/sap11
    share -F nfs -o rw /sapdata/sap12
    share -F nfs -o rw /sapdata/sap13
    share -F nfs -o rw /sapdata/sap14

17. Offline the SAP-RG resource group.

      # clrg offline SAP-RG

18. Modify the Pathprefix property of the resource group so that NFS knows the path to the cluster dfstab:

    # clrg set -p Pathprefix=/sap/nfs SAP-RG

19. Bring the Resource Group online

    # clrg online -n phys-host1 SAP-RG

20. Create the NFS resource in the SAP-RG resource group (register the SUNW.nfs resource type first with 'clrt register SUNW.nfs' if it is not already registered):

    # clrs create -g SAP-RG -t nfs -p Resource_dependencies=sap-data-res sap-nfs-res

21. The resource should now be created and enabled as part of SAP-RG:

    # clrs status

22. Check that the server is exporting the file systems:

    # dfshares
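
From an NFS client, the share can be verified end to end by mounting it through the logical hostname (here assumed to be saplh-rs, since step 10 created the resource without an explicit -h hostname list):

     # mount -F nfs saplh-rs:/sapdata/sap11 /mnt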

Saturday, January 16, 2016

Sun Cluster: Purging Quorum Keys

CAUTION: Purging the keys from a quorum device may result in amnesia. This should only be done after careful diagnostics have verified why the cluster is not coming up, and never while the cluster is still able to come up on its own. It may be needed if the last node to leave the cluster is unable to boot, leaving everyone else fenced out. In that case, boot one of the other nodes to single-user mode, identify the quorum device, and:

For SCSI-2 disk reservations, the relevant command is pgre, which is located in /usr/cluster/lib/sc:

pgre -c pgre_inkeys -d /dev/did/rdsk/d#s2 (List the keys on the quorum device.)
pgre -c pgre_scrub -d /dev/did/rdsk/d#s2 (Remove the keys from the quorum device.)

Similarly, for SCSI-3 disk reservations, the relevant command is scsi, in the same directory:
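
The SCSI-3 equivalents, using the same subcommands that appear in the reservation-clearing steps in the next section:

scsi -c inkeys -d /dev/did/rdsk/d#s2 (List the SCSI-3 keys on the quorum device.)
scsi -c scrub -d /dev/did/rdsk/d#s2 (Remove the SCSI-3 keys from the quorum device.)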

Sun Cluster 3.2 & SCSI Reservation Issues


If you have worked with LUNs and Sun Cluster 3.2, you may have discovered that removing a LUN from a system is not always possible, because of the SCSI-3 reservation that Sun Cluster places on the disks. The example scenario below walks you through how to overcome this issue and proceed as though Sun Cluster is not even installed.

Example: We had a 100GB LUN off of a Hitachi disk array that we were using in a metaset controlled by Sun Cluster. We had removed the resource from the Sun Cluster configuration and removed the device with cfgadm/devfsadm. However, when the storage admin attempted to remove the LUN ID from the Hitachi array zone, the array indicated the LUN was still in use. From the Solaris server side it did not appear to be in use, but Sun Cluster had set SCSI-3 reservations on the disk.
Clearing the Sun Cluster SCSI reservation:
1. Determine which DID device the LUN is mapped to: /usr/cluster/bin/scdidadm -L
2. Disable failfast on the DID device: /usr/cluster/lib/sc/scsi -c disfailfast -d /dev/did/rdsk/DID
3. Release the DID device: /usr/cluster/lib/sc/scsi -c release -d /dev/did/rdsk/DID
4. Scrub the reserve keys from the DID device: /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/DID
5. Confirm the reserve keys are removed: /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/DID
6. Remove the LUN from the zone on the array, or continue with whatever procedure you were trying to complete.

How to recover from an amnesia situation


Amnesia Scenario:
 Node node-1 is shut down.
Node node-2 crashes and will not boot due to hardware failure.
Node node-1 is rebooted but stops and prints out the messages: 
Booting as part of a cluster
    NOTICE: CMM: Node node-1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node node-2 (nodeid = 2) with votecount = 1 added.
    NOTICE: CMM: Quorum device 1 (/dev/did/rdsk/d4s2) added; votecount = 1, bitmask of nodes with configured paths = 0x3.
    NOTICE: CMM: Node node-1: attempting to join cluster.
  
 NOTICE: CMM: Quorum device 1 (gdevname /dev/did/rdsk/d4s2) can not be acquired by the current cluster members. This quorum device is held by node 2.
NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
Node node-1 cannot boot completely because it cannot achieve the needed quorum vote count.
 In the above case, node node-1 cannot start the cluster due to the amnesia protection of Oracle Solaris Cluster. Since node node-1 was not a member of the cluster when it was shut down (when node-2 crashed) there is a possibility it has an outdated CCR and should not be allowed to automatically start up the cluster on its own.
The general rule is that a node can only start the cluster if it was part of the cluster when the cluster was last shut down. In a multi node cluster it is possible for more than one node to become "the last" leaving the cluster. 
How to recover Sun Cluster 3.3 from amnesia if it has only one operational node:
When we stop all nodes in Sun Cluster, the last node that leaves the cluster must be the first to boot, for CCR consistency. However, if for any reason the last node that left the cluster cannot boot (hardware failure, etc.), the other nodes in the cluster will not boot either, and this message will appear:
Jul 15 11:05:19 maquina01 cl_runtime: [ID 980942 kern.notice]
 NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting
 for quorum.
This is normal behavior that occurs to prevent what Sun Cluster calls "amnesia" (see the documentation for details). To start the cluster while the faulty node is being repaired, make the following changes:
Boot the node outside of the cluster:
# reboot -- -x
Edit the /etc/cluster/ccr/global/infrastructure file and change quorum_vote to 1 for the node that is up:
# cd /etc/cluster/ccr/global/
# vi infrastructure
  cluster.nodes.1.name   NODE1
  cluster.nodes.1.state  enable
  cluster.nodes.1.properties.quorum_vote  1
For all other nodes and any Quorum Device, set the votecount to zero (0). For example:
cluster.nodes.N.properties.quorum_vote  0
cluster.quorum_devices.Q.properties.votecount  0
Where N is the node id and Q is the quorum device id.
Regenerate the checksum of the infrastructure file:
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/global/infrastructure -o
Reboot node NODE1 into the cluster:
# reboot

Sun Cluster commands


Some resource group cluster commands are:

* clrt register resource-type : Register a resource type.
* clrt register -n node1name,node2name resource-type : Register a resource type on specific nodes.
* clrt unregister resource-type : Unregister a resource type.
* clrt list -v : List all resource types and their associated node lists.
* clrt show resource-type : Display all information for a resource type.
* clrg create -n node1name,node2name rgname : Create a resource group.
* clrg delete rgname : Delete a resource group.
* clrg set -p property-name rgname : Set a property.
* clrg show -v rgname : Show resource group information.
* clrs create -t HAStoragePlus -g rgname -p AffinityOn=true -p FileSystemMountPoints=/mountpoint resource-name : Create an HAStoragePlus resource.
* clrg online -M rgname : Bring a resource group online in a managed state.
* clrg switch -M -n nodename rgname : Switch a resource group to another node.
* clrg offline rgname : Offline the resource group, but leave it in a managed state.
* clrg restart rgname : Restart a resource group.
* clrs disable resource-name : Disable a resource and its fault monitor.
* clrs enable resource-name : Re-enable a resource and its fault monitor.
* clrs clear -n nodename -f STOP_FAILED resource-name : Clear a STOP_FAILED error flag on a resource.
* clrs unmonitor resource-name : Disable the fault monitor, but leave the resource running.
* clrs monitor resource-name : Re-enable the fault monitor for a resource that is currently enabled.
* clrg suspend rgname : Preserve the online status of the group, but stop monitoring.
* clrg resume rgname : Resume monitoring of a suspended group.
* clrg status : List the status of resource groups.
* clrs status -g rgname : List the status of resources in a group.

How to add a file system to an existing HAStoragePlus resource in Sun Cluster









When you add a local or global file system to a HAStoragePlus resource, the resource automatically mounts the file system.

In the /etc/vfstab file on each node of the cluster, add an entry for the mount point of each file system that you are adding:
* For local file systems: set the mount-at-boot field to "no" and remove the global flag.
* For cluster file systems: if the file system is a global file system, set the mount options field to contain the global option.

Retrieve the list of mount points for the file systems that the HAStoragePlus resource already manages:

scha_resource_get -O extension -R hasp-resource -G hasp-rg FileSystemMountPoints


Modify the FileSystemMountPoints extension property of the HAStoragePlus resource to contain the new, full set of mount points:

clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource

mount-point-list is a comma-separated list of the mount points that the HAStoragePlus resource already manages plus the mount points of the file systems that you are adding.
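
For example, if the resource already manages /oradata1 and /oradata2 and you are adding /oradata3 (hypothetical mount points, purely for illustration):

clresource set -p FileSystemMountPoints="/oradata1,/oradata2,/oradata3" hasp-resource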








How to add a new zpool into Sun Cluster with HAStoragePlus


# cldev list -v d5
DID Device          Full Device Path
----------          ----------------
d5 nodo2:/dev/rdsk/c2t600144F04CC554A100000C29A5674000d0
d5 nodo1:/dev/rdsk/c2t600144F04CC554A100000C29A5674000d0
   
Creation of zpool zonepool:
# zpool create zonepool c2t600144F04CC554A100000C29A5674000d0

I want to see some flags before clustering. One of them is cachefile:
# zpool get all zonepool | grep cachefile
NAME     PROPERTY   VALUE  SOURCE
zonepool cachefile  -      default

I change the dataset's mountpoint:
# zfs set mountpoint=/zone1 zonepool

Creation of resource group zone-rg:
# clrg create zone-rg

Creation of resource zone-hast:
# clrs create -g zone-rg -t HAStoragePlus -p Zpools=zonepool zone-hast

After creating the resource, I check the cachefile flag again and can clearly see that it changed to:
cachefile /var/cluster/run/HAStoragePlus/zfs/zonepool.cachefile local

Finally, the resource group is online and managed :)

# clrg online -M zone-rg
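
A quick verification and switchover test (nodo1/nodo2 are the node names from the cldev listing above; the switch follows the same pattern used elsewhere in these notes):

# clrs status zone-hast
# clrg switch -n nodo2 zone-rg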

Fixing Solaris Cluster device ID (DID) mismatches


I had to replace a disk in one of my cluster nodes, and was greeted with the following message once the disk was swapped and I checked the devices for consistency:
$ cldevice check
cldevice:  (C894318) Device ID "snode2:/dev/rdsk/c1t0d0" does not match physical device ID for "d5".
Warning: Device "snode2:/dev/rdsk/c1t0d0" might have been replaced.


To fix this issue, I used the cldevice utility's repair option:
$ cldevice repair
Updating shared devices on node 1
Updating shared devices on node 2


Once the repair operation updated the devids, cldevice ran cleanly:
$ cldevice check

How to modify a logical hostname in Sun Cluster


Offline the resource group:
clrg offline apache-rg

Disable the Apache logical hostname resource:
clrs disable apache-lh-res

Provide the new hostname list (don't forget to change it in /etc/hosts as well):
clrs set -p HostnameList=test-2 apache-lh-res

clrs enable apache-lh-res
clrg online apache-rg
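
An illustrative /etc/hosts entry for the new name on both nodes (the address is a placeholder, not from the original post):

10.0.0.42   test-2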

How to add a new file system into the cluster




# /usr/sbin/metainit -s oraset d200 1 1 /dev/did/dsk/d5s0      (first submirror)
# /usr/sbin/metainit -s oraset d300 1 1 /dev/did/dsk/d4s0      (second submirror)
# /usr/sbin/metainit -s oraset d100 -m d200                    (create mirror d100 with d200)
# /usr/sbin/metattach -s oraset d100 d300                      (attach d300 to the mirror)

# metainit -s sap-set d134 -p /dev/did/dsk/d34s0 1g


 newfs /dev/md/sap-set/rdsk/d134


Create the new mount points on both nodes.


Edit the /etc/vfstab file and add the new file systems. Set the mount-at-boot option to "no".


1) ./scgdevs

2) ./scdidadm -L -h | grep -i <LUN name>




nlnehvdcs1sx895:a537069> ./cldev list                 (check the last DID in use)

nlnehvdcs1sx895:a537069> ./cldev show -v              (check the device list)

nlnehvdcs1sx895:a537069> ./scgdevs                    (configures new DIDs for the newly added disks)

nlnehvdcs1sx895:a537069> ./scdidadm -r                (reconfigures the DID device numbering)

nlnehvdcs1sx895:a537069> ./scdidadm -L -h             (shows clear details of the disks)

nlnehvdcs1sx895:a537069> ./scdidadm -L -h |grep -i c4t60000970000292602677533033453644d0
30       nlnehvdcs1sx895:/dev/rdsk/c4t60000970000292602677533033453644d0 /dev/did/rdsk/d30
30       nlnehvdcs1sx896:/dev/rdsk/c4t60000970000292602677533033453644d0 /dev/did/rdsk/d30
nlnehvdcs1sx895:a537069> ./scdidadm -L -h |grep -i c4t60000970000292602677533033453731d0
29       nlnehvdcs1sx895:/dev/rdsk/c4t60000970000292602677533033453731d0 /dev/did/rdsk/d29
29       nlnehvdcs1sx896:/dev/rdsk/c4t60000970000292602677533033453731d0 /dev/did/rdsk/d29
nlnehvdcs1sx895:a537069>


metaset -s nlnehvdcs1cl897-prod-ds01 -a /dev/did/rdsk/d64

/usr/sbin/metainit -s nlnehvdcs1cl897-prod-ds01 d107 1 1 /dev/did/dsk/d64s0

/usr/sbin/metattach -s nlnehvdcs1cl897-prod-ds01 d100 d107

metastat -s nlnehvdcs1cl897-prod-ds01 -p


=================================================================================================================================

Adding new file systems into the cluster








metaset -s nlnehvdcs1cl897-prod-ds01 -a /dev/did/rdsk/d62   /dev/did/rdsk/d63

# /usr/sbin/metainit -s nlnehvdcs1cl897-prod-ds01  d801 1 1 /dev/did/dsk/d62s0
# /usr/sbin/metainit -s nlnehvdcs1cl897-prod-ds01 d802 1 1 /dev/did/dsk/d63s0
# /usr/sbin/metainit -s nlnehvdcs1cl897-prod-ds01 d800 -m  d801

/usr/sbin/metattach -s nlnehvdcs1cl897-prod-ds01 d800 d802

newfs /dev/md/nlnehvdcs1cl897-prod-ds01/rdsk/d800

mount /dev/md/nlnehvdcs1cl897-prod-ds01/dsk/d800 /mnt


=========================================================================================================================
nlnehvdcs1sx896:a537069> echo |format |more
Searching for disks...done

c4t60000970000292602650533032463135d0: configured with capacity of 350.03GB
c4t60000970000292602762533032433435d0: configured with capacity of 360.03GB
c4t60000970000292602762533032414635d0: configured with capacity of 350.03GB

nlnehvdcs1sx895:a537069> echo |format |more
Searching for disks...done

c4t60000970000292602650533032463135d0: configured with capacity of 350.03GB
c4t60000970000292602762533032433435d0: configured with capacity of 360.03GB
c4t60000970000292602762533032414635d0: configured with capacity of 350.03GB

350GB LUN

nlnehvdcs1sx895:a537069> ./scdidadm -r
nlnehvdcs1sx895:a537069> ./scdidadm -L -h |grep -i c4t60000970000292602650533032463135d0
62       nlnehvdcs1sx896:/dev/rdsk/c4t60000970000292602650533032463135d0 /dev/did/rdsk/d62
62       nlnehvdcs1sx895:/dev/rdsk/c4t60000970000292602650533032463135d0 /dev/did/rdsk/d62

360GB LUN

nlnehvdcs1sx895:a537069> ./scdidadm -L -h |grep -i  c4t60000970000292602762533032433435d0
64       nlnehvdcs1sx896:/dev/rdsk/c4t60000970000292602762533032433435d0 /dev/did/rdsk/d64
64       nlnehvdcs1sx895:/dev/rdsk/c4t60000970000292602762533032433435d0 /dev/did/rdsk/d64

350GB LUN

nlnehvdcs1sx895:a537069> ./scdidadm -L -h |grep -i c4t60000970000292602762533032414635d0
63       nlnehvdcs1sx896:/dev/rdsk/c4t60000970000292602762533032414635d0 /dev/did/rdsk/d63
63       nlnehvdcs1sx895:/dev/rdsk/c4t60000970000292602762533032414635d0 /dev/did/rdsk/d63
nlnehvdcs1sx895:a537069>


metaset -s nlnehvdcs1cl897-prod-ds01

====================================================================================================================
ON 896

/volumes/v0/fra         87G    65G    21G    76%    /volumes/fra
/dev/md/nlnehvdcs1cl897-prod-ds01/dsk/d800
                       345G   130G   211G    39%    /volumes/franew


umount /volumes/fra        (this is an LOFS file system)
umount /volumes/franew     (this needs to be remounted as /volumes/fra)

mount /dev/md/nlnehvdcs1cl897-prod-ds01/dsk/d800 /volumes/fra

On 895

umount /volumes/fra        (this needs to be remounted as /volumes/fraold)
mount /dev/md/dsk/d160 /volumes/fraold

Now adding the file system into the cluster:

/volumes/prod-ora01, /volumes/prod-ora02, /volumes/prod-ora03 and /volumes/prod-ora04 are already in the cluster.

Resource: nlnehvdcs1cl897-prod-has
Group: nlnehvdcs1cl897-prod-rg


./clresource set -p FilesystemMountPoints="/volumes/prod-ora01,/volumes/prod-ora02,/volumes/prod-ora03,/volumes/prod-ora04,/volumes/fra" nlnehvdcs1cl897-prod-has

./clrs show -v             (check here whether the file system has been added into the cluster)



nlnehvdcs1sx896:a537069> ./clrg status

=== Cluster Resource Groups ===

Group Name                Node Name         Suspended   Status
----------                ---------         ---------   ------
nlnehvdcs1cl897-prod-rg   nlnehvdcs1sx895   Yes         Offline
                          nlnehvdcs1sx896   Yes         Online

nlnehvdcs1cl89b-qa-rg     nlnehvdcs1sx896   Yes         Offline

nlnehvdcs1sx896:a537069>



./clrg resume nlnehvdcs1cl897-prod-rg

./clrg resume  nlnehvdcs1cl89b-qa-rg


nlnehvdcs1sx896:a537069> ./clrg status             (check the status; the Suspended column should now show No)

Switching resource groups from 896 to 895:

nlnehvdcs1sx896:a537069> ./clrg switch -n nlnehvdcs1sx895 nlnehvdcs1cl897-prod-rg

nlnehvdcs1sx896:a537069> ./clrg online -n nlnehvdcs1sx896 nlnehvdcs1cl89b-qa-rg



nlnehvdcs1sx896:a537069> ./clrg status



ufsdump and rsync usage during migration


ufsdump 0uf - /volumes/app_prod5 | ( cd /volumes/Napp_prod5; ufsrestore -xf - )    (0 = full level-0 dump, u = update /etc/dumpdates, f - = write to stdout)

Incremental (level 1, copies files changed since the last full dump):

ufsdump 1uf - /volumes/app_prod5 | ( cd /volumes/Napp_prod5;ufsrestore -xf - )

===========================================================


ufsdump 0uf - /volumes/app_prod8 | ( cd /volumes/Napp_prod8;ufsrestore -xf - )


/users/a537069> cd /volumes/Napp_prod5
/volumes/Napp_prod5> ls -lrth
total 8
drwx------   2 root     root         512 Feb 10  2005 lost+found
drwxr-x---   4 hrn      hrnusers     512 Jan 13  2009 hrn
drwxr-x---   3 sag      sag          512 Nov 16  2012 sag
drwxr-xr-x   2 root     other        512 Aug 21 07:34 test
/volumes/Napp_prod5>


/volumes/Napp_prod5> cd /volumes/app_prod5
/volumes/app_prod5> ls -lrth
total 26
drwx------   2 root     root        8.0K Feb 10  2005 lost+found
drwxr-x---   4 hrn      hrnusers     512 Jan 13  2009 hrn
drwxr-x---   3 sag      sag         2.5K Nov 16  2012 sag
drwxr-xr-x   2 root     other        512 Aug 21 07:34 test
/volumes/app_prod5>


============================================================================

rsync -arvHX /volumes/fra/* nlnehvdcs1sx896:/volumes/franew/    (a = archive, v = verbose, H = preserve hard links, X = preserve extended attributes)


rsync -arvH /volumes/app_prod6/* /volumes/Napp_prod6


root@cor-9008app01 # rsync -arvH /volumes/app_prod6/* /volumes/Napp_prod6/
building file list ... done
APP_PROD6
DBHRN
db001/
db001/ASSO1.001
db001/DATA1.001
db001/WORK1.001
db250/
db250/ASSO1.250
db250/DATA1.250
db250/DATA2.250
db250/SORT1.250
db250/TEMP1.250
db250/WORK1.250
lost+found/
wrote 13064456240 bytes read 196 bytes 28999903.30 bytes/sec
total size is 13062860800 speedup is 1.00
root@cor-9008app01 #



How to set ACLs in ZFS


Example ACL entries, as displayed by ls -v:

owner@

    The owner is denied execute permissions on the file (x=execute).
owner@

    The owner can read and modify the contents of the file (rw=read_data/write_data, p=append_data). The owner can also modify the file's attributes such as time stamps, extended attributes, and ACLs (A=write_xattr, W=write_attributes, and C=write_acl). In addition, the owner can modify the ownership of the file (o=write_owner).
group@

    The group is denied modify and execute permissions on the file (write_data, p=append_data, and x=execute).



Please grant the following filesystem / directory permissions for the 'icinga' user:


Read Only permission to:
/usd
/usd_fs


Execute permission to:
/usd/site/mods/interp

Check the aclmode and aclinherit properties and set them to passthrough:


zfs set aclinherit=passthrough <dataset>      (zfs set takes the name of the dataset mounted at /usd, not the mount point)

zfs set aclinherit=passthrough <dataset>      (likewise for the dataset mounted at /usd_fs)

chmod A+user:icinga:r:fd:allow /usd           (r = read_data; f/d = inherit to new files and directories)




=====================================================================================================

chmod A+user:icinga:r:fd:allow /usd


chmod A+user:icinga:r:fd:allow /usd_fs


chmod A+user:icinga:rwx:fd:allow /usd/site/mods/interp


ls -Vd /usd

ls -Vd /usd/site/mods/interp



nlxusd02prp:root>ls -Vd /usd
drwxr-xr-x+ 44 SrvcPlus root          66 Nov 24 22:00 /usd
       user:icinga:r-------------:fd----:allow
            owner@:rwxp-DaARWcCos:------:allow
            group@:r-x---a-R-c--s:------:allow
         everyone@:r-x---a-R-c--s:------:allow
nlxusd02prp:root>ls -Vd /usd/site/mods/interp
drwxrwxr-x+  2 SrvcPlus root          61 Nov 13 13:18 /usd/site/mods/interp
       user:icinga:rwx-----------:fd----:allow
            owner@:rwxp-DaARWcCos:------:allow
            group@:rwxp-DaARWc--s:------:allow
         everyone@:r-x---a-R-c--s:------:allow
nlxusd02prp:root>



nlxusd02cat:root>cat /etc/passwd |grep -i icinga
icinga:x:31570614:3801:C4238380 - icinga:/export/home/icinga:/usr/bin/bash
nlxusd02cat:root>chmod A+user:icinga:r:fd:allow /usd
nlxusd02cat:root>chmod A+user:icinga:r:fd:allow /usd_fs
nlxusd02cat:root>chmod A+user:icinga:rwx:fd:allow /usd/site/mods/interp
nlxusd02cat:root>ls -Vd /usd
drwxr-xr-x+ 42 SrvcPlus root          67 Nov 25 11:43 /usd
       user:icinga:r-------------:fd----:allow
            owner@:rwxp-DaARWcCos:------:allow
            group@:r-x---a-R-c--s:------:allow
         everyone@:r-x---a-R-c--s:------:allow
nlxusd02cat:root>ls -Vd /usd/site/mods/interp
drwxrwxr-x+  2 SrvcPlus root          53 Nov 12 16:45 /usd/site/mods/interp
       user:icinga:rwx-----------:fd----:allow
            owner@:rwxp-DaARWcCos:------:allow
            group@:rwxp-DaARWc--s:------:allow
         everyone@:r-x---a-R-c--s:------:allow
nlxusd02cat:root>

==============================================================================================

How to unconfigure LUNs from a metaset


/users/a537069> powermt display dev=all
Pseudo name=emcpower49a
Symmetrix ID=000292603635
Logical device ID=0962
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd16s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d16s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d16s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d16s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower34a
Symmetrix ID=000292603635
Logical device ID=0FEC
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd1s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d1s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d1s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d1s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower33a
Symmetrix ID=000292603635
Logical device ID=0FF0
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd2s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d2s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d2s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d2s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower36a
Symmetrix ID=000292603635
Logical device ID=0FF8
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd3s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d3s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d3s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d3s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower35a
Symmetrix ID=000292603635
Logical device ID=0FFC
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd4s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d4s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d4s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d4s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower37a
Symmetrix ID=000292603635
Logical device ID=1000
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd5s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d5s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d5s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d5s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower39a
Symmetrix ID=000292603635
Logical device ID=1008
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd6s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d6s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d6s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d6s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower38a
Symmetrix ID=000292603635
Logical device ID=1010
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd7s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d7s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d7s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d7s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower40a
Symmetrix ID=000292603635
Logical device ID=1018
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD5Cd8s0 FA  8fA   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838CD64d8s0 FA 10fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD58d8s0 FA  7fA   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838CD60d8s0 FA  9fA   active  alive       0      0

Pseudo name=emcpower32a
Symmetrix ID=000292603637
Logical device ID=14D9
state=dead; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 UNKNOWN                  unknown   FA  7fB   active  dead        0      0
3073 UNKNOWN                  unknown   FA  9fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA  8fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA 10fB   active  dead        0      0

Pseudo name=emcpower26a
Symmetrix ID=000292603637
Logical device ID=14DD
state=dead; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 UNKNOWN                  unknown   FA  7fB   active  dead        0      0
3073 UNKNOWN                  unknown   FA  9fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA  8fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA 10fB   active  dead        0      0

Pseudo name=emcpower27a
Symmetrix ID=000292603637
Logical device ID=14E1
state=dead; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 UNKNOWN                  unknown   FA  7fB   active  dead        0      0
3073 UNKNOWN                  unknown   FA  9fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA  8fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA 10fB   active  dead        0      0

Pseudo name=emcpower29a
Symmetrix ID=000292603637
Logical device ID=14E5
state=dead; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 UNKNOWN                  unknown   FA  7fB   active  dead        0      0
3073 UNKNOWN                  unknown   FA  9fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA  8fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA 10fB   active  dead        0      0

Pseudo name=emcpower28a
Symmetrix ID=000292603637
Logical device ID=14ED
state=dead; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 UNKNOWN                  unknown   FA  7fB   active  dead        0      0
3073 UNKNOWN                  unknown   FA  9fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA  8fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA 10fB   active  dead        0      0

Pseudo name=emcpower31a
Symmetrix ID=000292603637
Logical device ID=14F5
state=dead; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 UNKNOWN                  unknown   FA  7fB   active  dead        0      0
3073 UNKNOWN                  unknown   FA  9fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA  8fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA 10fB   active  dead        0      0

Pseudo name=emcpower30a
Symmetrix ID=000292603637
Logical device ID=14FD
state=dead; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 UNKNOWN                  unknown   FA  7fB   active  dead        0      0
3073 UNKNOWN                  unknown   FA  9fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA  8fB   active  dead        0      0
3072 UNKNOWN                  unknown   FA 10fB   active  dead        0      0

Pseudo name=emcpower48a
Symmetrix ID=000292603643
Logical device ID=1430
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838ED9Dd8s0 FA  8gB   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838EDA5d8s0 FA 10gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838ED99d8s0 FA  7gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838EDA1d8s0 FA  9gB   active  alive       0      0

Pseudo name=emcpower47a
Symmetrix ID=000292603643
Logical device ID=32E8
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838ED9Dd1s0 FA  8gB   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838EDA5d1s0 FA 10gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838ED99d1s0 FA  7gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838EDA1d1s0 FA  9gB   active  alive       0      0

Pseudo name=emcpower46a
Symmetrix ID=000292603643
Logical device ID=32EC
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838ED9Dd2s0 FA  8gB   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838EDA5d2s0 FA 10gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838ED99d2s0 FA  7gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838EDA1d2s0 FA  9gB   active  alive       0      0

Pseudo name=emcpower45a
Symmetrix ID=000292603643
Logical device ID=32F0
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838ED9Dd3s0 FA  8gB   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838EDA5d3s0 FA 10gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838ED99d3s0 FA  7gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838EDA1d3s0 FA  9gB   active  alive       0      0

Pseudo name=emcpower44a
Symmetrix ID=000292603643
Logical device ID=32F4
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838ED9Dd4s0 FA  8gB   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838EDA5d4s0 FA 10gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838ED99d4s0 FA  7gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838EDA1d4s0 FA  9gB   active  alive       0      0

Pseudo name=emcpower43a
Symmetrix ID=000292603643
Logical device ID=32FC
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838ED9Dd5s0 FA  8gB   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838EDA5d5s0 FA 10gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838ED99d5s0 FA  7gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838EDA1d5s0 FA  9gB   active  alive       0      0

Pseudo name=emcpower42a
Symmetrix ID=000292603643
Logical device ID=3304
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838ED9Dd6s0 FA  8gB   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838EDA5d6s0 FA 10gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838ED99d6s0 FA  7gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838EDA1d6s0 FA  9gB   active  alive       0      0

Pseudo name=emcpower41a
Symmetrix ID=000292603643
Logical device ID=330C
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838ED9Dd7s0 FA  8gB   active  alive       0      0
3072 pci@1d,700000/SUNW,qlc@1/fp@0,0 c3t500009740838EDA5d7s0 FA 10gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838ED99d7s0 FA  7gB   active  alive       0      0
3073 pci@1d,700000/SUNW,qlc@2/fp@0,0 c4t500009740838EDA1d7s0 FA  9gB   active  alive       0      0



/users/a537069>






===================================================================================================================

prod/d109: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 93142892 blocks (44 GB)
        Extent              Start Block              Block count
             0               1038090368                 93142892

prod/d100: Mirror
    Submirror 1: prod/d120
      State: Okay
    Submirror 2: prod/d111
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 1325644800 blocks (632 GB)

prod/d120: Submirror of prod/d100
    State: Okay
    Size: 1405100160 blocks (670 GB)
    Stripe 0:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838EDA5d1s0          0     No            Okay   Yes
    Stripe 1:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838EDA5d2s0          0     No            Okay   Yes
    Stripe 2:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838EDA5d3s0          0     No            Okay   Yes
    Stripe 3:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838EDA5d4s0          0     No            Okay   Yes
    Stripe 4:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838EDA5d5s0          0     No            Okay   Yes
    Stripe 5:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838EDA5d6s0          0     No            Okay   Yes
    Stripe 6:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838EDA5d7s0          0     No            Okay   Yes


prod/d111: Submirror of prod/d100
    State: Okay
    Size: 1405100160 blocks (670 GB)
    Stripe 0:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838CD5Cd2s0          0     No            Okay   Yes
    Stripe 1:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838CD5Cd3s0          0     No            Okay   Yes
    Stripe 2:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838CD5Cd4s0          0     No            Okay   Yes
    Stripe 3:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838CD5Cd5s0          0     No            Okay   Yes
    Stripe 4:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838CD5Cd6s0          0     No            Okay   Yes
    Stripe 5:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838CD5Cd7s0          0     No            Okay   Yes
    Stripe 6:
        Device                    Start Block  Dbase        State Reloc Hot Spare
        c3t500009740838CD5Cd8s0          0     No            Okay   Yes


prod/d106: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 31457280 blocks (15 GB)
        Extent              Start Block              Block count
             0                954204192                 20971520
             1               1131233280                 10485760

prod/d107: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 127611007 blocks (60 GB)
        Extent              Start Block              Block count
             0                975175744                 20971520
             1               1141719072                 31457311
             2                954204167                       24
             3                975175713                       30
             4                996147265                       30
             5               1038090337                       30
             6               1131233261                       18
             7               1141719041                       30
             8               1215119425                 75182014

prod/d105: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 10485760 blocks (5.0 GB)
        Extent              Start Block              Block count
             0                734003205                 10485760

prod/d104: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 62914560 blocks (30 GB)
        Extent              Start Block              Block count
             0                671088644                 62914560

prod/d103: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 146800640 blocks (70 GB)
        Extent              Start Block              Block count
             0                524288003                146800640

prod/d102: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 629145600 blocks (300 GB)
        Extent              Start Block              Block count
             0                104857602                419430400
             1                744488966                209715200

prod/d101: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 104857600 blocks (50 GB)
        Extent              Start Block              Block count
             0                        1                104857600

prod/d108: Soft Partition
    Device: prod/d100
    State: Okay
    Size: 119227520 blocks (56 GB)
        Extent              Start Block              Block count
             0                996147296                 41943040
             1               1173176384                 41943040
             2               1290301440                 35341440

Device Relocation Information:
Device                   Reloc  Device ID
c3t500009740838ED9Dd8    Yes    id1,ssd@w60000970000292603643533031343330
c3t500009740838CD5Cd16   Yes    id1,ssd@w60000970000292603635533030393632
c3t500009740838EDA5d1    Yes    id1,ssd@w60000970000292603643533033324538
c3t500009740838EDA5d2    Yes    id1,ssd@w60000970000292603643533033324543
c3t500009740838EDA5d3    Yes    id1,ssd@w60000970000292603643533033324630
c3t500009740838EDA5d4    Yes    id1,ssd@w60000970000292603643533033324634
c3t500009740838EDA5d5    Yes    id1,ssd@w60000970000292603643533033324643
c3t500009740838EDA5d6    Yes    id1,ssd@w60000970000292603643533033333034
c3t500009740838EDA5d7    Yes    id1,ssd@w60000970000292603643533033333043
c3t500009740838CD5Cd2    Yes    id1,ssd@w60000970000292603635533030464630
c3t500009740838CD5Cd3    Yes    id1,ssd@w60000970000292603635533030464638
c3t500009740838CD5Cd4    Yes    id1,ssd@w60000970000292603635533030464643
c3t500009740838CD5Cd5    Yes    id1,ssd@w60000970000292603635533031303030
c3t500009740838CD5Cd6    Yes    id1,ssd@w60000970000292603635533031303038
c3t500009740838CD5Cd7    Yes    id1,ssd@w60000970000292603635533031303130
c3t500009740838CD5Cd8    Yes    id1,ssd@w60000970000292603635533031303138
/users/a537069>
/users/a537069>
/users/a537069> df -kh
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10         15G   8.6G   6.1G    59%    /
/proc                    0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
fd                       0K     0K     0K     0%    /dev/fd
/dev/md/dsk/d30         15G   8.1G   6.5G    56%    /var
swap                    13G   176K    13G     1%    /var/run
swap                    13G   6.6M    13G     1%    /tmp
/dev/md/dsk/d50         15G    15M    15G     1%    /volumes/free3
/dev/md/dsk/d70         25G    25M    25G     1%    /volumes/free
/dev/md/dsk/d40         15G   5.8G   8.8G    40%    /volumes/v0
/dev/md/dsk/d60         42G    29G    13G    70%    /volumes/buhrn2
/dev/md/prod/dsk/d4     30G    14G    15G    49%    /volumes/app_prod4
/dev/md/prod/dsk/d5    4.9G   891M   4.0G    18%    /volumes/app_prod5
/dev/md/prod/dsk/d6     15G    12G   2.4G    84%    /volumes/app_prod6
/dev/md/prod/dsk/d7     59G    39G    19G    68%    /volumes/app_prod7
/dev/md/prod/dsk/d8     55G    42G    12G    78%    /volumes/app_prod8
/users/a537069>
/users/a537069>
/users/a537069>

/users/a537069> metaset -s prod

Set name = prod, Set number = 1

Host                Owner
  cor-9008app01      Yes
  cor-9008app02

Drive                             Dbase
/dev/dsk/c3t500009740838CD5Cd2    Yes
/dev/dsk/c3t500009740838CD5Cd4    Yes
/dev/dsk/c3t500009740838CD5Cd5    Yes
/dev/dsk/c3t500009740838CD5Cd6    Yes
/dev/dsk/c3t500009740838CD5Cd7    Yes
/dev/dsk/c3t500009740838CD5Cd3    Yes
/dev/dsk/c3t500009740838EDA5d2    Yes
/dev/dsk/c3t500009740838EDA5d3    Yes
/dev/dsk/c3t500009740838EDA5d4    Yes
/dev/dsk/c3t500009740838EDA5d5    Yes
/dev/dsk/c3t500009740838EDA5d6    Yes
/dev/dsk/c3t500009740838EDA5d7    Yes
/dev/dsk/c3t500009740838EDA5d1    Yes
/dev/dsk/c3t500009740838CD5Cd1    Yes
/dev/dsk/c3t500009740838CD5Cd16   Yes
/dev/dsk/c3t500009740838ED9Dd8    Yes
/users/a537069>


/users/a537069> metastat -s prod -p
prod/d8 -p prod/d0 -o 230686880 -b 117440512
prod/d0 -m prod/d1 prod/d2 1
prod/d1 1 1 c3t500009740838CD5Cd16s0
prod/d2 1 1 c3t500009740838ED9Dd8s0
prod/d7 -p prod/d0 -o 104857728 -b 125829120
prod/d6 -p prod/d0 -o 73400416 -b 31457280
prod/d5 -p prod/d0 -o 62914624 -b 10485760
prod/d4 -p prod/d0 -o 32 -b 62914560
prod/d109 -p prod/d100 -o 1038090368 -b 93142892
prod/d100 -m prod/d120 prod/d111 1
prod/d120 7 1 c3t500009740838EDA5d1s0 \
         1 c3t500009740838EDA5d2s0 \
         1 c3t500009740838EDA5d3s0 \
         1 c3t500009740838EDA5d4s0 \
         1 c3t500009740838EDA5d5s0 \
         1 c3t500009740838EDA5d6s0 \
         1 c3t500009740838EDA5d7s0
prod/d111 7 1 c3t500009740838CD5Cd2s0 \
         1 c3t500009740838CD5Cd3s0 \
         1 c3t500009740838CD5Cd4s0 \
         1 c3t500009740838CD5Cd5s0 \
         1 c3t500009740838CD5Cd6s0 \
         1 c3t500009740838CD5Cd7s0 \
         1 c3t500009740838CD5Cd8s0
prod/d106 -p prod/d100 -o 954204192 -b 20971520  -o 1131233280 -b 10485760
prod/d107 -p prod/d100 -o 975175744 -b 20971520  -o 1141719072 -b 31457311  -o 954204167 -b 24  -o 975175713 -b 30  -o 996147265 -b 30  -o 1038090337 -b 30  -o 1131233261 -b 18  -o 1141719041 -b 30  -o 1215119425 -b 75182014
prod/d105 -p prod/d100 -o 734003205 -b 10485760
prod/d104 -p prod/d100 -o 671088644 -b 62914560
prod/d103 -p prod/d100 -o 524288003 -b 146800640
prod/d102 -p prod/d100 -o 104857602 -b 419430400  -o 744488966 -b 209715200
prod/d101 -p prod/d100 -o 1 -b 104857600
prod/d108 -p prod/d100 -o 996147296 -b 41943040  -o 1173176384 -b 41943040  -o 1290301440 -b 35341440
/users/a537069>

metaclear -s prod -r d109

metaclear -s prod -r d108

metaclear -s prod -r d107

metaclear -s prod -r d106

metaclear -s prod -r d105

metaclear -s prod -r d104

metaclear -s prod -r d103

metaclear -s prod -r d102

metaclear -s prod -r d101

Note: clearing the last soft partition (d101) with -r also clears the d100 mirror and its submirrors d111 and d120 automatically (see the logs below). If d100 is still present afterwards, detach and clear it manually:

metadetach -s prod d100 d111

metadetach -s prod d100 d120

metaclear -s prod d111

metaclear -s prod d120

metaclear -s prod d100

metastat -s prod -p
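
If a set has many soft partitions, the same clears can be scripted. A minimal sketch, assuming the d101-d109 names used above; run it only after confirming nothing on them is mounted:

for d in d109 d108 d107 d106 d105 d104 d103 d102 d101
do
        metaclear -s prod -r $d
done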

metaset -s prod -d c3t500009740838EDA5d1 c3t500009740838EDA5d2 c3t500009740838EDA5d3 c3t500009740838EDA5d4 c3t500009740838EDA5d5 c3t500009740838EDA5d6 c3t500009740838EDA5d7

metaset -s prod -d c3t500009740838CD5Cd2 c3t500009740838CD5Cd3 c3t500009740838CD5Cd4 c3t500009740838CD5Cd5 c3t500009740838CD5Cd6 c3t500009740838CD5Cd7 c3t500009740838CD5Cd8

If the above commands don't work for deleting disks from the metaset, use the DID device format instead:

metaset -s nlnehvdcs1cl89b-qa-ds01 -d /dev/did/rdsk/d13
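
On a Sun Cluster node the DID number for a given c#t#d# device can be looked up first; a quick check, using one of the drives from this set as an example:

scdidadm -L | grep c3t500009740838CD5Cd8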

metaset -s prod

metadb -s prod
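
metadb -s prod should now report state database replicas only on the drives that are still in the set.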

Now, to release the storage: the LUN(s) must first be deliberately made "unusable" from the array side (by unmapping them, deleting them, masking them away from this initiator, etc.). Then clean them up on the host:


cfgadm -o show_FCP_dev -al | grep unusable

cfgadm -o unusable_FCP_dev -c unconfigure c3::500009740838eda5

cfgadm -o unusable_FCP_dev -c unconfigure c3::500009740838CD5C

Alternatively, on hosts running EMC PowerPath, remove each device from PowerPath and offline its paths:

powermt remove dev=c3t500009740838EDA5d1s2

luxadm  -e offline /dev/rdsk/c3t500009740838EDA5d1s2

luxadm  -e offline /dev/rdsk/c3t500009740838EDA5d2s2

luxadm -e offline  /dev/rdsk/c3t500009740838EDA5d3s2
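
When several LUNs have to be offlined this way, the powermt/luxadm pair can be looped; a sketch, assuming the same c3t500009740838EDA5 target and LUNs 1-7 as in this case:

for n in 1 2 3 4 5 6 7
do
        powermt remove dev=c3t500009740838EDA5d${n}s2
        luxadm -e offline /dev/rdsk/c3t500009740838EDA5d${n}s2
done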

devfsadm -Cv


LUN IDs:

Symmetrix ID: 000292603643

c3t500009740838EDA5d1 --- 32E8
c3t500009740838EDA5d2 --- 32EC
c3t500009740838EDA5d3 --- 32F0
c3t500009740838EDA5d4 --- 32F4
c3t500009740838EDA5d5 --- 32FC
c3t500009740838EDA5d6 --- 3304
c3t500009740838EDA5d7 --- 330C

Symmetrix ID: 000292603635

c3t500009740838CD5Cd2 --- 0FF0
c3t500009740838CD5Cd3 --- 0FF8
c3t500009740838CD5Cd4 --- 0FFC
c3t500009740838CD5Cd5 --- 1000
c3t500009740838CD5Cd6 --- 1008
c3t500009740838CD5Cd7 --- 1010
c3t500009740838CD5Cd8 --- 1018


Symmetrix ID: 000292603643

32E8
32EC
32F0
32F4
32FC
3304
330C

Symmetrix ID: 000292603635

0FF0
0FF8
0FFC
1000
1008
1010
1018
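
Before raising the reclaim request, these Symmetrix device IDs can be cross-checked from the host; one way to do it, assuming PowerPath is installed:

powermt display dev=all | egrep -i "Symmetrix ID|Logical device"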
====================================================================================================

Logs

====================================================================================================



/users/a537069> metaclear -s prod -r d109
prod/d109: Soft Partition is cleared
metaclear: cor-9008app01: prod/d100: metadevice in use

/users/a537069> metastat -s prod -p|grep -i d109
/users/a537069> metaclear -s prod -r d108
prod/d108: Soft Partition is cleared
metaclear: cor-9008app01: prod/d100: metadevice in use

/users/a537069> metaclear -s prod -r d107
prod/d107: Soft Partition is cleared
metaclear: cor-9008app01: prod/d100: metadevice in use

/users/a537069> metaclear -s prod -r d106
prod/d106: Soft Partition is cleared
metaclear: cor-9008app01: prod/d100: metadevice in use

/users/a537069> metaclear -s prod -r d105
prod/d105: Soft Partition is cleared
metaclear: cor-9008app01: prod/d100: metadevice in use

/users/a537069> metaclear -s prod -r d104
prod/d104: Soft Partition is cleared
metaclear: cor-9008app01: prod/d100: metadevice in use

/users/a537069> metaclear -s prod -r d103
prod/d103: Soft Partition is cleared
metaclear: cor-9008app01: prod/d100: metadevice in use

/users/a537069> metaclear -s prod -r d102
prod/d102: Soft Partition is cleared
metaclear: cor-9008app01: prod/d100: metadevice in use

/users/a537069> metaclear -s prod -r d101
prod/d101: Soft Partition is cleared
prod/d100: Mirror is cleared
prod/d120: Concat/Stripe is cleared
prod/d111: Concat/Stripe is cleared
/users/a537069> metastat -s prod -p
prod/d8 -p prod/d0 -o 230686880 -b 117440512
prod/d0 -m prod/d1 prod/d2 1
prod/d1 1 1 c3t500009740838CD5Cd16s0
prod/d2 1 1 c3t500009740838ED9Dd8s0
prod/d7 -p prod/d0 -o 104857728 -b 125829120
prod/d6 -p prod/d0 -o 73400416 -b 31457280
prod/d5 -p prod/d0 -o 62914624 -b 10485760
prod/d4 -p prod/d0 -o 32 -b 62914560
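
Only d0 and the soft partitions carved out of it are left; everything that was built on the d100 mirror is gone.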


/users/a537069> metaset -s prod|wc -l
      40
/users/a537069> metaset -s prod -d c3t500009740838EDA5d1
/users/a537069> metaset -s prod|wc -l
      38
/users/a537069>

/users/a537069> metaset -s prod -d c3t500009740838EDA5d2
/users/a537069> metaset -s prod|wc -l
      36
/users/a537069> metaset -s prod -d c3t500009740838EDA5d3
/users/a537069> metaset -s prod|wc -l
      34
/users/a537069> metaset -s prod -d c3t500009740838EDA5d4
/users/a537069> metaset -s prod|wc -l
      32
/users/a537069> metaset -s prod -d c3t500009740838EDA5d5
/users/a537069> metaset -s prod -d c3t500009740838EDA5d6
/users/a537069> metaset -s prod -d c3t500009740838EDA5d7
/users/a537069> metaset -s prod

Set name = prod, Set number = 1

Host                Owner
  cor-9008app01      Yes
  cor-9008app02

Drive                             Dbase
/dev/dsk/c3t500009740838CD5Cd2    Yes
/dev/dsk/c3t500009740838CD5Cd4    Yes
/dev/dsk/c3t500009740838CD5Cd5    Yes
/dev/dsk/c3t500009740838CD5Cd6    Yes
/dev/dsk/c3t500009740838CD5Cd7    Yes
/dev/dsk/c3t500009740838CD5Cd3    Yes
/dev/dsk/c3t500009740838CD5Cd1    Yes
/dev/dsk/c3t500009740838CD5Cd16   Yes
/dev/dsk/c3t500009740838ED9Dd8    Yes
/users/a537069>


/users/a537069> metaset -s prod -d c3t500009740838CD5Cd3
/users/a537069> metaset -s prod -d c3t500009740838CD5Cd4

/users/a537069> metaset -s prod -d c3t500009740838CD5Cd5
/users/a537069> metaset -s prod -d c3t500009740838CD5Cd6
/users/a537069> metaset -s prod -d c3t500009740838CD5Cd7
/users/a537069> metaset -s prod -d c3t500009740838CD5Cd8
metaset: cor-9008app01: prod: drive c3t500009740838CD5Cd8 is not in set

/users/a537069> metaset -s prod

Set name = prod, Set number = 1

Host                Owner
  cor-9008app01      Yes
  cor-9008app02

Drive                             Dbase
/dev/dsk/c3t500009740838CD5Cd1    Yes
/dev/dsk/c3t500009740838CD5Cd16   Yes
/dev/dsk/c3t500009740838ED9Dd8    Yes
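
Only c3t500009740838CD5Cd16 and c3t500009740838ED9Dd8 (which back the d0 submirrors) plus c3t500009740838CD5Cd1 remain in the set.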


/users/a537069> metastat -s prod -p
prod/d8 -p prod/d0 -o 230686880 -b 117440512
prod/d0 -m prod/d1 prod/d2 1
prod/d1 1 1 c3t500009740838CD5Cd16s0
prod/d2 1 1 c3t500009740838ED9Dd8s0
prod/d7 -p prod/d0 -o 104857728 -b 125829120
prod/d6 -p prod/d0 -o 73400416 -b 31457280
prod/d5 -p prod/d0 -o 62914624 -b 10485760
prod/d4 -p prod/d0 -o 32 -b 62914560


/users/a537069> luxadm display /dev/rdsk/c3t500009740838EDA5d1s2
DEVICE PROPERTIES for disk: /dev/rdsk/c3t500009740838EDA5d1s2
  Vendor:               EMC
  Product ID:           SYMMETRIX
  Revision:             5874
  Serial Num:           603643!z;000
  Unformatted capacity: 20482.500 MBytes
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0xffff
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c3t500009740838EDA5d1s2
  /devices/pci@1d,700000/SUNW,qlc@1/fp@0,0/ssd@w500009740838eda5,1:c,raw
    LUN path port WWN:          500009740838eda5
    Host controller port WWN:   210000e08b1b8b20
    Path status:                O.K.
  /dev/rdsk/c3t500009740838ED9Dd1s2
  /devices/pci@1d,700000/SUNW,qlc@1/fp@0,0/ssd@w500009740838ed9d,1:c,raw
    LUN path port WWN:          500009740838ed9d
    Host controller port WWN:   210000e08b1b8b20
    Path status:                O.K.
  /dev/rdsk/c4t500009740838ED99d1s2
  /devices/pci@1d,700000/SUNW,qlc@2/fp@0,0/ssd@w500009740838ed99,1:c,raw
    LUN path port WWN:          500009740838ed99
    Host controller port WWN:   210000e08b1b0a2f
    Path status:                O.K.
  /dev/rdsk/c4t500009740838EDA1d1s2
  /devices/pci@1d,700000/SUNW,qlc@2/fp@0,0/ssd@w500009740838eda1,1:c,raw
    LUN path port WWN:          500009740838eda1
    Host controller port WWN:   210000e08b1b0a2f
    Path status:                O.K.
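
luxadm display shows this LUN is reachable over four paths (two per HBA). All four have to disappear before the LUN is fully removed from the host.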



/users/a537069> luxadm -e dump_map /devices/pci@1d,700000/SUNW,qlc@1/fp@0,0:devctl
Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
0    10a00   0         500009740838ed9d 500009740838ec00 0x0  (Disk device)
1    14a00   0         500009740838eda5 500009740838ec00 0x0  (Disk device)
2    22f501  0         500009740838cd5c 500009740838cc00 0x0  (Disk device)
3    22f601  0         500009740838cd64 500009740838cc00 0x0  (Disk device)
4    1aa00   0         210000e08b1b8b20 200000e08b1b8b20 0x1f (Unknown Type,Host Bus Adapter)
/users/a537069> luxadm -e dump_map /devices/pci@1d,700000/SUNW,qlc@2/fp@0,0:devctl
Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
0    10a00   0         500009740838eda1 500009740838ec00 0x0  (Disk device)
1    14a00   0         500009740838ed99 500009740838ec00 0x0  (Disk device)
2    22f501  0         500009740838cd60 500009740838cc00 0x0  (Disk device)
3    22f601  0         500009740838cd58 500009740838cc00 0x0  (Disk device)
4    1aa00   0         210000e08b1b0a2f 200000e08b1b0a2f 0x1f (Unknown Type,Host Bus Adapter)
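
luxadm -e dump_map lists every port logged in to the fabric through that HBA: the 0x0 entries are the array front-end ports, and the 0x1f entry is the HBA's own Port WWN / Node WWN pair.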
/users/a537069>



/users/a537069> cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t1d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown
c2                             scsi-bus     connected    unconfigured unknown
c3                             fc-fabric    connected    configured   unknown
c3::500009740838cd5c           disk         connected    configured   unknown
c3::500009740838cd64           disk         connected    configured   unknown
c3::500009740838ed9d           disk         connected    configured   unknown
c3::500009740838eda5           disk         connected    configured   unknown
c4                             fc-fabric    connected    configured   unknown
c4::500009740838cd58           disk         connected    configured   unknown
c4::500009740838cd60           disk         connected    configured   unknown
c4::500009740838ed99           disk         connected    configured   unknown
c4::500009740838eda1           disk         connected    configured   unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok
usb1/1                         unknown      empty        unconfigured ok
usb1/2                         unknown      empty        unconfigured ok
/users/a537069>

/users/a537069> cfgadm -o show_FCP_dev -al | grep unusable
/users/a537069> cfgadm -o show_FCP_dev -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c3                             fc-fabric    connected    configured   unknown
c3::500009740838cd5c,1         disk         connected    configured   unknown
c3::500009740838cd5c,2         disk         connected    configured   unknown
c3::500009740838cd5c,3         disk         connected    configured   unknown
c3::500009740838cd5c,4         disk         connected    configured   unknown
c3::500009740838cd5c,5         disk         connected    configured   unknown
c3::500009740838cd5c,6         disk         connected    configured   unknown
c3::500009740838cd5c,7         disk         connected    configured   unknown
c3::500009740838cd5c,8         disk         connected    configured   unknown
c3::500009740838cd5c,16        disk         connected    configured   unknown
c3::500009740838cd64,1         disk         connected    configured   unknown
c3::500009740838cd64,2         disk         connected    configured   unknown
c3::500009740838cd64,3         disk         connected    configured   unknown
c3::500009740838cd64,4         disk         connected    configured   unknown
c3::500009740838cd64,5         disk         connected    configured   unknown
c3::500009740838cd64,6         disk         connected    configured   unknown
c3::500009740838cd64,7         disk         connected    configured   unknown
c3::500009740838cd64,8         disk         connected    configured   unknown
c3::500009740838cd64,16        disk         connected    configured   unknown
c3::500009740838ed9d,1         disk         connected    configured   unknown
c3::500009740838ed9d,2         disk         connected    configured   unknown
c3::500009740838ed9d,3         disk         connected    configured   unknown
c3::500009740838ed9d,4         disk         connected    configured   unknown
c3::500009740838ed9d,5         disk         connected    configured   unknown
c3::500009740838ed9d,6         disk         connected    configured   unknown
c3::500009740838ed9d,7         disk         connected    configured   unknown
c3::500009740838ed9d,8         disk         connected    configured   unknown
c3::500009740838eda5,1         disk         connected    configured   unknown
c3::500009740838eda5,2         disk         connected    configured   unknown
c3::500009740838eda5,3         disk         connected    configured   unknown
c3::500009740838eda5,4         disk         connected    configured   unknown
c3::500009740838eda5,5         disk         connected    configured   unknown
c3::500009740838eda5,6         disk         connected    configured   unknown
c3::500009740838eda5,7         disk         connected    configured   unknown
c3::500009740838eda5,8         disk         connected    configured   unknown
c4                             fc-fabric    connected    configured   unknown
c4::500009740838cd58,1         disk         connected    configured   unknown
c4::500009740838cd58,2         disk         connected    configured   unknown
c4::500009740838cd58,3         disk         connected    configured   unknown
c4::500009740838cd58,4         disk         connected    configured   unknown
c4::500009740838cd58,5         disk         connected    configured   unknown
c4::500009740838cd58,6         disk         connected    configured   unknown
c4::500009740838cd58,7         disk         connected    configured   unknown
c4::500009740838cd58,8         disk         connected    configured   unknown
c4::500009740838cd58,16        disk         connected    configured   unknown
c4::500009740838cd60,1         disk         connected    configured   unknown
c4::500009740838cd60,2         disk         connected    configured   unknown
c4::500009740838cd60,3         disk         connected    configured   unknown
c4::500009740838cd60,4         disk         connected    configured   unknown
c4::500009740838cd60,5         disk         connected    configured   unknown
c4::500009740838cd60,6         disk         connected    configured   unknown
c4::500009740838cd60,7         disk         connected    configured   unknown
c4::500009740838cd60,8         disk         connected    configured   unknown
c4::500009740838cd60,16        disk         connected    configured   unknown
c4::500009740838ed99,1         disk         connected    configured   unknown
c4::500009740838ed99,2         disk         connected    configured   unknown
c4::500009740838ed99,3         disk         connected    configured   unknown
c4::500009740838ed99,4         disk         connected    configured   unknown
c4::500009740838ed99,5         disk         connected    configured   unknown
c4::500009740838ed99,6         disk         connected    configured   unknown
c4::500009740838ed99,7         disk         connected    configured   unknown
c4::500009740838ed99,8         disk         connected    configured   unknown
c4::500009740838eda1,1         disk         connected    configured   unknown
c4::500009740838eda1,2         disk         connected    configured   unknown
c4::500009740838eda1,3         disk         connected    configured   unknown
c4::500009740838eda1,4         disk         connected    configured   unknown
c4::500009740838eda1,5         disk         connected    configured   unknown
c4::500009740838eda1,6         disk         connected    configured   unknown
c4::500009740838eda1,7         disk         connected    configured   unknown
c4::500009740838eda1,8         disk         connected    configured   unknown
/users/a537069>
/users/a537069>


/users/a537069> echo |format |grep -i c3t500009740838EDA5d1
      30. c3t500009740838EDA5d1 <EMC-SYMMETRIX-5874 cyl 21846 alt 2 hd 15 sec 128>
/users/a537069> echo |format |grep -i c3t500009740838EDA5d2
      31. c3t500009740838EDA5d2 <EMC-SYMMETRIX-5874 cyl 43694 alt 2 hd 15 sec 128>
/users/a537069> echo |format |grep -i c3t500009740838EDA5d3
      32. c3t500009740838EDA5d3 <EMC-SYMMETRIX-5874 cyl 54618 alt 2 hd 15 sec 128>
/users/a537069> echo |format |grep -i c3t500009740838EDA5d4
      33. c3t500009740838EDA5d4 <EMC-SYMMETRIX-5874 cyl 38232 alt 2 hd 60 sec 128>
/users/a537069> echo |format |grep -i c3t500009740838EDA5d5
      34. c3t500009740838EDA5d5 <EMC-SYMMETRIX-5874 cyl 38232 alt 2 hd 60 sec 128>
/users/a537069> echo |format |grep -i c3t500009740838EDA5d6
      35. c3t500009740838EDA5d6 <EMC-SYMMETRIX-5874 cyl 38232 alt 2 hd 60 sec 128>
/users/a537069> echo |format|grep -i c3t500009740838EDA5d7
      36. c3t500009740838EDA5d7 <EMC-SYMMETRIX-5874 cyl 38232 alt 2 hd 60 sec 128>
/users/a537069> echo |format|grep -i c3t500009740838CD5Cd2
       5. c3t500009740838CD5Cd2 <EMC-SYMMETRIX-5874 cyl 21846 alt 2 hd 15 sec 128>
/users/a537069> echo |format|grep -i  c3t500009740838CD5Cd3
       6. c3t500009740838CD5Cd3 <EMC-SYMMETRIX-5874 cyl 43694 alt 2 hd 15 sec 128>
/users/a537069> echo |format|grep -i  c3t500009740838CD5Cd4
       7. c3t500009740838CD5Cd4 <EMC-SYMMETRIX-5874 cyl 54618 alt 2 hd 15 sec 128>
/users/a537069> echo |format|grep -i c3t500009740838CD5Cd5
       8. c3t500009740838CD5Cd5 <EMC-SYMMETRIX-5874 cyl 38232 alt 2 hd 60 sec 128>
/users/a537069>  echo |format|grep -i c3t500009740838CD5Cd6
       9. c3t500009740838CD5Cd6 <EMC-SYMMETRIX-5874 cyl 38232 alt 2 hd 60 sec 128>
/users/a537069> echo |format|grep -i c3t500009740838CD5Cd7
      10. c3t500009740838CD5Cd7 <EMC-SYMMETRIX-5874 cyl 38232 alt 2 hd 60 sec 128>
/users/a537069> echo |format|grep -i c3t500009740838CD5Cd8
      11. c3t500009740838CD5Cd8 <EMC-SYMMETRIX-5874 cyl 38232 alt 2 hd 60 sec 128>
/users/a537069>

The main steps to remove the LUNs from the server:

/users/a537069> powermt remove dev=c3t500009740838CD5Cd3s0

/users/a537069> cfgadm -o show_FCP_dev -al | grep -i "c3::500009740838cd5c,3"
c3::500009740838cd5c,3         disk         connected    configured   unknown
/users/a537069>


/users/a537069> luxadm -e offline /dev/rdsk/c3t500009740838CD5Cd3s2
/users/a537069> cfgadm -o show_FCP_dev -al | grep -i "c3::500009740838cd5c,3"
c3::500009740838cd5c,3         disk         connected    configured   unusable
/users/a537069>


/users/a537069> powermt remove dev=c3t500009740838CD5Cd4s0
/users/a537069> luxadm -e offline /dev/rdsk/c3t500009740838CD5Cd4s2
/users/a537069> cfgadm -o show_FCP_dev -al | grep -i "c3::500009740838cd5c,4"
c3::500009740838cd5c,4         disk         connected    configured   unusable
/users/a537069>


/users/a537069> powermt remove dev=c3t500009740838CD5Cd5s0
/users/a537069> luxadm -e offline /dev/rdsk/c3t500009740838CD5Cd5s2
/users/a537069> cfgadm -o show_FCP_dev -al | grep -i "c3::500009740838cd5c,5"
c3::500009740838cd5c,5         disk         connected    configured   unusable
/users/a537069>


/users/a537069> powermt remove dev=c3t500009740838CD5Cd6s0
/users/a537069> luxadm -e offline /dev/rdsk/c3t500009740838CD5Cd6s2
/users/a537069> cfgadm -o show_FCP_dev -al | grep -i "c3::500009740838cd5c,6"
c3::500009740838cd5c,6         disk         connected    configured   unusable

/users/a537069> powermt remove dev=c3t500009740838CD5Cd7s0
/users/a537069> luxadm -e offline /dev/rdsk/c3t500009740838CD5Cd7s2
/users/a537069> cfgadm -o show_FCP_dev -al | grep -i "c3::500009740838cd5c,7"
c3::500009740838cd5c,7         disk         connected    configured   unusable


/users/a537069> powermt remove dev=c3t500009740838CD5Cd8s0
/users/a537069>  luxadm -e offline /dev/rdsk/c3t500009740838CD5Cd8s2
/users/a537069> cfgadm -o show_FCP_dev -al | grep -i "c3::500009740838cd5c,8"
c3::500009740838cd5c,8         disk         connected    configured   unusable
/users/a537069>


/users/a537069> cfgadm -o show_FCP_dev -al | grep unusable
c3::500009740838cd5c,2         disk         connected    configured   unusable
c3::500009740838cd5c,3         disk         connected    configured   unusable
c3::500009740838cd5c,4         disk         connected    configured   unusable
c3::500009740838cd5c,5         disk         connected    configured   unusable
c3::500009740838cd5c,6         disk         connected    configured   unusable
c3::500009740838cd5c,7         disk         connected    configured   unusable
c3::500009740838cd5c,8         disk         connected    configured   unusable
c3::500009740838eda5,1         disk         connected    configured   unusable
c3::500009740838eda5,2         disk         connected    configured   unusable
c3::500009740838eda5,3         disk         connected    configured   unusable
c3::500009740838eda5,4         disk         connected    configured   unusable
c3::500009740838eda5,5         disk         connected    configured   unusable
c3::500009740838eda5,6         disk         connected    configured   unusable
c3::500009740838eda5,7         disk         connected    configured   unusable
/users/a537069>


/users/a537069> cfgadm -o unusable_FCP_dev -c unconfigure c3::500009740838eda5


/users/a537069> cfgadm -o show_FCP_dev -al | grep unusable
c3::500009740838cd5c,2         disk         connected    configured   unusable
c3::500009740838cd5c,3         disk         connected    configured   unusable
c3::500009740838cd5c,4         disk         connected    configured   unusable
c3::500009740838cd5c,5         disk         connected    configured   unusable
c3::500009740838cd5c,6         disk         connected    configured   unusable
c3::500009740838cd5c,7         disk         connected    configured   unusable
c3::500009740838cd5c,8         disk         connected    configured   unusable


/users/a537069> cfgadm -o show_FCP_dev -al |grep -i "c3::500009740838eda5"
c3::500009740838eda5,1         disk         connected    unconfigured unknown
c3::500009740838eda5,2         disk         connected    unconfigured unknown
c3::500009740838eda5,3         disk         connected    unconfigured unknown
c3::500009740838eda5,4         disk         connected    unconfigured unknown
c3::500009740838eda5,5         disk         connected    unconfigured unknown
c3::500009740838eda5,6         disk         connected    unconfigured unknown
c3::500009740838eda5,7         disk         connected    unconfigured unknown
c3::500009740838eda5,8         disk         connected    configured   unknown
/users/a537069>
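
c3::500009740838eda5,8 stays configured because it was never part of the removal; only LUNs 1-7 on this target were offlined.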


/users/a537069>  cfgadm -o unusable_FCP_dev -c unconfigure c3::500009740838cd5c

/users/a537069> cfgadm -o show_FCP_dev -al | grep unusable

/users/a537069> cfgadm -o show_FCP_dev -al |grep -i "c3::500009740838cd5c"
c3::500009740838cd5c,1         disk         connected    configured   unknown
c3::500009740838cd5c,2         disk         connected    unconfigured unknown
c3::500009740838cd5c,3         disk         connected    unconfigured unknown
c3::500009740838cd5c,4         disk         connected    unconfigured unknown
c3::500009740838cd5c,5         disk         connected    unconfigured unknown
c3::500009740838cd5c,6         disk         connected    unconfigured unknown
c3::500009740838cd5c,7         disk         connected    unconfigured unknown
c3::500009740838cd5c,8         disk         connected    unconfigured unknown
c3::500009740838cd5c,16        disk         connected    configured   unknown
/users/a537069>
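
c3::500009740838cd5c,1 and c3::500009740838cd5c,16 stay configured by design: those drives are still in the prod set.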


devfsadm -Cv


/users/a537069> echo |format |grep -i "c3t500009740838EDA5d1"
/users/a537069> echo |format |grep -i "c3t500009740838EDA5d2"
/users/a537069> echo |format |grep -i "c3t500009740838EDA5d3"
/users/a537069> echo |format |grep -i "c3t500009740838EDA5d4"
/users/a537069> echo |format |grep -i "c3t500009740838EDA5d5"
/users/a537069> echo |format |grep -i "c3t500009740838EDA5d6"
/users/a537069> echo |format |grep -i "c3t500009740838EDA5d7"
/users/a537069> echo |format |grep -i "c3t500009740838CD5Cd2"
/users/a537069> echo |format |grep -i "c3t500009740838CD5Cd3"
/users/a537069> echo |format |grep -i "c3t500009740838CD5Cd4"
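
The empty output for each grep confirms that devfsadm -Cv cleaned up the stale device links for the removed LUNs.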



======================================================================================================================

References:

https://community.oracle.com/thread/1917729
http://thegeekdiary.com/how-to-identify-the-hba-cardsports-and-wwn-in-solaris/


Node WWN - the WWN of the HBA itself
Port WWN - the WWN of a specific port on the HBA