ViPR Controller: Exporting VMAX3/AFA LUN fails with Error 12000

This ‘Error 12000’ may be encountered while exporting a VMAX3/AFA LUN from ViPR Controller as a shared datastore to a specific vSphere ESXi cluster (ViPR shared export mask). The export fails because ViPR attempts to add the new shared LUN either to the independent exclusive ESXi Masking Views or to a manually created shared cluster Masking View.

This issue arises, for example, where the ESXi hosts already have independent Masking Views created without the NO_VIPR suffix in the Masking View name and/or where an ESXi Cluster Masking View (a Tenant Pod in EHC terms) has been created outside of ViPR control.

Resolution:

In the case of VMAX, ensure only one shared cluster Masking View (MV) exists for the tenant cluster (utilizing cascaded initiator groups) and that it is under ViPR management control. If the cluster MV was created manually (for example, in the VxBlock factory), create a small volume for this manually created MV directly from Unisphere/SYMCLI and then perform a ViPR ingestion of the newly created volume; this brings the MV under ViPR management.
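As a rough SYMCLI sketch of the ‘small volume’ approach (the 1GB size, the Cluster1_SG storage group name and the XXXX device ID below are placeholders only, not values from a real environment):

Create a small thin device:
symconfigure -sid xxx -cmd "create dev count=1, size=1GB, emulation=FBA, config=TDEV;" COMMIT
Add the new device to the manually created cluster Storage Group:
symaccess -sid xxx -name Cluster1_SG -type storage add devs XXXX
Then perform the ViPR ingestion of this volume as described above, which brings the cluster Masking View under ViPR management.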

In the case of a VxBlock (including Cisco UCS blades), all hosts in the cluster must have exclusive Masking Views for their respective boot volumes, and these exclusive Masking Views MUST have a NO_VIPR suffix.
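If existing exclusive boot Masking Views were created without the suffix, they can simply be renamed rather than rebuilt; a minimal sketch, assuming a view named ESX01_BOOT_MV (the view name here is hypothetical):

symaccess -sid xxx list view
symaccess -sid xxx rename view -name ESX01_BOOT_MV -new_name ESX01_BOOT_MV_NO_VIPR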

You may ask why each host has its own dedicated Masking View. Think Vblock/VxBlock with UCS, where each UCS ESXi blade server boots from a SAN-attached boot volume presented from the VMAX array (Vblock/VxBlock 700 series = VMAX). Further detail on how these Masking Views are configured on a Vblock/VxBlock can be found here:

vmax-masking-views-for-esxi-boot-and-shared-cluster-volumes

Key point: dedicated exclusive Masking Views with a NO_VIPR suffix are required for VMware ESXi boot volumes, and the cluster Masking Views for shared VMFS datastores must be under ViPR control. Please reference the following post for guidance on boot volume exclusive masking views and how to ingest these in ViPR:

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

In this scenario it is best to ingest the boot volumes as per the guidance above and then perform the export of the shared volume. ViPR will skip over the exclusive masking views (_NO_VIPR appended to their mask names) and will either create a new ViPR-controlled shared cluster Masking View or utilize an existing ViPR export mask.

Note: if you have circumvented this error by manually creating the shared cluster Masking View (through Unisphere/SYMCLI) in advance of the first cluster-wide ViPR export, please ingest this Masking View to bring it under ViPR control as per the guidance above, otherwise you will experience issues later (for example, when adding new ESXi hosts to the cluster).

VMAX VG2/8 – Masking View & Cisco Zoning Script

This post covers the Masking and Zoning scripts for a VG when using Cisco MDS fabric switches. It does not cover the creation of, or the rules around, the Control volumes; please reference the latest EMC publications for guidelines on the quantity and size of the control volumes. The following example configuration applies to ‘VNX File OE 7.1’.

Note: Please reference EMC documentation for precise instructions, as this is an example-only configuration for deploying a ‘VNX VG’ with a VMAX.

The following is the list of Celerra control volumes and sizes required for the NAS installation:
• 2 x 12394 cylinders (11.62 GB)
• 3 x 2216 cylinders (2.03 GB)
• 1 x 69912 cylinders (64 GB)
• 1 x 2 cylinder volume for the gatekeeper device

VG Control Volumes and their respective HLU IDs:
• The two ‘11.62 GB’ control LUNs map to HLU 0 and 1.
• The three ‘2.03 GB’ control LUNs map to HLU 2, 3, and 4.
• The ‘64 GB’ control LUN maps to HLU 5.
• The 1 x ‘2 cyl’ gatekeeper LUN maps to HLU 0F.

List the Control Volumes in order to gather their hex device IDs:
symdev -sid XXX list -emulation celerra
Add -v for a more detailed report:
symdev list -emulation celerra -v

In this example configuration we are using the F:1 ports on Engines 4&5:
#### List the Celerra LUN/ACLX MAPPING TO F1 FA ports: ####
symcfg -sid xxx -dir 7f -p 1 list -addr -avail
symcfg -sid xxx -dir 8f -p 1 list -addr -avail
symcfg -sid xxx -dir 9f -p 1 list -addr -avail
symcfg -sid xxx -dir 10f -p 1 list -addr -avail

1. MASKING VIEW CONFIG

Create the initiator group:
symaccess -sid XXX -name VG_IG -type initiator create -consistent_lun
If you have already identified the XBlade WWPNs from the fabric switches you may add them now; otherwise wait until they are displayed by the Control Station during the NAS install:
symaccess -sid XXX -name VG_IG -type initiator -wwn 50060160…… add
symaccess -sid XXX -name VG_IG -type initiator -wwn 50060160…… add
symaccess -sid XXX -name VG_IG -type initiator -wwn 50060160…… add
symaccess -sid XXX -name VG_IG -type initiator -wwn 50060160…… add

Create the port group using the VMAX FA Ports 7f:1,8f:1,9f:1,10f:1:
symaccess -sid XXX -name VG_PG -type port create
symaccess -sid XXX -name VG_PG -type port -dirport 7f:1,8f:1,9f:1,10f:1 add

Note: Ensure the ACLX volume is mapped to these FA ports 7f:1,8f:1,9f:1,10f:1 as 0E.
symdev -sid XXX list -aclx -v provides detailed information for the ACLX volume.
See here for further ACLX details: EMC VMAX – Access Control Logix (ACLX) Gatekeeper Mapping
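If the ACLX device is not yet mapped to these ports, the mapping can be done with symconfigure. A sketch only, assuming the ACLX device ID is 0ABC (confirm your own ID with the symdev -aclx listing above, and verify the map syntax against your Solutions Enabler documentation); repeat for each of the four F:1 ports:

symconfigure -sid XXX -cmd "map dev 0ABC to dir 7f:1 lun=0E;" PREVIEW
symconfigure -sid XXX -cmd "map dev 0ABC to dir 7f:1 lun=0E;" COMMIT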

Create the Storage Group:
Add the Control Devices as listed above (do not add the gatekeeper volume to the SG at this stage).
symaccess -sid XXX -name VG_SG -type storage create
symaccess -sid XXX -name VG_SG -type storage add devs 0055:005A

Create Masking View:
symaccess -sid XXX create view -name VG_MV -sg VG_SG -pg VG_PG -ig VG_IG -celerra
symaccess -sid XXX show view VG_MV

Now add 1 x 2 cyl Gatekeeper with a HLU value of 0F:
symaccess -sid XXX -name VG_SG -type storage add devs 005B -lun 0f -celerra

Verify the configuration:
symaccess -sid XXX show view VG_MV
symaccess -sid XXX list logins

2. Cisco MDS Zoning

It is good practice to isolate the file traffic on its own dedicated VSAN. In this example VSAN 20 (Fabric A) and VSAN 21 (Fabric B) are used specifically for the NAS traffic between the VG and the VMAX. Traditional single-initiator/single-target zones are applied using the standard Cisco sequence: create fcalias | create zone | add members to zone | create zoneset | add zones to zoneset | activate zoneset | save config.

This example uses pWWN for the FCALIAS (you can also use FCID or fabric port WWN (fWWN)).

Fabric A Zoning

## Collect Interface details: ##
show interface description | grep VMAX40K
fc2/15 VMAX40K_7f1
fc3/19 VMAX40K_9f1
show interface description | grep XBlade
fc1/17 XBlade 2-00/00
fc4/29 XBlade 3-00/00

## VMAX WWNs: ##
show flogi database interface fc 2/15
7f1: 50:00:09:75:00:xx:xx:59
show flogi database interface fc 3/19
9f1: 50:00:09:75:00:xx:xx:61

## XBLADE WWNs: ##
show flogi database interface fc 1/17
XBlade 2: 50:06:01:60:xx:xx:xx:xx
show flogi database interface fc 4/29
XBlade 3: 50:06:01:68:xx:xx:xx:xx

## Configure: ##
conf t
interface fc2/15, fc3/19, fc1/17, fc4/29
no shut

vsan database
vsan 20 name NAS_WORKLOAD_VSAN_A
vsan 20 interface fc2/15, fc3/19, fc1/17, fc4/29


fcdomain domain 1 static vsan 20
fcdomain priority 2 vsan 20
fcdomain restart vsan 20

fcalias name XBlade2-00-00 vsan 20
member pwwn 50:06:01:60:xx:xx:xx:xx

fcalias name XBlade3-00-00 vsan 20
member pwwn 50:06:01:68:xx:xx:xx:xx

fcalias name VMAX40K_7f1 vsan 20
member pwwn 50:00:09:75:00:xx:xx:59

fcalias name VMAX40K_9f1 vsan 20
member pwwn 50:00:09:75:00:xx:xx:61

zone name XBlade2-00-00_to_VMAX-7f-1 vsan 20
member fcalias VMAX40K_7f1
member fcalias XBlade2-00-00

zone name XBlade3-00-00_to_VMAX-9f-1 vsan 20
member fcalias XBlade3-00-00
member fcalias VMAX40K_9f1

zoneset name zs_vsan20 vsan 20
member XBlade2-00-00_to_VMAX-7f-1
member XBlade3-00-00_to_VMAX-9f-1

zoneset activate name zs_vsan20 vsan 20
zone commit vsan 20

copy run start
show zoneset active vsan 20

Fabric B Zoning

show interface description | grep VMAX40K
fc2/15 VMAX40K_10f1
fc3/19 VMAX40K_8f1
show interface description | grep XBlade
fc1/17 XBlade 2-00/00
fc4/29 XBlade 3-00/00

## VMAX WWNs: ##
show flogi database interface fc 2/15
10f1: 50:00:09:75:00:xx:xx:65
show flogi database interface fc 3/19
8f1: 50:00:09:75:00:xx:xx:5d

## XBLADE WWNs: ##
show flogi database interface fc 1/17
XBlade 2: 50:06:01:61:xx:xx:xx:xx
show flogi database interface fc 4/29
XBlade 3: 50:06:01:69:xx:xx:xx:xx

## Configure: ##
conf t
interface fc2/15, fc3/19, fc1/17, fc4/29
no shut

conf t
vsan database
vsan 21 name NAS_WORKLOAD_VSAN_B
vsan 21 interface fc2/15, fc3/19, fc1/17, fc4/29

fcdomain domain 2 static vsan 21
fcdomain priority 2 vsan 21
fcdomain restart vsan 21

fcalias name XBlade2-00-01 vsan 21
member pwwn 50:06:01:61:xx:xx:xx:xx

fcalias name XBlade3-00-01 vsan 21
member pwwn 50:06:01:69:xx:xx:xx:xx

fcalias name VMAX40K_10f1 vsan 21
member pwwn 50:00:09:75:00:xx:xx:65

fcalias name VMAX40K_8f1 vsan 21
member pwwn 50:00:09:75:00:xx:xx:5d

zone name XBlade2-00-01_to_VMAX-10f-1 vsan 21
member fcalias XBlade2-00-01
member fcalias VMAX40K_10f1

zone name XBlade3-00-01_to_VMAX-8f-1 vsan 21
member fcalias XBlade3-00-01
member fcalias VMAX40K_8f1

zoneset name zs_vsan21 vsan 21
member XBlade2-00-01_to_VMAX-10f-1
member XBlade3-00-01_to_VMAX-8f-1

zoneset activate name zs_vsan21 vsan 21
zone commit vsan 21

copy run start
show zoneset active vsan 21

NEXT: INSTALL NAS ON CONTROL STATION 0
====================================SUMMARY===================================
Congratulations!! Install for VNX software to release 7.1.76-4 succeeded.

Status: Success
Actual Time Spent: 40 minutes
Total Number of attempts: 1
Log File: /nas/log/install.7.1.76-4.Dec-02-11:54.log
=====================================END=======================================

3. Perform Checks

Verify NAS Services are running:
Login to the Control Station as ‘nasadmin’ and issue the command /nas/sbin/getreason from the CS console. The reason code output should be as follows (see the detailed list of Reason Codes below):
10 - slot_0 primary control station
11 - slot_1 secondary control station
5 - slot_2 contacted
5 - slot_3 contacted

Check the status of the Data Movers and view which slot is active:
nas_server -info -all

Confirm the VMAX is connected to the VG:
nas_storage -check -all
nas_storage -list

List detailed information of the config:
/nas/bin/nas_storage -info -all

Code Levels:
List the datamovers: nas_server -list
Check the DART code installed on the Data Movers: server_version ALL
Check the NAS code installed on the Control Station: nas_version

Network Configuration:
Control Station: /sbin/ifconfig (eth3 is the mgmt interface)
Data Movers: server_ifconfig server_2 -all

Date & Time:
Control Station: date
Data Movers: server_date ALL

List the disk table to ensure all of the Control Volumes have been presented to both Data Movers:
nas_disk -list

Check the File Systems:
df -h

Confirm the EMC NAS version installed and the model name:
/nasmcd/bin/nas_version
/nas/sbin/model

Check IP & DNS info on the CS:
nas_cs -info

Log Files:
Log file location: /var/log/messages
Example of NAS services starting successfully:
grep -A10 "Starting NAS services" /var/log/messages*
Output:
Dec 8 19:07:27 emcnas_i0 S95nas: Starting NAS services
Dec 8 19:07:46 emcnas_i0 EMCServer: nas_mcd: MCD will monitor CS IPMI connection.
Dec 8 19:08:46 emcnas_i0 EMCServer: nas_mcd: slot 0 missed 10 heartbeats from slot 1.
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Install Manager is running on slot 0, skipping slot 1 reboot
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Slot 0 becomes primary due to timeout
Dec 8 19:08:52 emcnas_i0 mcd_helper: All NBS devices are up
Dec 8 19:09:08 emcnas_i0 kernel: kjournald starting. Commit interval 5 seconds

Check the Data Mover Logs:
server_log server_2

Complete a Health Check:
/nas/bin/nas_checkup

Failing over a Control Station:
Failover:
/nas/sbin/./cs_standby -failover
Takeover:
/nasmcd/sbin/./cs_standby -takeover
Or reboot:
nas_cs -reboot

Determine the failover status of the Blades (Datamovers):
/nas/bin/nas_server -info -all

Initiate a manual failover of server_2 to the standby Datamover:
server_standby server_2 -activate mover

List the status of the Datamovers:
nas_server -list

Review the information for server_2:
nas_server -info server_2

Shutdown Datamover (blade):
/nas/bin/server_cpu server_2 -halt now

Power on the Datamover (blade):
/nasmcd/sbin/t2reset pwron -s 2

Restore the original primary Datamover:
server_standby server_2 -restore mover

VG Shutdown:
Shutdown the Control Stations and Data Movers:
/nasmcd/sbin/nas_halt -f now

List of Reason Codes:
0 – Reset (or unknown state)
1 – DOS boot phase, BIOS check, boot sequence
2 – SIB POST failures (that is, hardware failures)
3 – DART is loaded on Data Mover, DOS boot and execution of boot.bat, boot.cfg.
4 – DART is ready on Data Mover, running, and MAC threads started.
5 – DART is in contact with Control Station box monitor.
6 – Control Station is ready, but is not running NAS service.
7 – DART is in panic state.
9 – DART reboot is pending or in halted state.
10 – Primary Control Station reason code
11 – Secondary Control Station reason code
13 – DART panicked and completed memory dump (single Data Mover configurations only, same as code 7, but done with dump)
14 – This reason code can be set for the Blade for any of the following:
• Data Mover enclosure-ID was not found at boot time
• Data Mover’s local network interface MAC address is different from MAC address in configuration file
• Data Mover’s serial number is different from serial number in configuration file
• Data Mover was PXE booted with install configuration
• SLIC IO Module configuration mismatch (Foxglove systems)
15 – Data Mover is flashing firmware. DART is flashing BIOS and/or POST firmware. Data Mover cannot be reset.
17 – Data Mover Hardware fault detected
18 – DM Memory Test Failure. BIOS detected memory error
19 – DM POST Test Failure. General POST error
20 – DM POST NVRAM test failure. Invalid NVRAM content error (checksum, WWN, etc.)
21 – DM POST invalid peer Data Mover type
22 – DM POST invalid Data Mover part number
23 – DM POST Fibre Channel test failure. Error in blade Fibre connection (controller, Fibre discovery, etc.)
24 – DM POST network test failure. Error in Ethernet controller
25 – DM T2NET Error. Unable to get blade reason code due to management switch problems.

EMC VMAX – Removal Of A TDEV

You may no longer have use for a specific TDEV volume in a Storage Pool and want to free this space to create a new volume or expand an existing volume. These are the steps involved in deleting the TDEV and reclaiming the space in the Pool.

This example details how to delete a single TDEV volume in a Storage Group, for instance where a single TDEV was mapped and used as a dedicated VMware ESX boot volume with no other volumes present in the Storage Group. Scenario: an ESX host is being decommissioned and we are reclaiming the space it used in the Storage Pool.
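Before starting, it is worth confirming exactly which masking objects the device belongs to and how much space it consumes; a quick check using the same example device 0234 referenced in the steps below:

symaccess -sid xxx list assignment -dev 0234
symdev -sid xxx show 0234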

The steps involved are:

1. Delete the ESX Host Masking View
2. Remove ESX Boot volume (TDEV) from Storage group
3. Delete the ESX Host Storage Group
4. Mark the TDEV as not_ready
5. Unmap the TDEV from the FA ports
6. Unbind TDEV from Pool
7. Dissolve the Meta Volume (if applicable)
8. Delete the Device

To list all the TDEVs created in a given pool (here the ESX-BOOT pool):
symcfg -sid xxx list -tdev -gb -thin -pool ESX-BOOT
List all TDEVs in the system:
symcfg -sid xxx list -tdev

1. Delete the ESX Host Masking View
List all the views:
symaccess -sid xxx list view
Remove the ESX01_BOOT_MV Masking View:
symaccess -sid xxx delete view -name ESX01_BOOT_MV

2. Remove ESX Boot volume (TDEV) from Storage group
View the details of the storage group ESX01_BOOT_SG in order to gather the correct dev ID (for example 0234):
symaccess -sid xxx list -type STORAGE
symaccess -sid xxx show ESX01_BOOT_SG -type storage

Remove the dev 0234 from the Storage Group:
symaccess -sid xxx -name ESX01_BOOT_SG -type storage remove devs 0234

3. Delete the ESX Host Storage Group
symaccess -sid xxx -name ESX01_BOOT_SG -type storage delete

4. Mark the TDEV as not_ready
symdev -sid xxx not_ready 0234
If you need to change the status of all the devices in an SG:
symsg -sid xxx -sg SG-Name not_ready

5. Unmap the TDEV from FA ports
symconfigure -sid xxx -cmd "unmap dev 0234;" PREVIEW
symconfigure -sid xxx -cmd "unmap dev 0234;" COMMIT

If you need to unmap a range of devs:
symconfigure -sid xxx -cmd "unmap dev 0234:0236;" COMMIT

6. Unbind TDEV from Pool
symconfigure -sid xxx -cmd "unbind tdev 0234 from pool ESX-BOOT;" PREVIEW
symconfigure -sid xxx -cmd "unbind tdev 0234 from pool ESX-BOOT;" COMMIT

Check if the Unbind was successful:
symcfg -sid xxx list -tdev -gb -thin -pool ESX-BOOT
Once the TDEV is unbound, all pointers to the data pool are removed and the tracks that were consumed by the TDEV are marked as available space in the ESX-BOOT Pool.
If you need to unbind a range of devs:
symconfigure -sid xxx -cmd "unbind tdev 0234:0235 from pool ESX-BOOT;" COMMIT

7. Dissolve the Meta Volume (if applicable)
symconfigure -sid xxx -cmd "dissolve meta dev 0234;" PREVIEW
symconfigure -sid xxx -cmd "dissolve meta dev 0234;" COMMIT

8. Delete the Device
symconfigure -sid xxx -cmd "delete dev 0234;" PREVIEW
symconfigure -sid xxx -cmd "delete dev 0234;" COMMIT

If you need to delete a range of devs:
symconfigure -sid xxx -cmd "delete dev 0234:0235;" COMMIT
Confirm the delete was successful:
symcfg -sid xxx list -tdev

##################################################################
Following on from Burhan’s comment below
Burhan Halilov (@7400N) says:
“If you use -unmap to “symaccess delete view” in 1. you can skip steps 4 and 5
Also you can run “symsg -sid xxx -sg yyy unbind” before you delete the SG in 3. and save step 6.
The symconfigure bind/unbind are being depreciated after se7.6 and replaced with symdev/symsg/symdg bind/unbind”

Here is another, quicker way of achieving the removal of a TDEV as per the comment above, which also covers the scenario where the SG is associated with a FAST Policy:

1. Delete the ESX Host Masking View & Unmap the TDEV from FA ports
symaccess -sid xxx delete view -name MV-NAME -unmap -NOP

2. If the SG is associated with a FAST Policy it will need to be disassociated first:
symfast -sid xxx disassociate -sg SG-NAME -fp_name FP-NAME

3. Unbind the SG TDEV(s) from Pool
symsg -sid xxx -sg SG-NAME unbind -NOP

4. Remove the TDEVs from the storage group
symaccess -sid xxx -name SG-NAME -type storage remove devs XXXX:XXXX

5. Delete the Storage Group
symaccess -sid xxx -name SG-NAME -type storage delete -NOP

6. Dissolve the Meta Volume (if applicable)
symconfigure -sid xxx -cmd "dissolve meta dev XXXX:XXXX;" PREVIEW -NOP
symconfigure -sid xxx -cmd "dissolve meta dev XXXX:XXXX;" COMMIT -NOP
symcfg list -tdev

7. Delete the Device(s)
symconfigure -sid xxx -cmd "delete dev XXXX:XXXX;" PREVIEW -NOP
symconfigure -sid xxx -cmd "delete dev XXXX:XXXX;" COMMIT -NOP
symcfg list -tdev

VMAX3 TDEV Deletion Process:

Example TDEV '0234'

1. Delete the ESX Host Masking View & Unmap the TDEV from FA ports:
symaccess -sid xxx delete view -name MV-NAME -unmap -NOP

2. Remove the TDEVs from the storage group:
symaccess -sid xxx -name SG-NAME -type storage remove devs 0234

3. Delete the Storage Group:
symaccess -sid xxx -name SG-NAME -type storage delete -NOP

4. Free all Allocations associated with the device:
symdev -sid xxx FREE -ALL -DEVS 0234 -NOP
If you need to FREE ALL for a range of devs:
symdev -sid xxx FREE -ALL -DEVS 0234:0236 -NOP

5. Delete the Device(s)
symconfigure -sid xxx -cmd "delete dev 0234;" PREVIEW -NOP
symconfigure -sid xxx -cmd "delete dev 0234;" COMMIT -NOP
symcfg list -tdev

If you need to delete a range of TDEVs on VMAX3:
Gather details:
symaccess -sid XXX list view -name MV-NAME -detail
symaccess -sid XXX list -type storage -name SG-NAME
symaccess -sid XXX -type storage show SG-NAME

Delete:
symaccess -sid XXX -name SG-NAME -type storage remove devs 00CE:00D0
symdev -sid XXX FREE -ALL -DEVS 00CE:00D0 -NOP
symconfigure -sid XXX -cmd "delete dev 00CE:00D0;" COMMIT -NOP

EMC Symmetrix VMAX – Masking Views for VMware ESX Boot & Shared Cluster VMFS Volumes

This script is a result of having to create quite a large number of dedicated Masking Views for VMware ESX 5.x server boot volumes and Masking Views for shared VMFS datastore clusters. In this example I will create two dedicated ESX server MVs and one cluster Masking View consisting of the two ESX hosts sharing a VMFS datastore.

Each VMware ESX server boots from a SAN-attached boot volume presented from the VMAX array. As an example, the boot LUNs are 20GB devices configured from a dedicated RAID-5 (3+1) disk group:
symconfigure -sid xxx -cmd "create dev count=2, config=Raid-5, data_member_count=3, emulation=FBA, size=20GB, disk_group=1;" COMMIT
List the newly created devices:
symdev -sid xxx list -disk_group 1

If you wish to confirm that a device has not already been assigned to a host:
symaccess -sid xxx list assignment -dev xxx
Or if you need to check a series of devices:
symaccess -sid xxx list assignment -dev xxx:xxx

The symaccess command performs all Auto-provisioning functions. Using symaccess we will create a port group, an initiator group and a storage group for each VMware ESX host and then combine these newly created groups into a Masking View.

Port Group Configuration
1. Create the Port Group that will be used for the two hosts:
symaccess -sid xxx -name ESX-Cluster-PG -type port create
2. Add FA ports to the port group; in this example we add ports 8e:0 and 9e:0 from directors 8 and 9 on engines 4 and 5:
symaccess -sid xxx -name ESX-Cluster-PG -type port -dirport 8e:0,9e:0 add

Note on Port Groups: Where possible, to achieve the best performance and availability, hosts should be mapped to two or more Front-End ports across directors. If you have multiple engines, spread the ports across engines and directors (Rule 17, 20/40K). Please see post: EMC VMAX 10K Zoning with Cisco MDS Switches
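Before adding ports to the Port Group it can also be useful to confirm which host LUN addresses are free on the candidate director ports, reusing the earlier symcfg listing (the director/port numbers below are simply the ones assumed in this example):

symcfg -sid xxx -dir 8e -p 0 list -addr -avail
symcfg -sid xxx -dir 9e -p 0 list -addr -avail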

Check that the Host HBAs are logging in:
symaccess -sid xxx list logins -dirport 8e:0
symaccess -sid xxx list logins -dirport 9e:0

Host ESX01 Masking View Configuration
1. Create the Initiator Group for ESX01:
symaccess -sid xxx -name ESX01_ig -type initiator create -consistent_lun
2. Add the ESX Initiator HBA WWNs to the Initiator Group:
symaccess -sid xxx -name ESX01_ig -type initiator -wwn wwn_A add
symaccess -sid xxx -name ESX01_ig -type initiator -wwn wwn_B add

3. Create the Storage Group for the first ESX host Boot volume:
symaccess -sid xxx -name ESX01_sg -type storage create
4. Add the Symmetrix boot volume device to the Storage Group:
symaccess -sid xxx -name ESX01_sg -type storage add devs ####
5. Create the Masking View:
symaccess -sid xxx create view -name ESX01_mv -sg ESX01_sg -pg ESX-Cluster-PG -ig ESX01_ig

Host ESX02 Masking View Configuration
1. symaccess -sid xxx -name ESX02_ig -type initiator create -consistent_lun
2. symaccess -sid xxx -name ESX02_ig -type initiator -wwn wwn_A add
symaccess -sid xxx -name ESX02_ig -type initiator -wwn wwn_B add
3. symaccess -sid xxx -name ESX02_sg -type storage create
4. symaccess -sid xxx -name ESX02_sg -type storage add devs ####
5. symaccess -sid xxx create view -name ESX02_mv -sg ESX02_sg -pg ESX-Cluster-PG -ig ESX02_ig

Configuration of Cluster1 (ESX01,ESX02) with shared VMFS Datastore
1. We begin by cascading the cluster hosts into a single Initiator Group:
symaccess -sid xxx -name Cluster1_IG -type initiator create -consistent_lun
symaccess -sid xxx -name Cluster1_IG -type initiator -ig ESX01_ig add
symaccess -sid xxx -name Cluster1_IG -type initiator -ig ESX02_ig add

2. Create the Storage Group containing the shared Datastore(s):
symaccess -sid xxx -name Cluster1_SG -type storage create
3. Add the Symmetrix shared Datastore(s) device(s):
symaccess -sid xxx -name Cluster1_SG -type storage add devs ####(:####)
4. The Port Group contains the director Front-End ports zoned to the ESX hosts (this is the same PG created above, repeated here for completeness):
symaccess -sid xxx -name ESX-Cluster-PG -type port create
symaccess -sid xxx -name ESX-Cluster-PG -type port -dirport 8e:0,9e:0 add

5. Create the Masking View for the entire ESX cluster:
symaccess -sid xxx create view -name Cluster1_MV -sg Cluster1_SG -pg ESX-Cluster-PG -ig Cluster1_IG

View Configuration Details
To view the configuration of the groups PG,IG,SG and MV (use -v for more detail):
symaccess -sid xxx list -type storage|port|initiator -v
symaccess -sid xxx list -type storage|port|initiator -name group_name
symaccess -sid xxx show group_name -type storage|port|initiator
symaccess -sid xxx list view -v
symaccess -sid xxx list view -name view_name
symaccess -sid xxx list view -name view_name -detail
symaccess -sid xxx list assignment -dev DevID

Examples:
symaccess -sid xxx list -type port (Lists all existing port group names)
symaccess -sid xxx show ESX-Cluster-PG -type port
symaccess -sid xxx list -type port -dirport 8e:0 (Lists all port groups that a particular director port belongs to)
symaccess -sid xxx show -type initiator Cluster1_IG -detail
symaccess -sid xxx list logins -wwn xxxx (Verify that wwn xxx is logged in to the FAs)
symaccess -sid xxx list -type initiator -wwn xxxx (Verify that the HBA is a member of the correct Initiator Group)
symaccess -sid xxx show Cluster1_SG -type storage
symaccess -sid xxx show view Cluster1_MV
symaccess -sid xxx list assignment -dev XXXX (Shows the masking details of devices)

Verify BOOT|DATA LUN Assignment to FA Port(s) (LUN To PORT GROUP Assignment):
symaccess -sid xxx list assignment -devs ####
symaccess -sid xxx list assignment -devs ####:####

Backup Masking View to File
The masking information can then be backed up to a file using the following command:
symaccess -sid xxx backup -file backupFileName
The backup file can then be used to retrieve and restore group and masking information.

The SYMAPI database file can be found in the Solutions Enabler directory, for example “D:\Program Files\EMC\SYMAPI\db\symapi_db.bin”. If you wish to confirm the SE install location quickly, issue the following registry query command:
reg.exe query "HKEY_LOCAL_MACHINE\SOFTWARE\EMC\EMC Solutions Enabler" /v InstallPath

Note: On the VMAX Service Processor the masking information is automatically backed up every 24 hours by the Scheduler. The file (accessDB.bin) is saved to O:\EMC\S/N\public\user\backup.

Restore Masking View from File
To restore the masking information to Symmetrix enter the following command:
symaccess -sid xxx restore -file backupFileName