ViPR Controller – Configuring AD Authentication

The default built-in administrative accounts may not be granular enough to meet your business needs. If that is the case, adding an authentication provider such as Active Directory (which we highlight as part of this configuration) allows you to assign users or groups to specific roles.

The example configuration provided here was part of an Enterprise Hybrid Cloud solution.

Introducing VCE VxRail

A Quantum Leap in Hyper-Converged Appliances

EMC|VCE and VMware have added a Hyper-Converged Infrastructure Appliance (HCIA) offering named VxRail to the existing CI portfolio of Vblocks, VxBlocks and VxRacks (Blocks, Racks, Appliances).

 

Vxrail1
VxRail is built on a modular scale-out clustering architecture comprising appliances (the base building block), where each appliance can house up to 4x industry-standard x86 compute nodes (four independent ESXi hosts) inclusive of storage. The appliance has a small footprint, consuming only two rack units (2U) in height, so we will use the term ‘2U4N’ as an abbreviation for the rack space and compute node count within the appliance. With the initial launch (Q1 2016) VxRail will have the ability to scale up to 8x appliances, allowing for a total of 32x compute nodes (the Q2 VxRail release will double these scaling counts, allowing up to 16x appliances for a maximum total of 64 nodes in a VxRail cluster).


EMC VNX2 – Drive Layout (Guidelines & Considerations)

Applies only to VNX2 Systems.

Choices made in relation to the physical placement of drives within a VNX can have an impact on how the VNX performs. The intention here is to shed some light on how best to optimize the VNX by placing drives in their best physical locations within the array. The guidelines here deal with optimizing the back-end system resources. While these considerations and examples may help with choices around the physical location of drives, you should always work with an EMC-certified resource when completing such an exercise.

VNX2Layout1

Maximum Available Drive Slots
You cannot exceed the maximum slot count; doing so will result in drives becoming unavailable. Drive form factor and DAE type may be a consideration here to ensure you are not exceeding the stated maximum. Thus the maximum slot count dictates the maximum number of drives, and the overall capacity, a system can support.
VNX2Layout2

BALANCE
BALANCE is the key when designing the VNX drive layout:

Where possible the best practice is to EVENLY BALANCE each drive type across all available back-end system BUSES. This will result in the best utilization of system resources and help to avoid potential system bottlenecks. VNX2 has no restrictions around using or spanning drives across Bus 0 Enclosure 0.
VNX2Layout3

DRIVE PERFORMANCE
These are rule of thumb figures which can be used as a guideline for each type of drive used in a VNX2 system.
Throughput (IOPS) figures are based on small block random I/O workloads:
VNX2Layout4

Bandwidth (MB/s) figures are based on large block sequential I/O workloads:
VNX2Layout5

Recommended Order of Drive Population:

1. FAST Cache
2. FLASH VP
3. SAS 15K
4. SAS 10K
5. NL-SAS

Physical placement should always begin at Bus 0 Enclosure 0 (0_0), and the first drives to be placed are always the fastest drives, as per the above order. Start at the first available slot on each bus and evenly balance the available Flash drives across the first slots of the first enclosure of each bus, beginning with the FAST Cache drives. This ensures that Flash drives experience the lowest latency possible on the system and that the greatest ROI is achieved.

FAST CACHE
FAST Cache drives are configured as RAID-1 mirrors, and again it is good practice to balance the drives across all available back-end buses. The number of FAST Cache drives per back-end bus differs for each system, but ideally aim for no more than 8 drives per bus (including the spare); FAST Cache drives are extremely I/O intensive, and placing more than the recommended maximum per bus may cause I/O saturation on the bus.
VNX2Layout6

Note: Do not mix different drive capacity sizes for FAST Cache; use either all 100GB or all 200GB drive types.

Also for VNX2 systems there are two types of SSD available:
• ‘FAST Cache SSDs’ are single-level cell (SLC) Flash drives that are targeted for use with FAST Cache. These drives are available in 100GB and 200GB capacities and can be used both as FAST Cache and as TIER-1 drives in a storage pool.
• ‘FAST VP SSDs’ are enterprise multi-level cell (eMLC) drives that are targeted for use as TIER-1 drives in a storage pool (not supported as ‘FAST Cache’ drives). They are available in three capacities: 100GB, 200GB and 400GB.

More detailed post on FAST Cache: ‘EMC VNX – FAST Cache’

DRIVE FORM FACTOR
Drive form factor (2.5″ | 3.5″) is an important consideration. For example, if you have a 6-bus system with 6 DAEs (one DAE per bus) consisting of 2 x 2.5″ Derringer DAEs and 4 x 3.5″ Viper DAEs, the layout would be as follows:
VNX2Layout7

MCx HOT SPARING CONSIDERATIONS
Best practice is to ensure 1 spare is available per 30 drives of each drive type (for example, 90 drives of a given type would call for 3 hot spares). When there are drives of the same type in a VNX but with different speeds, form factors or capacities, these should ideally be placed on different buses.

Note: If the vault drives (0_0_0 – 0_0_3) are 300GB in size then no spare is required for them, but if drives larger than 300GB are used and user LUNs are present on the vault drives then a spare is required.

While all unconfigured drives in the VNX2 array will be available to be used as a hot spare, a specific set of rules is used to determine the most suitable drive to use as a replacement for a failed drive (a rough sketch of this selection order follows the list):

1. Drive Type: All suitable drive types are gathered.
2. Bus: MCx then checks which of the suitable drives are contained within the same bus as the failing drive.
3. Size: Following on from the bus query, MCx will then select a drive of the same size, or if none is available then a larger drive will be chosen.
4. Enclosure: Finally, MCx analyses the results of the previous steps to check whether the enclosure (DAE) that contains the failing drive has a suitable replacement within it.
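
As an illustration only (this is not EMC code, and the drive object properties used here are hypothetical), the selection order can be sketched in PowerShell as a series of successively applied filters:

function Select-SpareDrive {
    param($FailedDrive, $UnboundDrives)
    # 1. Drive Type: gather all unbound drives of a suitable (matching) type
    $candidates = $UnboundDrives | Where-Object { $_.Type -eq $FailedDrive.Type }
    # 2. Bus: prefer drives on the same back-end bus as the failing drive
    $sameBus = $candidates | Where-Object { $_.Bus -eq $FailedDrive.Bus }
    if ($sameBus) { $candidates = $sameBus }
    # 3. Size: prefer the same size, otherwise fall back to the smallest larger drive
    $sameSize = $candidates | Where-Object { $_.SizeGB -eq $FailedDrive.SizeGB }
    if ($sameSize) { $candidates = $sameSize }
    else { $candidates = $candidates | Where-Object { $_.SizeGB -gt $FailedDrive.SizeGB } | Sort-Object SizeGB }
    # 4. Enclosure: prefer a drive within the same DAE as the failing drive
    $sameEncl = $candidates | Where-Object { $_.Enclosure -eq $FailedDrive.Enclosure }
    if ($sameEncl) { $candidates = $sameEncl }
    $candidates | Select-Object -First 1
}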

See previous post for more info: ‘EMC VNX – MCx Hot Sparing’


Drive Layout EXAMPLE 1:

VNX 5600 (2 BUS)

VNX2Layout8
FAST Cache:
1 X Spare, 8 x FAST Cache Avail.
8 / 2 BUSES = 4 FAST Cache Drives Per BUS
1 x 2.5” SPARE Placed on 0_0_24
———————————
Fast VP:
1 X Spare, 20 x Flash VP Avail.
20 / 2 BUSES = 10 Per BUS
10 x 3.5” Placed on BUS 0 Encl 1
10 x 2.5” Placed on BUS 1 Encl 0
1 X 2.5” SPARE Placed on 1_0_24
———————————
VNX2Layout9


Drive Layout EXAMPLE 2:

VNX 5800 (6 BUS)
VNX2Layout10

VNX2Layout11


Drive Layout EXAMPLE 3:

VNX 8000 (16 BUS)

VNX2Layout12

VNX2Layout12a

Useful Reference:
EMC VNX2 Unified Best Practices for Performance

EMC VMAX – Fully Pre-allocate TDEV

By fully pre-allocating a TDEV, all the tracks associated with the device are reserved; this may be useful for mission-critical applications or for avoiding any write-miss penalties.

Example SYMCLI:
Single TDEV example:
symconfigure -sid xxx -cmd "start allocate on tdev 0c66 end_cyl=last_cyl allocate_type=persistent;" commit
CliPreAll1

Range of TDEVs:
symconfigure -sid xxx -cmd "start allocate on tdev 0c6e:1116 end_cyl=last_cyl allocate_type=persistent;" commit
CliPreAll3

CliPreAll2
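
To verify the allocation afterwards, the device can be examined from SYMCLI; as a quick check (using the single device from the example above, and assuming the allocated and persistent track counts are reported in the device output):

symdev -sid xxx show 0c66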

Example UNISPHERE:
From the Unisphere GUI navigate to Storage > Volumes, right-click the device you wish to modify and select ‘Start allocate’.

UniPreAll1

UniPreAll3

UniPreAll2

UniPreAll4

EMC VMAX3 – Adding Gatekeeper RDM Volumes To VMware MGMT VM

This post outlines how to create the VMAX masking view for Gatekeeper RDM volumes and provides a script to automate adding the RDM disks to a VMware management (MGMT) VM.

First some notes on Gatekeeper volumes:
Solutions Enabler (CLI) and Unisphere (GUI) are the main tools used to manage a VMAX array. Gatekeeper volumes are required in order to carry the commands issued from both the CLI and the GUI, generating the low-level commands that are sent to the VMAX array to complete the required instruction, such as IG, SG, PG, MV or volume creation. It is good practice to use dedicated Gatekeeper devices and avoid using any devices which contain user or application data, as that data may be impacted by the I/O requirement of the instruction command. For example, if the device used as a Gatekeeper is also servicing application I/O, and the VMAX is executing a command which takes some time, the resulting latency may cause the application to encounter poor performance. These are the reasons why EMC strongly recommends creating and mapping dedicated devices as Gatekeepers.

VMAX3: Creating the RDM Volumes and Associated Masking View

This is an example Masking View for a two node ESXi cluster on which the VMAX management virtual machine shall reside:

1. Create a Port Group with the VMAX FA ports that the ESXi hosts have been zoned to:
symaccess -sid 123 -name MGMT_VM_PG -type port create
symaccess -sid 123 -name MGMT_VM_PG -type port -dirport 1d:24,2d:31,3D:28,4d:27 add

2. Create the Initiator Group containing the ESXi hosts WWNS:
symaccess -sid 123 -name MGMT_VM_IG -type initiator create -consistent_lun
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF8 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF9 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4C add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4D add

3. Create the Storage Group for the Gatekeeper RDM Volumes:
symsg -sid 123 create MGMT_VM_SG -slo optimized -srp SRP_1
Listing the SRP:
symcfg list -srp

4. Create the Gatekeeper volumes (10 Gatekeeper volumes in this example) and add to the MGMT_VM_SG:
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" preview -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" prepare -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" commit -nop

5. Create the Masking View:
symaccess -sid 123 create view -name MGMT_VM_MV -sg MGMT_VM_SG -pg MGMT_VM_PG -ig MGMT_VM_IG

View Configuration Details

Confirm that the HOSTS are logged into the correct VMAX ports:
symaccess -sid 123 list logins -wwn 21000024ff5CXXF8
symaccess -sid 123 list logins -wwn 21000024ff5CXXF9
symaccess -sid 123 list logins -wwn 21000024ff55XX4C
symaccess -sid 123 list logins -wwn 21000024ff55XX4D

Verify that the HBA is a member of the correct Initiator Group:
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF8
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF9
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4C
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4D

Storage Group details:
symaccess -sid 123 list -type storage -name MGMT_VM_SG -v
symaccess -sid 123 show MGMT_VM_SG -type storage

Port Group details:
symaccess -sid 123 list -type port -name MGMT_VM_PG -v
symaccess -sid 123 show MGMT_VM_PG -type port

Initiator Group details:
symaccess -sid 123 list -type initiator -name MGMT_VM_IG -v
symaccess -sid 123 show MGMT_VM_IG -type initiator

Masking View details:
symaccess -sid 123 list view -name MGMT_VM_MV
symaccess -sid 123 list view -name MGMT_VM_MV -detail

Gatekeeper details:
symdev -sid 123 list -cap 3 -captype cyl
symaccess -sid 123 list assignment -dev 049:052

If you need to remove the devs from the SG:
symaccess -sid 123 -name MGMT_VM_SG -type storage remove devs 049:052


####################################################################
Script to Automate Adding RDM Disks:

PowerCLI Script: Automate Adding RDM Disks
Here is a script which scans for the host LUN ID and then assigns the corresponding ‘ConsoleDeviceName’ to the $LUN_# variable. This greatly simplifies the process of adding large quantities of RDM disks.

There are 4 parameters used in the script. The following 3 shall be prompted for:
“Your-ESXi-Hostname” $VMhostname
“Your-VM-Name” $VM
“Your-VMFS-DS-Name” $Datastore

Please edit the runtime name as required; the script default is:
"vmhba0:C0:T0:L#"

The following example script will automatically create RDM disks for the ACLX device (T0:L0) and 10 Gatekeepers (T0:L1–L10) on a virtual machine and place the pointer files in a VMFS datastore based on the parameters provided.

#####################################################################

Write-Host "Please edit the runtime name in the script if required before proceeding, the default is:" -ForegroundColor Red
Write-Host "vmhba0:C0:T0:L#" -ForegroundColor Green

Write-Host "Please enter the ESXi/vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline
$VMHost = Read-Host

Write-Host "Please enter the ESXi/vCenter Username:" -ForegroundColor Yellow -NoNewline
$User = Read-Host

Write-Host "Please enter the ESXi/vCenter Password:" -ForegroundColor Yellow -NoNewline
$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

##########################################

$VMhostname = '*'

ForEach ($VMhostname in (Get-VMHost -Name $VMhostname) | sort)
{

Write-Host $VMhostname

}

Write-Host "Please enter the ESXi Hostname where your target VM resides:" -ForegroundColor Yellow -NoNewline
$VMhostname = Read-Host

######################################

$Datastore = '*'

ForEach ($Datastore in (Get-Datastore -Name $Datastore) | sort)
{

Write-Host $Datastore

}

Write-Host "From the list provided - Please enter the VMFS datastore where the RDM pointer files will reside:" -ForegroundColor Yellow -NoNewline
$Datastore = Read-Host

######################################

$VM = '*'

ForEach ($VM in (Get-VM -Name $VM) | sort)
{
Write-Host $VM
}

Write-Host "From the list provided - Please enter the VM Name where the RDM volumes shall be created on:" -ForegroundColor Yellow -NoNewline
$VM = Read-Host

##############
Write-Host "ESXi Hostname you have chosen: " -ForegroundColor Yellow
Write-Host "$VMhostname" -ForegroundColor Green
Write-Host "VMFS you have chosen: " -ForegroundColor Yellow
Write-Host "$Datastore" -ForegroundColor Green
Write-Host "Virtual Machine you have chosen: " -ForegroundColor Yellow
Write-Host "$VM" -ForegroundColor Green

################
## ACLX T0:L0 ##
################
$LUN_0 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L0”} | Select ConsoleDeviceName,runtimename
$LUN_0 = $LUN_0 | Select ConsoleDeviceName
$LUN_0 = $LUN_0 -replace “@{ConsoleDeviceName=”, “”
$LUN_0 = $LUN_0 -replace “}”, “”
$LUN_0
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_0 -DataStore $Datastore

#####################
## Gatekeepers x10 ##
#####################
$LUN_1 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L1”} | Select ConsoleDeviceName,runtimename
$LUN_1 = $LUN_1 | Select ConsoleDeviceName
$LUN_1 = $LUN_1 -replace “@{ConsoleDeviceName=”, “”
$LUN_1 = $LUN_1 -replace “}”, “”
$LUN_1
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_1 -DataStore $Datastore

$LUN_2 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L2”} | Select ConsoleDeviceName,runtimename
$LUN_2 = $LUN_2 | Select ConsoleDeviceName
$LUN_2 = $LUN_2 -replace “@{ConsoleDeviceName=”, “”
$LUN_2 = $LUN_2 -replace “}”, “”
$LUN_2
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_2 -DataStore $Datastore

$LUN_3 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L3”} | Select ConsoleDeviceName,runtimename
$LUN_3 = $LUN_3 | Select ConsoleDeviceName
$LUN_3 = $LUN_3 -replace “@{ConsoleDeviceName=”, “”
$LUN_3 = $LUN_3 -replace “}”, “”
$LUN_3
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_3 -DataStore $Datastore

$LUN_4 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L4”} | Select ConsoleDeviceName,runtimename
$LUN_4 = $LUN_4 | Select ConsoleDeviceName
$LUN_4 = $LUN_4 -replace “@{ConsoleDeviceName=”, “”
$LUN_4 = $LUN_4 -replace “}”, “”
$LUN_4
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_4 -DataStore $Datastore

$LUN_5 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L5”} | Select ConsoleDeviceName,runtimename
$LUN_5 = $LUN_5 | Select ConsoleDeviceName
$LUN_5 = $LUN_5 -replace “@{ConsoleDeviceName=”, “”
$LUN_5 = $LUN_5 -replace “}”, “”
$LUN_5
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_5 -DataStore $Datastore

$LUN_6 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L6”} | Select ConsoleDeviceName,runtimename
$LUN_6 = $LUN_6 | Select ConsoleDeviceName
$LUN_6 = $LUN_6 -replace “@{ConsoleDeviceName=”, “”
$LUN_6 = $LUN_6 -replace “}”, “”
$LUN_6
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_6 -DataStore $Datastore

$LUN_7 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L7”} | Select ConsoleDeviceName,runtimename
$LUN_7 = $LUN_7 | Select ConsoleDeviceName
$LUN_7 = $LUN_7 -replace “@{ConsoleDeviceName=”, “”
$LUN_7 = $LUN_7 -replace “}”, “”
$LUN_7
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_7 -DataStore $Datastore

$LUN_8 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L8”} | Select ConsoleDeviceName,runtimename
$LUN_8 = $LUN_8 | Select ConsoleDeviceName
$LUN_8 = $LUN_8 -replace “@{ConsoleDeviceName=”, “”
$LUN_8 = $LUN_8 -replace “}”, “”
$LUN_8
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_8 -DataStore $Datastore

$LUN_9 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L9”} | Select ConsoleDeviceName,runtimename
$LUN_9 = $LUN_9 | Select ConsoleDeviceName
$LUN_9 = $LUN_9 -replace “@{ConsoleDeviceName=”, “”
$LUN_9 = $LUN_9 -replace “}”, “”
$LUN_9
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_9 -DataStore $Datastore

$LUN_10 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like “vmhba0:C0:T0:L10”} | Select ConsoleDeviceName,runtimename
$LUN_10 = $LUN_10 | Select ConsoleDeviceName
$LUN_10 = $LUN_10 -replace “@{ConsoleDeviceName=”, “”
$LUN_10 = $LUN_10 -replace “}”, “”
$LUN_10
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_10 -DataStore $Datastore

##############
### VERIFY ###
##############
##Finding RDMs Using PowerCLI:##
# Detailed #
# Get-VM | Get-HardDisk -DiskType "RawPhysical" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName,CapacityGB | fl
# Brief #
# Get-ScsiLun -VMHost $VMhostname -LunType disk
# NAA #
# Get-ScsiLun -VMHost $VMhostname -LunType disk | select CanonicalName

### Get IP Address for ViClient to check GUI ###
# Get-VMHost -Name $VMhostname | Get-VMHostNetworkAdapter

EMC VNX – SMI-S Configuration & Discovery

The following are some notes on configuring SMI-S to allow communication with the VNX Storage Processors; SMI-S can then be leveraged by, for example, VCE Vision or ViPR to configure and report on the VNX array. Before proceeding, ensure you have both VNX Storage Processor A and B IP addresses to hand; the SMI-S host will use these IPs for out-of-band communication over IP with the VNX. The EMC SMI-S Provider is included as part of the ‘Solutions Enabler with SMI-S’ install package, which can be downloaded from ‘support.emc.com’.

Begin by installing the SMI-S Provider, ensuring you select the ‘Array provider’ (Windows does not require the Host provider) and choose the option for SMISPROVIDER_COMPONENT:
VisionVMAX1

From the Windows services.msc console, check that both the ‘ECOM’ and ‘storsrvd’ services are set to automatic and in a running state:
VisionVMAX2
Check that EMC storsrvd daemon is installed and running from a Windows cmd prompt using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list

VisionVMAX3
Or using the SC (service control) command you can query/start/config the ECOM and storsrvd services:
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start=auto
sc config storsrvd start=auto

VisionVMAX4

Run netstat -a and check that the host is listening on ports 5988 and 5989:
VisionVMAX5
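
To narrow the output down to just these ports, a quick filter from the same command prompt also works:

netstat -an | findstr "5988 5989"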

UPDATE ENVIRONMENT VARIABLES:
Add the SYMCLI and ECOM installation directory paths (for example DRIVE:\Program Files\EMC\SYMCLI\bin and DRIVE:\Program Files\EMC\ECIM\ECOM\BIN) to the list of system paths:
VisionVMAX2a
Or use the windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:

setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"

If you experience issues such as the ECOM service failing to start, it is worth rebooting the management server at this stage.

ECOM SERVER: ADD A NEW SMI-S Provider User
Provided all the validations are successful, proceed to log in to the ECOM server and create the user you would like to use for (Vision/ViPR) connectivity:
Open https://localhost:5989/ecomconfig
Login with default credentials of: admin / #1Password
VisionVMAX6a

Select the option to add a new user and create the Vision user with administrator role and scope local:
Visionvmax7ab
VisionVMAX8ab

Windows Firewall
If the Windows firewall is enabled then rules will need to be created to allow the ECOM ports (TCP 5988 and 5989) and the SLP port (UDP 427). For example, using the Windows command-line tool netsh to create rules for SLP and ECOM:

netsh advfirewall firewall add rule name="SLP" dir=in protocol=UDP localport=427 action=allow

netsh advfirewall firewall add rule name="ECOM" dir=in protocol=TCP localport=5988-5989 action=allow

netsh advfirewall firewall show rule name=SLP
netsh advfirewall firewall show rule name=ECOM

VisionVMAX11

Discover and Add the VNX using TestSMIProvider:
Confirm communication to the VNX from the SMI-S host by running the naviseccli getagent command against both VNX Storage Processors from the Element Manager command prompt:
naviseccli -h SPA-IP getagent
choose option 2 if prompted
naviseccli -h SPB-IP getagent
choose option 2 if prompted

Or using credentials:
naviseccli -h SPIP -user sysadmin -password sysadmin -scope 0 getagent

Open a Windows command prompt session as an admin user; if the environment variable has not been set then you will need to ‘cd’ to D:\Program Files\EMC\SYMCLI\bin:
symcfg auth add -host SPA_IP -username sysuser -password syspw
symcfg auth add -host SPB_IP -username sysuser -password syspw

Create a text file, for example called SPIP.txt that contains the IP addresses for SP A&B. Then run the following commands to discover and list the VNX:
symcfg discover -clariion -file D:\spip.txt
symcfg list -clariion

Again, from a Windows command prompt session as an admin user (if the environment variable has not been set then you will need to ‘cd’ to C:\Program Files\EMC\ECIM\ECOM\BIN), type TestSMIProvider.exe at the prompt. From here choose all the defaults except for the Vision user and password created through the ECOM console:
VisionVMAX9

At the prompt type ‘addsys’ to add the VNX array to the SMI-S provider and confirm connectivity between the VNX array and the SMI-S host:


(localhost:5988) ? addsys
Add System {y|n} [n]: y

ArrayType (1=Clar, 2=Symm) [1]:
One or more IP address or Hostname or Array ID

Elements for Addresses
IP address or hostname or array id 0 (blank to quit): SPA_IP
IP address or hostname or array id 1 (blank to quit): SPB_IP
IP address or hostname or array id 2 (blank to quit):
Address types corresponding to addresses specified above.
(1=URL, 2=IP/Nodename, 3=Array ID)
Address Type (0) [default=2]:
Address Type (1) [default=2]:
User [null]: sysuser
Password [null]: syspw
++++ EMCAddSystem ++++
OUTPUT : 0
Legend:0=Success, 1=Not Supported, 2=Unknown, 3=Timeout, 4=Failed
5=Invalid Parameter
4096=Job Queued, 4097=Size Not Supported
Note: Not all above values apply to all methods – see MOF for the method.

System : //SPA_IP/root/emc:Clar_StorageSystem.CreationClassName=”Clar_Stora
geSystem”,Name=”CLARiiON+CKM00100000123″

In 12.468753 Seconds

Please press enter key to continue…

At the prompt type ‘dv’ to confirm connectivity between the VNX and the SMI-S host:
VisionVMAX10

For any troubleshooting please refer to: ‘C:\Program Files\EMC\ECIM\ECOM\log’

Note: When configuring VCE Vision please ensure you use the ‘SMI-S Host’ IP address for the VNX Block entries in the Vblock.xml configuration file; the NAS portion of the VNX uses the Control Station IP addresses for communication, which have ECOM configured by default.

How to remove VNX systems using SMI-S “remsys” command:

  1. Log into the SMI-S Provider server
  2. Open a command prompt (cmd).
  3. Change (cd) to C:\Program Files\EMC\ECIM\ECOM\bin
  4. Run TestSmiProvider.exe
  5. Enter ein
  6. Enter symm_StorageSystem
  7. Copy the line that specifies the VNX system you want to remove:
    Clar_StorageSystem.CreationClassName=”Clar_StorageSystem”,Name=”CLARiiON+CKM001xxxxxxxx”
  8. Enter remsys
  9. Enter Y
  10. Paste the line specifying the VNX system you want to remove that you copied in the preceding step.
  11. Enter Y
  12. Run a dv command to confirm the VNX system has been removed.

Built with EMC SMI-S Provider: V4.6.2
Namespace: root/emc
repeat count: 1
(localhost:5988) ? remsys
Remove System {y|n} [n]: y
System’s ObjectPath[null]: Clar_StorageSystem.CreationClassName=”Clar_StorageSys
tem”,Name=”CLARiiON+CKM001xxxxxxxx
About to delete system Clar_StorageSystem.CreationClassName=”Clar_StorageSystem”
,Name=”CLARiiON+CKM001xxxxxxxx
Are you sure {y|n} [n]: y

 

EMC VMAX3 – CLI Cheat Sheet

Guest post by the VMAX Guru – Paul Martin @rawstorage

VMAX3 CLI Cheat Sheet

Disclaimer: this is not a comprehensive how-to, just a toe in the ocean of VMAX3; there is always more, and there is always why. The information here is not a substitute for the product guides, which have been consolidated into a single downloadable PDF documentation set; please download and refer to the documentation set for full feature descriptions.

https://support.emc.com/docu59402_Solutions-Enabler-8.0.3-Documentation-Set.pdf
Also see the new features paper for more details on VMAX3 and features in general

https://www.emc.com/collateral/technical-documentation/h13578-vmax3-family-new-features-wp.pdf

FAST with SLO

One of the major changes with V3 is the way we provision storage. FAST has been enhanced to work on a more granular level (128KB track level) and we have abstracted a lot of the internals so that the end user need not be so concerned about the mechanics of the array; they can simply provision capacity and set a performance expectation which the array will work to achieve.
In VMAX3 FAST is always on and the majority of the configuration is pre-configured; the available SLOs are dictated by the disks available in the array, and Storage Resource Pools are defined in the bin file.
Provisioning storage on a VMAX3 is easier than on previous Symm/VMAX arrays: we are no longer required to create meta devices to support larger devices, and the SLO model makes provisioning intuitive and easy. From the command line it’s pretty much a three-step process:

1. Create your storage group and assign your SLO and workload (optional); if no SLO or workload is specified, FAST will still manage everything but your SLO will be Optimized. The storage group can represent your application’s devices as a whole and can be used in SRDF and TimeFinder, meaning that if you design storage with application == storage group, snapshot/SRDF design becomes simpler later on too. VMAX3 supports 64K storage groups, so there is no reason not to configure one per app.
symsg -sid 007 create myapp_sg -slo gold -workload oltp
2. Create and add your devices; here I am creating 5 x 2048 GB devices and adding them to my storage group. Note I can just create 2048 GB devices directly; no meta is created. At present we can create devices up to 16TB, soon to be increased further.
symconfigure -sid 007 -cmd "create dev count=5, config=tdev, emulation=fba, size=2048 GB, sg=myapp_sg;" preview
3. Present to the host via a masking view; no change from previous VMAX generations here.
symaccess -sid 007 create view -name myapp_mv -sg myapp_sg -pg myapp_pg -ig myapp_ig

Here I will highlight a few of the key commands to gather information about the configuration and interaction with the SRP and SLO.
NOTE: Monitoring and alerting of FAST SLO is built into Unisphere for VMAX. SLO compliance is reported at every level when looking at storage group components in Unisphere.

Viewing SRP Configured On The Array

Most VMAX3 arrays will only have a single SRP; however, it is possible to have multiple. If you are using FAST.X or ProtectPoint you may have an additional SRP in the config. The following command shows you what is available:
symcfg list -srp

Note the default SRP is set to be usable by RDFA DSE; this is normal. There is no need to configure a separate pool for DSE in VMAX3, as we can reserve and cap some space from the default SRP for this purpose.

VMAX3-CLI1
VMAX3-CLI2

Viewing the Available SLO

symcfg list -slo
VMAX3-CLI3

To get a more detailed look at the SLOs and the workloads that can be associated with storage groups, you can run the following command. The output shows the approximate response time for each.

symcfg list -slo -detail -by_resptime -all

VMAX3-CLI4

SRP Capacity Consumption

In order to get an idea of how your storage is being consumed from the command line you can run the following command:
symcfg list -srp -demand -type slo
This will show you how your SRP is being consumed by each of the SLOs; it will also list how much is consumed by DSE and Snapshot. Remember this capacity all comes from your SRP, so it’s worth keeping an eye on.
VMAX3-CLI5

Listing SLO associations by Storage Group

The previous command gives us a good idea at a high level, but if we want to see from a storage group level which storage groups are associated with each SLO we have a command for that too:
symsg list -by_SLO -detail
This shows each storage group and whether or not it is associated with an SLO; we also get some detail about the number of devices, but we don’t see much regarding the capacity.
VMAX3-CLI6

Additionally, you can see consumption at an individual device level for the application storage group.
VMAX3-CLI7

You can see the full breakdown of your SRP, including drive pools and which SLOs you have available, as well as TDAT information. The output below shows all the thin devices (TDEVs) bound to the SRP and how much space they are each consuming.
VMAX3-CLI8
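
If you want to pull this information from the command line yourself, something along the lines of the following should work on Solutions Enabler 8.x (treat the exact flags as an assumption and check symcfg -h on your install):

symcfg list -srp -detail
symcfg list -tdev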

Changing SLO On Existing Storage Groups

Changing the Service Level Objective to Platinum and the Workload to OLTP_REP for a storage group named test:
symsg -sg test -sid 123 set -slo Platinum -wl OLTP_REP

Solutions Enabler 8.X also allows for moving devices between groups non-disruptively
• Moving devices between child storage groups of a parent storage group when the masking view uses the parent group.
• Moving devices between storage groups when a view is on each storage group and both the initiator group (IG) and the port group (PG) elements are common to the views (initiators and ports from the source group must be present in the target).
• Moving devices from a storage group with no masking view or service level to one in a masking view. This is useful as you can now create devices and automatically add to storage group from CLI, so a staging group may exist. Command is:
symsg -sid 123 -sg staging_sg move dev 345 gold_sg

VMAX3-CLI9

SnapVX – Space-Efficient Targetless Snapshots
I’m not going to go into the full details of SnapVX and what makes it revolutionary in the VMAX3; we have a very good technote that already covers this in detail. Needless to say, taking snapshots on VMAX3 is quicker, more efficient and easier than it has been on any previous generation. See the technote for full details.
Like most features in the VMAX, to access the functionality simply put the word sym in front of the feature name: SnapVX is controlled with the symsnapvx command set. Really the only command you should need is symsnapvx -h; this will get you the full set of options. I’ll highlight a few of the main commands here.

Creating Snapshots

SnapVX is simplest when your storage has been designed with an application per storage group. You can still use device groups or files if you want, but VMAX3 supports 64K storage groups; that is enough for one per application in most environments, and it means only managing a single entity for each application for provisioning as well as local and remote replication. You can snap multiple applications together using a cascaded storage group containing all of the child storage groups for each application. SnapVX snapshots are consistent by design, so there is no need to specify any additional flags to obtain a point-in-time image of a live system.
To create a snapshot, simply grab the storage group name which contains all the devices for your application and execute the establish command; the example below will create a snapshot named hourlysnapshot and will automatically terminate it 24 hours after it was created:
symsnapvx -sg test -snapshotname hourlysnapshot establish -ttl -delta 1 -nop
You could run the command above in a cron job or batch file every hour, and SnapVX will create a new generation each time (the most recent snapshot is always gen 0).
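
As a rough illustration only (the SYMCLI install path and the explicit -sid value are assumptions to adapt for your environment), an hourly crontab entry on a UNIX management host might look like:

0 * * * * /usr/symcli/bin/symsnapvx -sid 007 -sg test -snapshotname hourlysnapshot establish -ttl -delta 1 -nop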

Listing SnapVX Snapshots And Capacity Consumed

In order to see which storage groups are consuming the most space we can run the following command:
symcfg list -srp -demand -type sg
The output lists the storage groups, showing their subscribed capacity (how much potential space they can consume) as well as their actual allocated capacity. A particularly useful output here is the Snapshot Allocated (GB) column; if you are in a bind for space you can quickly identify which storage group has consumed the most snapshot space and terminate some snapshots to return space to the SRP.

Note: your storage group will only show up in this command output if it is FAST managed. Although everything in VMAX3 is under FAST control, it is possible to create storage groups that are not FAST managed for various use cases. A storage group is FAST managed if you explicitly specify the SRP and/or assign an SLO. Shown below, SourceSG1 has a large amount of snapshot allocated storage.
VMAX3-CLI10
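
To reclaim space, a specific snapshot generation can be terminated; as a sketch only (the storage group, snapshot name and generation number here are placeholders, to be adapted from the list output below):

symsnapvx -sg SourceSG1 -snapshotname hourlysnapshot terminate -generation 5 -nop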

To find out more about your snaps you can run the following cmd:
symsnapvx -sg groupname list -detail

VMAX3-CLI11
If I want to link off and access a snap, I can use a storage group which I have pre-created with the same number of devices as the source (the target devices can be the same size or larger).
VMAX3-CLI12
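
A link command along these lines should do the job (a sketch; the pre-created target storage group name test_lnk is an assumption):

symsnapvx -sg test -lnsg test_lnk -snapshotname hourlysnapshot link -nop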

For a deeper dive and more on the internals please see the technote on EMC.com:
https://www.emc.com/collateral/technical-documentation/h13697-emc-vmax3-local-replication.pdf

Useful Commands For Everyday Use:

This information is at your fingertips with symcli -v

SYMCLI BASE Commands:

symapierr – Used to translate SYMAPI error code numbers into SYMAPI error messages.
symaudit – List records from a Symmetrix audit log file.
symbcv – Perform BCV support operations on Symmetrix BCV devices.
symcfg – Discover or display Symmetrix configuration information. Refresh the host’s Symmetrix database file or remove Symmetrix info from the file. Can also be used to view or release a ‘hanging’ Symmetrix exclusive lock.
symchg – Monitor changes to Symmetrix devices or to logical objects stored on Symmetrix devices.
symcli – Provides the version number and a brief description of the commands included in the Symmetrix Command Line Interface.
symdev – Perform operations on a device given the device’s Symmetrix name. Can also be used to view Symmetrix device locks.
symdg – Perform operations on a device group (dg).
symdisk – Display information about the disks within a Symmetrix.
symdrv – List DRV devices on a Symmetrix.
symevent – Monitor or inspect the history of events within a Symmetrix.
symhost – Display host configuration information and performance statistics.
syminq – Issues a SCSI Inquiry command on one or all devices.
symipsec – Administers IPSec encryption on Gigabit Ethernet connections.
symlabel – Perform label support operations on a Symmetrix device.
symlmf – Registers SYMAPI license keys.
sympd – Perform operations on a device given the device’s physical name.
symsg – Perform operations on a storage device group (sg).
symstat – Display statistics information about a Symmetrix, a Director, a device group, or a device.
symreturn – Used for supplying return codes in pre-action and post-action script files.

SYMCLI CONTROL Commands:

symaccess – Administer Symmetrix Access Logix. (Mapping and Masking of devices)
symacl – Administer Symmetrix access control information.
symauth – Administer Symmetrix user authorization information.
symcg – Perform operations on a composite group (cg).
symchksum – Administer checksum checks when an Oracle database writes data files on Symmetrix devices.
symclone – Perform Clone control operations on a device group or on a device within the device group.
symconfigure – Perform modifications on the Symmetrix configuration.
symconnect – Setup or Modify Symmetrix Connection Security functionality.
symfast – Administer Symmetrix FAST (Fully Automated Storage Tiering) policies, associations, and the FAST Controller.
symmask – Setup or Modify Symmetrix Device Masking functionality.(Older Symmetrix Pre 5977)
symmaskdb – Backup, Restore, Initialize or Show the contents of the device masking database. (Older Symmetrix Pre 5977)
symmigrate – Migrates the physical disk space associated with a Symmetrix device to a different data protection scheme, or to disks with different performance characteristics. (VMAX 10K/20K/40K)
symmir – Perform BCV control operations on a device group or on a device within the device group.
symoptmz – Perform Symmetrix Optimizer control operations.
symqos – Perform Quality of Service operations on Symmetrix Devices.
symrcopy – Perform Symmetrix Rcopy control operations on devices in a device file.
symrdf – Perform RDF control operations on a device group or on a device within the device group.
symrecover – Perform automated SRDF session recovery operations.
symreplicate – Perform automated, consistent replication of data given a pre-configured RDF/Timefinder setup.
symsan – List ports and LUNs visible on the SAN
symsnap – Perform Symmetrix Snap control operations on a device group or on devices in a device file.
symsnapvx – Perform Symmetrix SnapVX control operations.
symstar – Perform SRDF STAR management operations.
symtier – Create and manage storage tiers within a Symmetrix.
symtw – Manage time windows for the Optimizer, FAST and FAST VP controller within a Symmetrix. (VMAX 10K/20K/40K)

SYMCLI SRM (Mapping) Commands:

symhostfs – Display information about a host File, Directory, or host File System.
symioctl – Send IO control commands to a specified application.
symlv – Display information about a volume in Logical Volume Group (vg).
sympart – Display partition information about a host device.
symrdb – Display information about a third-party Relational Database.
symrslv – Display detailed Logical to Physical mapping information about a logical object stored on Symmetrix devices.
symvg – Display information about a Logical Volume Group (vg).