Cisco UCS – Determining ESXi FNIC & ENIC via PowerCLI

The following script allows the user to retrieve a listing of the Network (ENIC) & Storage (FNIC) drivers installed on Cisco UCS blades at a per vSphere cluster level. You may download the ‘Cisco_FNIC_ENIC.ps1’ script here: Cisco_FNIC_ENIC.ps1 (remove the .doc extension).

The script will begin by prompting you to enter the vCenter IP address, username and password. A list of all the available clusters residing in vCenter will then be returned, followed by a prompt to enter the vSphere cluster name. From the cluster specified, the script will retrieve a per-ESXi-host listing of ENIC & FNIC driver versions. The script will first prompt the user to enable SSH on all the hosts in the cluster:


Once you have completed any tasks on the hosts that required SSH access, you may then return to the running script and enter ‘y’ to disable SSH again on all the hosts in the specified cluster:
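
While SSH is enabled you can, for example, verify the loaded driver versions directly on a host over SSH (commands as per VMware KB 1027206 referenced below; the module names assume the standard Cisco enic/fnic drivers):

vmkload_mod -s enic | grep Version
vmkload_mod -s fnic | grep Version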


PowerCLI Script:

#######################################
# Confirm CISCO FNIC & ENIC Drivers
# Date: 2016-07-01
# Created by: David Ring
#######################################

###### vCenter Connectivity Details ######

Write-Host "Please enter the vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline

$VMHost = Read-Host

Write-Host "Please enter the vCenter Username:" -ForegroundColor Yellow -NoNewline

$User = Read-Host

Write-Host "Please enter the vCenter Password:" -ForegroundColor Yellow -NoNewline

$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

###### Please enter the Cluster to check CISCO Versions #######

Write-Host "Clusters Associated with this vCenter:" -ForegroundColor Green

$VMcluster = '*'

ForEach ($VMcluster in (Get-Cluster -name $VMcluster)| sort)

{
Write-Host $VMcluster
}

Write-Host "Please enter the Cluster to lookup CISCO FNIC & ENIC Drivers:" -ForegroundColor Yellow -NoNewline

$VMcluster = Read-Host

###### Enabling SSH ######

Write-Host "Do you need to Enable SSH on the Cluster ESXi Hosts? " -ForegroundColor Yellow -NoNewline

Write-Host " Y/N:" -ForegroundColor Red -NoNewline

$SSHEnable = Read-Host

if ($SSHEnable -eq "y") {

Write-Host "Enabling SSH on all hosts in your specified cluster:" -ForegroundColor Green

Get-Cluster $VMcluster | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}

}

###### Confirm Driver Versions ######

Write-Host "Confirm CISCO FNIC & ENIC Drivers" -ForegroundColor Green

$hosts = Get-Cluster $VMcluster | Get-VMHost

ForEach ($vihost in $hosts)

{

Write-Host -ForegroundColor Magenta "Gathering Driver versions on" $vihost

$esxcli = Get-VMHost $vihost | Get-EsxCli

$esxcli.software.vib.list() | Where { $_.Name -like "net-enic"} | Select @{N="VMHost";E={$esxcli.VMHost}}, Name, Version

$esxcli.software.vib.list() | Where { $_.Name -like "scsi-fnic"} | Select @{N="VMHost";E={$esxcli.VMHost}}, Name, Version

}

###### Disabling SSH ######

Write-Host "Ready to Disable SSH? " -ForegroundColor Yellow -NoNewline

Write-Host " Y/N:" -ForegroundColor Red -NoNewline

$SSHDisable = Read-Host

if ($SSHDisable -eq "y") {

Write-Host "Disabling SSH" -ForegroundColor Green

Get-Cluster $VMcluster | Get-VMHost | ForEach {Stop-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"}) -Confirm:$FALSE}

}
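
Note: the script above reads the vCenter password in clear text via Read-Host. As an optional alternative (a small sketch, not part of the original script), the connection could instead use a PowerShell credential prompt:

# Optional alternative: prompt for credentials securely instead of a plain-text Read-Host
$Cred = Get-Credential -Message "vCenter credentials"
Connect-VIServer -Server $VMHost -Credential $Cred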

 

Useful References:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1027206

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html

EMC UIM/P – Editing The Database

Thank you @CliffCahill for providing this trick!

Ensure you back up the UIM/P (Unified Infrastructure Manager for Provisioning) database before you begin. The following provides detailed steps on how to modify IP settings for ESXi host service offerings stored in the UIM database.
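
For example, a simple backup of the voyencedb database could be taken with pg_dump as the pgdba user before making any changes (a minimal sketch; the output path is just an example):

pg_dump voyencedb > /tmp/voyencedb_backup.sql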

Log in to the UIM CLI via PuTTY.
# To log in to the UIM voyencedb database:
su - pgdba
psql voyencedb uim

To pull back all ESXi O/S settings:
select * from ossettings;

To update the gateway for all service offerings / ESXi host:
update ossettings set gateway = '10.10.1.254';

To update the IP address of an individual ESXi host (the id is listed when you run the select command above):
update ossettings set ip_address = '10.10.1.10' where id = 2338;
update ossettings set ip_address = '10.10.1.11' where id = 2302;
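
To confirm the updates, query the same columns again (column names as used above):

select id, ip_address, gateway from ossettings;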

Vblock – Advanced Management Pod Second Generation (AMP-2)

Vblock Advanced Management Pod (AMP)

The AMP consists of the management components of a Vblock system. These management components are self-contained and provide out-of-band management for the entire Vblock infrastructure. The servers and storage (in the case of AMP-2HA) that make up the AMP host all of the Vblock management applications in a dedicated environment, separate from the production environment. This separation allows the Vblock to continue operating even in the event of an AMP failure scenario.

Management Software stack:
Code versions and exact details of the AMP software management components are determined by the RCM (Release Code Matrix) level used to configure the Vblock. Here is an example of some of the core AMP management components:
– VMware vCenter Server Suite
– Unisphere for VNX|VMAX (RecoverPoint and VPLEX Optional)
– XtremIO XMS
– PowerPath Licensing Appliance
– EMC ESRS Appliances
– Cisco DCNM
– VCE Vision Intelligent Operations
– Cisco Nexus 1000v

AMP-2 Hardware
There are 3 AMP-2 models associated with the Vblock 340, 540 & 740: AMP-2P | AMP-2RP | AMP-2HA

AMP-2P (“Physical”) – One dedicated Cisco UCS C220 server to run the management workload.
AMP-2RP (“Redundant Physical”) – Two Cisco UCS C220 servers supporting application and hardware redundancy.

The AMP-2HA includes 2 or 3 Cisco ‘C’ series servers for compute, along with a highly available VNXe3200 storage array where the VM storage resides, providing a redundant out-of-band management environment:
AMP-2HA BASE (“High Availability”) – Two Cisco UCS C220|240 servers and shared storage presented by an EMC VNXe3200 storage array.
AMP-2HA PERFORMANCE (“High Availability”) – Three Cisco UCS C220|240 servers and additional ‘FAST VP’ VNXe3200 storage.

Taking a look at the AMP-2HA Hardware stack:
AMP-2HA is the high-availability model of the second-generation Advanced Management Platform (AMP-2); it centralizes the management components of the Vblock System and delivers out-of-band management.

Vblock 340: AMP-2HA ‘Base’ Hardware stack
2x C220 M3 SFF Server (1RU Per server)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPU Per server (2.4Ghz Ivy Bridge, 6 Cores Per CPU, 7.2GT/s QPI)
128GB RAM Per C220 (8x 16GB DDR3)
2x Cisco Nexus 3064-T Switch for management networking
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 340: AMP-2HA ‘Performance’ Hardware stack
3x C220 M3 SFF Server (1RU)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPU Per server (2.4Ghz Ivy Bridge, 6 Cores Per CPU, 7.2GT/s QPI)
128GB RAM Per C220 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives, 6x 100GB FAST VP Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 540&740: AMP-2HA ‘Base’ Hardware stack
2x C240 M3 SFF Server (2RU Per Server)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPU Per server (2.4Ghz Ivy Bridge, 6 Cores Per CPU, 7.2GT/s QPI)
128GB RAM Per C240 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 540&740: AMP-2HA ‘Performance’ Hardware stack
3x C240 M3 SFF Server (2RU)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPU Per server (2.4Ghz Ivy Bridge, 6 Cores Per CPU, 7.2GT/s QPI)
128GB RAM Per C240 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives, 6x 100GB FAST VP Drives & 11x 600GB 10K SAS 2.5″ Drives)

VxBlock 340, 540 & 740 – The ‘AMP-2HA Performance’ model is the default for VMware NSX deployments, allowing each of the 3 NSX Controllers to be dedicated to an ESXi host for redundancy and scalability.

References:
http://www.vce.com/asset/documents/vblock-340-gen3-2-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-540-gen2-0-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-740-gen5-0-architecture-overview.pdf

http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C240M3_LFF_SpecSheet.pdf
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C220M3_SFF_SpecSheet.pdf

EMC VMAX3 – Adding Gatekeeper RDM Volumes To VMware MGMT VM

This post outlines how to create the VMAX Masking View for Gatekeeper RDM volumes, along with a script to automate adding the RDM disks to a VMware MGMT VM.

First, some notes on Gatekeeper volumes:
SE (Solutions Enabler CLI) and Unisphere (GUI) are the main tools used to manage a VMAX array. Gatekeeper volumes are required in order to carry the commands issued from both CLI & GUI and generate the low-level commands which are sent to the VMAX array to complete the required instruction, such as IG, SG, PG, MV or volume creation. It is good practice to use dedicated Gatekeeper devices and to avoid using any devices which contain user or application data that may be impacted by the I/O requirement of the instruction command. For example, if the device used as a Gatekeeper is also servicing application I/O, a scenario may arise where the VMAX is executing a command which takes some time; as a result of this latency the application may encounter poor performance. These are the reasons why EMC strongly recommends creating and mapping dedicated devices as Gatekeepers.

VMAX3: Creating the RDM Volumes and Associated Masking View

This is an example Masking View for a two-node ESXi cluster on which the VMAX management virtual machine shall reside:

1. Create a Port Group with the VMAX FA ports that the ESXi hosts have been zoned to:
symaccess -sid 123 -name MGMT_VM_PG -type port create
symaccess -sid 123 -name MGMT_VM_PG -type port -dirport 1d:24,2d:31,3D:28,4d:27 add

2. Create the Initiator Group containing the ESXi hosts WWNS:
symaccess -sid 123 -name MGMT_VM_IG -type initiator create -consistent_lun
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF8 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF9 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4C add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4D add

3. Create the Storage Group for the Gatekeeper RDM Volumes:
symsg -sid 123 create MGMT_VM_SG -slo optimized -srp SRP_1
Listing the SRP:
symcfg list -srp

4. Create the Gatekeeper volumes (10 Gatekeeper volumes in this example) and add to the MGMT_VM_SG:
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" preview -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" prepare -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" commit -nop

5. Create the Masking View:
symaccess -sid 123 create view -name MGMT_VM_MV -sg MGMT_VM_SG -pg MGMT_VM_PG -ig MGMT_VM_IG

View Configuration Details

Confirm that the HOSTS are logged into the correct VMAX ports:
symaccess -sid 123 list logins -wwn 21000024ff5CXXF8
symaccess -sid 123 list logins -wwn 21000024ff5CXXF9
symaccess -sid 123 list logins -wwn 21000024ff55XX4C
symaccess -sid 123 list logins -wwn 21000024ff55XX4D

Verify that the HBA is a member of the correct Initiator Group:
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF8
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF9
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4C
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4D

Storage Group details:
symaccess -sid 123 list -type storage -name MGMT_VM_SG -v
symaccess -sid 123 show MGMT_VM_SG -type storage

Port Group details:
symaccess -sid 123 list -type port -name MGMT_VM_PG -v
symaccess -sid 123 show MGMT_VM_PG -type port

Initiator Group details:
symaccess -sid 123 list -type initiator -name MGMT_VM_IG -v
symaccess -sid 123 show MGMT_VM_IG -type initiator

Masking View details:
symaccess -sid 123 list view -name MGMT_VM_MV
symaccess -sid 123 list view -name MGMT_VM_MV -detail

Gatekeeper details:
symdev -sid 123 list -cap 3 -captype cyl
symaccess -sid 123 list assignment -dev 049:052

If you need to remove the devs from the SG:
symaccess -sid 123 -name MGMT_VM_SG -type storage remove devs 049:052


####################################################################
Script to Automate Adding RDM Disks:

PowerCLI Script: Automate Adding RDM Disks
Here is a script which scans for the host LUN ID and then assigns the ‘ConsoleDeviceName’ to the corresponding
$LUN_# variable. This greatly simplifies the process of adding large quantities of RDM disks.

There are 4 parameters used in the script. The following 3 shall be prompted for:
“Your-ESXi-Hostname” $VMhostname
“Your-VM-Name” $VM
“Your-VMFS-DS-Name” $Datastore

Please edit the runtime name as required; the script default is:
“vmhba0:C0:T0:L#”

The following example script will automatically create 10 RDM disks on a virtual machine and place the pointer files
in a VMFS datastore based on the parameters provided. If you are unsure which runtime names are present on your host, see the lookup example below.
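
To list the runtime names and their backing devices up front (a quick check using the same Get-ScsiLun properties the script relies on; substitute your ESXi host name for $VMhostname):

Get-ScsiLun -VMHost $VMhostname -LunType Disk | Select-Object RuntimeName, ConsoleDeviceName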

#####################################################################

Write-Host "Please edit the runtime name in the script if required before proceeding, the default is:" -ForegroundColor Red
Write-Host "vmhba0:C0:T0:L#" -ForegroundColor Green

Write-Host "Please enter the ESXi/vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline
$VMHost = Read-Host

Write-Host "Please enter the ESXi/vCenter Username:" -ForegroundColor Yellow -NoNewline
$User = Read-Host

Write-Host "Please enter the ESXi/vCenter Password:" -ForegroundColor Yellow -NoNewline
$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

##########################################

$VMhostname = '*'

ForEach ($VMhostname in (Get-VMHost -name $VMhostname)| sort)
{

Write-Host $VMhostname

}

Write-Host "Please enter the ESXi Hostname where your target VM resides:" -ForegroundColor Yellow -NoNewline
$VMhostname = Read-Host

######################################

$Datastore = '*'

ForEach ($Datastore in (Get-Datastore -name $Datastore)| sort)
{

Write-Host $Datastore

}

Write-Host "From the list provided - Please enter the VMFS datastore where the RDM pointer files will reside:" -ForegroundColor Yellow -NoNewline
$Datastore = Read-Host

######################################

$VM = '*'

ForEach ($VM in (Get-VM -name $VM)| sort)
{
Write-Host $VM
}

Write-Host "From the list provided - Please enter the VM Name where the RDM volumes shall be created on:" -ForegroundColor Yellow -NoNewline
$VM = Read-Host

##############
Write-Host "ESXi Hostname you have chosen: " -ForegroundColor Yellow
Write-Host "$VMhostname" -ForegroundColor Green
Write-Host "VMFS Datastore you have chosen: " -ForegroundColor Yellow
Write-Host "$Datastore" -ForegroundColor Green
Write-Host "Virtual Machine you have chosen: " -ForegroundColor Yellow
Write-Host "$VM" -ForegroundColor Green

################
## ACLX T0:L0 ##
################
$LUN_0 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like "vmhba0:C0:T0:L0"} | Select ConsoleDeviceName,runtimename
$LUN_0 = $LUN_0 | Select ConsoleDeviceName
$LUN_0 = $LUN_0 -replace "@{ConsoleDeviceName=", ""
$LUN_0 = $LUN_0 -replace "}", ""
$LUN_0
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_0 -DataStore $Datastore

#####################
## Gatekeepers x10 ##
#####################
# Loop through host LUN IDs 1-10, resolve each ConsoleDeviceName and add it to the VM as an RDM disk
ForEach ($i in 1..10)
{
$LUN = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like "vmhba0:C0:T0:L$i"} | Select -ExpandProperty ConsoleDeviceName
$LUN
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN -DataStore $Datastore
}

##############
### VERIFY ###
##############
##Finding RDMs Using PowerCLI:##
# Detailed #
# Get-VM | Get-HardDisk -DiskType "RawPhysical" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName,CapacityGB | fl
# Brief #
# Get-ScsiLun -VMHost $VMhostname -LunType disk
# NAA #
# Get-ScsiLun -VMHost $VMhostname -LunType disk | select CanonicalName

### Get IP Address for ViClient to check GUI ###
# Get-VMHost -Name $VMhostname | Get-VMHostNetworkAdapter

EMC VNX – SMI-S Configuration & Discovery

The following are some configuration notes for configuring SMI-S to allow communication with the VNX Storage Processors; SMI-S can then be leveraged by, for example, VCE Vision or ViPR to configure and report on the VNX array. Before proceeding, ensure you have both VNX Storage Processor A & B IP addresses to hand; the SMI-S host will use these IPs to allow for out-of-band communication over IP with the VNX. The EMC SMI-S Provider is included as part of the ‘Solutions Enabler with SMIS’ install package, which can be downloaded from support.emc.com.

Begin by installing the SMI-S Provider, ensuring you select the ‘Array provider’ (Windows does not require the Host provider) and choose the option for SMISPROVIDER_COMPONENT:

From the Windows services.msc console, check that both the ‘ECOM’ and ‘storsrvd’ services are set to Automatic and in a running state:
Check that EMC storsrvd daemon is installed and running from a Windows cmd prompt using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list

Or using the SC (service control) command you can query/start/config the ECOM and storsrvd services:
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start=auto
sc config storsrvd start=auto


Run netstat -a and check that the host is listening on ports 5988 and 5989:
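
For example, filtering the output (findstr shown here as one way to do it):

netstat -an | findstr "5988 5989"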

UPDATE ENVIRONMENT VARIABLES:
Add the SYMCLI and ECOM installation directory paths (DRIVE:\Program Files\EMC\SYMCLI\bin and DRIVE:\Program Files\EMC\ECIM\ECOM\BIN) to the list of system paths:
Or use the windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:

setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"

If you experience issues, such as the ECOM service failing to start, it is worth rebooting the management server at this stage.

ECOM SERVER: ADD A NEW SMI-S Provider User
Provided all the validations are successful, proceed to log in to the ECOM server and create the user you would like to use for (Vision/ViPR) connectivity:
Open https://localhost:5989/ecomconfig
Login with default credentials of: admin / #1Password

Select the option to add a new user and create the Vision user with administrator role and scope local:

Windows Firewall
If the Windows firewall is enabled, rules will need to be created to allow ECOM ports TCP 5988 & 5989 and SLP port UDP 427. For example, using the Windows command line netsh to create rules for SLP and ECOM:

netsh advfirewall firewall add rule name="SLP" dir=in protocol=UDP localport=427 action=allow

netsh advfirewall firewall add rule name="ECOM" dir=in protocol=TCP localport=5988-5989 action=allow

netsh advfirewall firewall show rule name=SLP
netsh advfirewall firewall show rule name=ECOM


Discover and Add the VNX using TestSMIProvider:
Confirm communication to the VNX from the SMI-S host by running the naviseccli getagent command on both VNX Storage Processors from the Element Manager cmd prompt:
naviseccli -h SPA-IP getagent
choose option 2 if prompted
naviseccli -h SPB-IP getagent
choose option 2 if prompted

Or using credentials:
naviseccli -h SPIP -user sysadmin -password sysadmin -scope 0 getagent

Open a Windows cmd prompt session as an admin user; if the environment variable has not been set then you will need to ‘cd’ to D:\Program Files\EMC\SYMCLI\bin:
symcfg auth add -host SPA_IP -username sysuser -password syspw
symcfg auth add -host SPB_IP -username sysuser -password syspw

Create a text file, for example called SPIP.txt, that contains the IP addresses for SP A & B. Then run the following commands to discover and list the VNX:
symcfg discover -clariion -file D:\spip.txt
symcfg list -clariion

Again from a Windows cmd prompt session as an admin user; if the environment variable has not been set then you will need to ‘cd’ to c:\Program Files\EMC\ECIM\ECOM\BIN. Type TestSMIProvider.exe at the prompt, and from there choose all defaults except for the Vision user and password created through the ECOM console:

At the prompt type ‘addsys’ to confirm connectivity between the VNX Array and the SMI-S Host:


(localhost:5988) ? addsys
Add System {y|n} [n]: y

ArrayType (1=Clar, 2=Symm) [1]:
One or more IP address or Hostname or Array ID

Elements for Addresses
IP address or hostname or array id 0 (blank to quit): SPA_IP
IP address or hostname or array id 1 (blank to quit): SPB_IP
IP address or hostname or array id 2 (blank to quit):
Address types corresponding to addresses specified above.
(1=URL, 2=IP/Nodename, 3=Array ID)
Address Type (0) [default=2]:
Address Type (1) [default=2]:
User [null]: sysuser
Password [null]: syspw
++++ EMCAddSystem ++++
OUTPUT : 0
Legend:0=Success, 1=Not Supported, 2=Unknown, 3=Timeout, 4=Failed
5=Invalid Parameter
4096=Job Queued, 4097=Size Not Supported
Note: Not all above values apply to all methods – see MOF for the method.

System : //SPA_IP/root/emc:Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM00100000123"

In 12.468753 Seconds

Please press enter key to continue…

At the prompt type ‘dv’ to confirm connectivity between the VNX and SMI-S Host:

For any troubleshooting please refer to: ‘C:\Program Files\EMC\ECIM\ECOM\log’

Note: When configuring VCE Vision, please ensure you use the ‘SMI-S Host’ IP address for the VNX Block entries in the Vblock.xml configuration file; the NAS portion of the VNX uses the Control Station IP addresses for communication, which have ECOM configured by default.

How to remove VNX systems using SMI-S “remsys” command:

  1. Log into the SMI-S Provider server
  2. Open a command prompt (cmd).
  3. Change (cd) to C:\Program Files\EMC\ECIM\ECOM\bin
  4. Run TestSmiProvider.exe
  5. Enter ein
  6. Enter Clar_StorageSystem
  7. Copy the line that specifies the VNX system you want to remove:
    Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
  8. Enter remsys
  9. Enter Y
  10. Paste the line specifying the VNX system you want to remove that you copied in the preceding step.
  11. Enter Y
  12. Run a dv command to confirm the VNX system has been removed.

Built with EMC SMI-S Provider: V4.6.2
Namespace: root/emc
repeat count: 1
(localhost:5988) ? remsys
Remove System {y|n} [n]: y
System's ObjectPath[null]: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
About to delete system Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
Are you sure {y|n} [n]: y

 

EMC VMAX – SMI-S Configuration & Discovery

The following are some configuration notes for configuring the ‘VMAX Management HOST’ for communication via SMI-S, for purposes such as ‘ViPR’ or ‘VCE Vision’. Before proceeding, ensure you have presented and configured the ‘VMAX Management HOST’ with gatekeeper volumes from the VMAX to allow for in-band communication over Fibre Channel. The EMC SMI-S Provider is included as part of the ‘Solutions Enabler with SMIS’ install package, which can be downloaded from support.emc.com.

Begin by installing the SMI-S Provider, ensuring you select the ‘Array provider’ (Windows does not require the Host provider) and choose the option for SMISPROVIDER_COMPONENT:

From the Windows services.msc console, check that both the ‘ECOM’ and ‘storsrvd’ services are set to Automatic and in a running state:
Check that EMC storsrvd daemon is installed and running from a Windows cmd prompt using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list

Or using the SC (service control) command you can query/start/config the ECOM and storsrvd services:
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start=auto
sc config storsrvd start=auto


Run netstat -a and check that the host is listening on ports 5988 and 5989:

UPDATE ENVIRONMENT VARIABLES:
Add the SYMCLI and ECOM installation directory paths (DRIVE:\Program Files\EMC\SYMCLI\bin and DRIVE:\Program Files\EMC\ECIM\ECOM\BIN) to the list of system paths:
Or use the windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:

setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"

Perform a symcfg discover followed by a symcfg list to ensure communication is present between the VMAX and the VMAX management server.
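
For example, from the VMAX management host:

symcfg discover
symcfg list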

If you experience issues, such as the ECOM service failing to start, it is worth rebooting the management server at this stage.

ECOM SERVER: Adding A NEW SMI-S Provider User 
Provided all the validations are successful, proceed to log in to the ECOM server and create the user you would like to use for connectivity:
Open https://localhost:5989/ecomconfig
Login with default credentials of: admin / #1Password

Select the option to add a new user and create the Vision user with administrator role and scope local:

Windows Firewall
If the Windows firewall is enabled, rules will need to be created to allow ECOM ports TCP 5988 & 5989 and SLP port UDP 427. For example, using the Windows command line netsh to create rules for SLP and ECOM:

netsh advfirewall firewall add rule name="SLP" dir=in protocol=UDP localport=427 action=allow

netsh advfirewall firewall add rule name="ECOM" dir=in protocol=TCP localport=5988-5989 action=allow

netsh advfirewall firewall show rule name=SLP
netsh advfirewall firewall show rule name=ECOM


Confirm VMAX Connectivity VIA SMI-S (TestSMIProvider)
Open a Windows cmd prompt session as an admin user; if the environment variable has not been set then you will need to ‘cd’ to c:\Program Files\EMC\ECIM\ECOM\BIN. Type TestSMIProvider.exe at the prompt, and from there choose all defaults except for the Vision user and password created through the ECOM console:


At the prompt, type the ‘dv’ (display version info) command to confirm connectivity between the VMAX and SMI-S:

For any troubleshooting please refer to: ‘C:\Program Files\EMC\ECIM\ECOM\log’

Note: When configuring Vision, please ensure you use the ‘VMAX Management HOST’ IP address for all VMAX entries in the Vblock.xml configuration file.