EMC UIM/P – Editing The Database

Thank you @CliffCahill for providing this trick!

Ensure you back up the UIM/P (Unified Infrastructure Manager for Provisioning) database before you begin. The following provides detailed steps on how to modify IP settings for ESXi host service offerings stored in the UIM/P database.

Log in to the UIM CLI via PuTTY.
# To log in to the UIM voyencedb database:
su - pgdba
psql voyencedb uim
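
Once connected, the tables can be listed with the standard psql meta-command to confirm that the ossettings table is present:
\dt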

To pull back all ESXi OS settings:
select * from ossettings;

To update the gateway for all service offerings / ESXi hosts:
update ossettings set gateway = '10.10.1.254';

To update the IP address of an individual ESXi host (the id is listed in the output of the select command above):
update ossettings set ip_address = '10.10.1.10' where id = 2338;
update ossettings set ip_address = '10.10.1.11' where id = 2302;
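
To verify an individual change before exiting psql (using the same columns as the update statements above):
select id, ip_address, gateway from ossettings where id = 2338;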

Vblock – Advanced Management Pod Second Generation (AMP-2)

Vblock Advanced Management Pod (AMP)

The AMP consists of the management components of a Vblock system. These management components are self-contained and provide out-of-band management for the entire Vblock infrastructure. The servers and storage (in the case of AMP-2HA) that make up the AMP host all of the Vblock management applications in a dedicated environment, separate from the production environment. This separation allows the Vblock to continue operating even in the event of an AMP failure.

Management Software stack:
Code versions and exact details of the AMP software management components are determined by the RCM (Release Certification Matrix) level used to configure the Vblock. Here is an example of some of the core AMP management components:
– VMware vCenter Server Suite
– Unisphere for VNX|VMAX (RecoverPoint and VPLEX Optional)
– XtremIO XMS
– PowerPath Licensing Appliance
– EMC ESRS Appliances
– Cisco DCNM
– VCE Vision Intelligent Operations
– Cisco Nexus 1000v

AMP-2 Hardware
There are three AMP-2 models associated with the Vblock 340, 540 and 740: AMP-2P | AMP-2RP | AMP-2HA

AMP-2P ("Physical") – One dedicated Cisco UCS C220 server to run the management workload.
AMP-2RP ("Redundant Physical") – Two Cisco UCS C220 servers supporting application and hardware redundancy.

The AMP-2HA includes two or three Cisco C-Series servers for compute, along with a highly available VNXe3200 storage array where the VM storage resides, providing a redundant out-of-band management environment:
AMP-2HA BASE ("High Availability") – Two Cisco UCS C220|C240 servers and shared storage presented by an EMC VNXe3200 storage array.
AMP-2HA PERFORMANCE ("High Availability") – Three Cisco UCS C220|C240 servers and additional FAST VP VNXe3200 storage.

Taking a look at the AMP-2HA hardware stack:
The second generation of the Advanced Management Platform, AMP-2HA, is the high-availability model of AMP-2; it centralizes the management components of the Vblock System and delivers out-of-band management.

Vblock 340: AMP-2HA ‘Base’ Hardware stack
2x C220 M3 SFF Server (1RU Per server)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPUs per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM Per C220 (8x 16GB DDR3)
2x Cisco Nexus 3064-T Switch for management networking
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 340: AMP-2HA ‘Performance’ Hardware stack
3x C220 M3 SFF Server (1RU)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPUs per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM Per C220 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives, 6x 100GB FAST VP Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 540&740: AMP-2HA ‘Base’ Hardware stack
2x C240 M3 SFF Server (2RU Per Server)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPUs per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM Per C240 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 540&740: AMP-2HA ‘Performance’ Hardware stack
3x C240 M3 SFF Server (2RU)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPUs per server (2.4GHz Ivy Bridge, 6 cores per CPU)
128GB RAM Per C240 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives, 6x 100GB FAST VP Drives & 11x 600GB 10K SAS 2.5″ Drives)

VxBlock 340, 540 & 740 – The AMP-2HA Performance model is the default for VMware NSX deployments, allowing each of the three NSX controllers to be dedicated to an ESXi host for redundancy and scalability.

References:
http://www.vce.com/asset/documents/vblock-340-gen3-2-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-540-gen2-0-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-740-gen5-0-architecture-overview.pdf

http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C240M3_LFF_SpecSheet.pdf
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C220M3_SFF_SpecSheet.pdf

EMC VMAX3 – Adding Gatekeeper RDM Volumes To VMware MGMT VM

This post outlines how to create the VMAX masking view for Gatekeeper RDM volumes and provides a script to automate adding the RDM disks to a VMware management (MGMT) VM.

First, some notes on Gatekeeper volumes:
Solutions Enabler (CLI) and Unisphere (GUI) are the main tools used to manage a VMAX array. Gatekeeper volumes are required to carry the commands issued from both the CLI and the GUI and to generate the low-level commands sent to the VMAX array to complete the requested instruction, such as IG, SG, PG, MV or volume creation. It is good practice to use dedicated Gatekeeper devices and to avoid using devices which contain user or application data that may be impacted by the I/O requirements of these instruction commands. For example, if a device used as a Gatekeeper is also servicing application I/O and the VMAX is executing a command that takes some time, the resulting latency may cause the application to encounter poor performance. These are the reasons why EMC strongly recommends creating and mapping dedicated devices as Gatekeepers.

VMAX3: Creating the RDM Volumes and Associated Masking View

This is an example masking view for a two-node ESXi cluster on which the VMAX management virtual machine will reside:

1. Create a Port Group with the VMAX FA ports that the ESXi hosts have been zoned to:
symaccess -sid 123 -name MGMT_VM_PG -type port create
symaccess -sid 123 -name MGMT_VM_PG -type port -dirport 1d:24,2d:31,3D:28,4d:27 add

2. Create the Initiator Group containing the ESXi hosts WWNS:
symaccess -sid 123 -name MGMT_VM_IG -type initiator create -consistent_lun
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF8 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF9 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4C add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4D add

3. Create the Storage Group for the Gatekeeper RDM Volumes:
symsg -sid 123 create MGMT_VM_SG -slo optimized -srp SRP_1
Listing the SRP:
symcfg list -srp

4. Create the Gatekeeper volumes (10 Gatekeeper volumes in this example) and add to the MGMT_VM_SG:
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" preview -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" prepare -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" commit -nop
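
To confirm the 10 new devices were created and added to the storage group (device IDs will differ per array), the storage group contents can be listed:
symsg -sid 123 show MGMT_VM_SG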

5. Create the Masking View:
symaccess -sid 123 create view -name MGMT_VM_MV -sg MGMT_VM_SG -pg MGMT_VM_PG -ig MGMT_VM_IG

View Configuration Details

Confirm that the HOSTS are logged into the correct VMAX ports:
symaccess -sid 123 list logins -wwn 21000024ff5CXXF8
symaccess -sid 123 list logins -wwn 21000024ff5CXXF9
symaccess -sid 123 list logins -wwn 21000024ff55XX4C
symaccess -sid 123 list logins -wwn 21000024ff55XX4D

Verify that the HBA is a member of the correct Initiator Group:
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF8
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF9
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4C
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4D

Storage Group details:
symaccess -sid 123 list -type storage -name MGMT_VM_SG -v
symaccess -sid 123 show MGMT_VM_SG -type storage

Port Group details:
symaccess -sid 123 list -type port -name MGMT_VM_PG -v
symaccess -sid 123 show MGMT_VM_PG -type port

Initiator Group details:
symaccess -sid 123 list -type initiator -name MGMT_VM_IG -v
symaccess -sid 123 show MGMT_VM_IG -type initiator

Masking View details:
symaccess -sid 123 list view -name MGMT_VM_MV
symaccess -sid 123 list view -name MGMT_VM_MV -detail

Gatekeeper details:
symdev -sid 123 list -cap 3 -captype cyl
symaccess -sid 123 list assignment -dev 049:052

If you need to remove the devs from the SG:
symaccess -sid 123 -name MGMT_VM_SG -type storage remove devs 049:052


####################################################################
Script to Automate Adding RDM Disks:

PowerCLI Script: Automate Adding RDM Disks
Here is a script which scans for the host LUN ID and then assigns the 'ConsoleDeviceName' to the corresponding $LUN_# variable.
This greatly simplifies the process of adding large quantities of RDM disks.

There are four parameters used in the script. The following three are prompted for:
"Your-ESXi-Hostname" $VMhostname
"Your-VM-Name" $VM
"Your-VMFS-DS-Name" $Datastore

Please edit the runtime name as required; the script default is:
"vmhba0:C0:T0:L#"

The following example script will automatically create 10 RDM disks on a virtual machine and place the pointer files in a VMFS datastore based on the parameters provided.
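
If the gatekeepers were presented to the ESXi host after its last scan, it can help to rescan the HBAs and list the LUNs first so the runtime names can be confirmed. A quick PowerCLI check (substitute your own host name; this is not part of the script below):

Get-VMHost -Name "Your-ESXi-Hostname" | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null
Get-ScsiLun -VMHost "Your-ESXi-Hostname" -LunType disk | Select-Object RuntimeName, ConsoleDeviceName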

#####################################################################

Write-Host "Please edit the runtime name in the script if required before proceeding, the default is:" -ForegroundColor Red
Write-Host "vmhba0:C0:T0:L#" -ForegroundColor Green

Write-Host "Please enter the ESXi/vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline
$VMHost = Read-Host

Write-Host "Please enter the ESXi/vCenter Username:" -ForegroundColor Yellow -NoNewline
$User = Read-Host

Write-Host "Please enter the ESXi/vCenter Password:" -ForegroundColor Yellow -NoNewline
$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

##########################################

$VMhostname = '*'

ForEach ($VMhostname in (Get-VMHost -name $VMhostname)| sort)
{

Write-Host $VMhostname

}

Write-Host "Please enter the ESXi Hostname where your target VM resides:" -ForegroundColor Yellow -NoNewline
$VMhostname = Read-Host

######################################

$Datastore = '*'

ForEach ($Datastore in (Get-Datastore -name $Datastore)| sort)
{

Write-Host $Datastore

}

Write-Host "From the list provided - Please enter the VMFS datastore where the RDM pointer files will reside:" -ForegroundColor Yellow -NoNewline
$Datastore = Read-Host

######################################

$VM = '*'

ForEach ($VM in (Get-VM -name $VM)| sort)
{
Write-Host $VM
}

Write-Host "From the list provided - Please enter the VM name where the RDM volumes shall be created:" -ForegroundColor Yellow -NoNewline
$VM = Read-Host

##############
Write-Host "ESXi Hostname you have chosen: " -ForegroundColor Yellow
Write-Host "$VMhostname" -ForegroundColor Green
Write-Host "VMFS datastore you have chosen: " -ForegroundColor Yellow
Write-Host "$Datastore" -ForegroundColor Green
Write-Host "Virtual Machine you have chosen: " -ForegroundColor Yellow
Write-Host "$VM" -ForegroundColor Green

################
## ACLX T0:L0 ##
################
$LUN_0 = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like "vmhba0:C0:T0:L0"} | Select ConsoleDeviceName,runtimename
$LUN_0 = $LUN_0 | Select ConsoleDeviceName
$LUN_0 = $LUN_0 -replace "@{ConsoleDeviceName=", ""
$LUN_0 = $LUN_0 -replace "}", ""
$LUN_0
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_0 -DataStore $Datastore

#####################
## Gatekeepers x10 ##
#####################
# Loop through host LUN IDs 1-10, resolve each ConsoleDeviceName and add the device to the VM
# as a physical-mode RDM (same logic as the ACLX T0:L0 block above, without the repetition).
ForEach ($LunID in 1..10)
{
$GK_LUN = Get-SCSILun -VMhost $VMhostname -LunType Disk | Where-Object {$_.runtimename -like "vmhba0:C0:T0:L$LunID"} | Select ConsoleDeviceName,runtimename
$GK_LUN = $GK_LUN | Select ConsoleDeviceName
$GK_LUN = $GK_LUN -replace "@{ConsoleDeviceName=", ""
$GK_LUN = $GK_LUN -replace "}", ""
$GK_LUN
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $GK_LUN -DataStore $Datastore
}

##############
### VERIFY ###
##############
##Finding RDMs Using PowerCLI:##
# Detailed #
# Get-VM | Get-HardDisk -DiskType "RawPhysical" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName,CapacityGB | fl
# Brief #
# Get-ScsiLun -VMHost $VMhostname -LunType disk
# NAA #
# Get-ScsiLun -VMHost $VMhostname -LunType disk | select CanonicalName

### Get IP Address for ViClient to check GUI ###
# Get-VMHost -Name $VMhostname | Get-VMHostNetworkAdapter
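
### Optional cleanup: close the PowerCLI session once finished (not part of the original script) ###
# Disconnect-VIServer -Server $VMHost -Confirm:$false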

EMC VNX – SMI-S Configuration & Discovery

The following are some configuration notes for configuring SMI-S to allow communication with the VNX Storage Processors; SMI-S can then be leveraged by, for example, VCE Vision or ViPR to configure and report on the VNX array. Before proceeding, ensure you have both VNX Storage Processor A and B IP addresses to hand; the SMI-S host will use these IPs for out-of-band communication with the VNX. The EMC SMI-S Provider is included as part of the 'Solutions Enabler with SMIS' install package, which can be downloaded from support.emc.com.

Begin by installing the SMI-S Provider, ensuring you select the 'Array provider' (Windows does not require the Host provider) and choose the option for SMISPROVIDER_COMPONENT.

From the Windows services.msc console, check that both the 'ECOM' and 'storsrvd' services are set to automatic and in a running state.
Check that the EMC storsrvd daemon is installed and running from a Windows cmd prompt using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list

Alternatively, using the sc (service control) command you can query, start and configure the ECOM and storsrvd services:
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start=auto
sc config storsrvd start=auto
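
If you prefer PowerShell to sc, a rough equivalent check and start (assuming the same service names as above) is:
Get-Service -Name "ECOM.exe","storsrvd" | Select-Object Name, Status
Start-Service -Name "storsrvd"
Set-Service -Name "storsrvd" -StartupType Automatic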


Run netstat -a and check that the host is listening on ports 5988 and 5989.

UPDATE ENVIRONMENT VARIABLES:
Add the SYMCLI and ECOM installation directories (DRIVE:\Program Files\EMC\SYMCLI\bin and DRIVE:\Program Files\EMC\ECIM\ECOM\bin) to the list of system paths via the GUI.
Or use the Windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:

setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"
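
To confirm the change, open a new cmd session (setx only affects sessions started after the update) and check the variable:
echo %PATH%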

If you experience issues such as the ECOM service failing to start, it is worth rebooting the management server at this stage.

ECOM Server: Add a New SMI-S Provider User
Provided all the validations are successful, proceed to log in to the ECOM server and create the user you would like to use for (Vision/ViPR) connectivity:
Open https://localhost:5989/ecomconfig
Log in with the default credentials: admin / #1Password

Select the option to add a new user and create the Vision user with administrator role and scope local:

Windows Firewall
If the Windows firewall is enabled then rules will need to be created to allow the ECOM ports TCP 5988 and 5989 and the SLP port UDP 427. For example, the Windows command-line tool netsh can be used to create rules for SLP and ECOM:

netsh advfirewall firewall add rule name="SLP" dir=in protocol=UDP localport=427 action=allow

netsh advfirewall firewall add rule name="ECOM" dir=in protocol=TCP localport=5988-5989 action=allow

netsh advfirewall firewall show rule name=SLP
netsh advfirewall firewall show rule name=ECOM


Discover and Add the VNX using TestSMIProvider:
Confirm communication to the VNX from the SMI-S host by running the naviseccli getagent command against both VNX Storage Processors from a cmd prompt:
naviseccli -h SPA-IP getagent
choose option 2 if prompted
naviseccli -h SPB-IP getagent
choose option 2 if prompted

Or using credentials:
naviseccli -h SPIP -user sysadmin -password sysadmin -scope 0 getagent

Open a Windows cmd prompt session as an admin user; if the environment variable has not been set, you will need to 'cd' to 'D:\Program Files\EMC\SYMCLI\bin':
symcfg auth add -host SPA_IP -username sysuser -password syspw
symcfg auth add -host SPB_IP -username sysuser -password syspw

Create a text file, for example called SPIP.txt that contains the IP addresses for SP A&B. Then run the following commands to discover and list the VNX:
symcfg discover -clariion -file D:\spip.txt
symcfg list -clariion

Again from a Windows cmd prompt session as an admin user, if the environment variable has not been set you will need to 'cd' to 'C:\Program Files\EMC\ECIM\ECOM\BIN'. Type TestSMIProvider.exe at the prompt and from here choose all defaults, except for the Vision user and password created through the ECOM console:

At the prompt type ‘addsys’ to confirm connectivity between the VNX Array and the SMI-S Host:


(localhost:5988) ? addsys
Add System {y|n} [n]: y

ArrayType (1=Clar, 2=Symm) [1]:
One or more IP address or Hostname or Array ID

Elements for Addresses
IP address or hostname or array id 0 (blank to quit): SPA_IP
IP address or hostname or array id 1 (blank to quit): SPB_IP
IP address or hostname or array id 2 (blank to quit):
Address types corresponding to addresses specified above.
(1=URL, 2=IP/Nodename, 3=Array ID)
Address Type (0) [default=2]:
Address Type (1) [default=2]:
User [null]: sysuser
Password [null]: syspw
++++ EMCAddSystem ++++
OUTPUT : 0
Legend:0=Success, 1=Not Supported, 2=Unknown, 3=Timeout, 4=Failed
5=Invalid Parameter
4096=Job Queued, 4097=Size Not Supported
Note: Not all above values apply to all methods – see MOF for the method.

System : //SPA_IP/root/emc:Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM00100000123"

In 12.468753 Seconds

Please press enter key to continue…

At the prompt type 'dv' to confirm connectivity between the VNX and the SMI-S host.

For any troubleshooting please refer to: ‘C:\Program Files\EMC\ECIM\ECOM\log’

Note: When configuring VCE Vision, please ensure you use the 'SMI-S Host' IP address for the VNX Block entries in the Vblock.xml configuration file; the NAS portion of the VNX uses the Control Station IP addresses for communication, which have ECOM configured by default.

How to remove VNX systems using the SMI-S 'remsys' command:

  1. Log into the SMI-S Provider server
  2. Open a command prompt (cmd).
  3. Change (cd) to C:\Program Files\EMC\ECIM\ECOM\bin
  4. Run TestSmiProvider.exe
  5. Enter ein
  6. Enter clar_storagesystem
  7. Copy the line that specifies the VNX system you want to remove:
    Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
  8. Enter remsys
  9. Enter Y
  10. Paste the line specifying the VNX system you want to remove that you copied in the preceding step.
  11. Enter Y
  12. Run a dv command to confirm the VNX system has been removed.

Built with EMC SMI-S Provider: V4.6.2
Namespace: root/emc
repeat count: 1
(localhost:5988) ? remsys
Remove System {y|n} [n]: y
System's ObjectPath[null]: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
About to delete system Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
Are you sure {y|n} [n]: y

 

EMC VMAX – SMI-S Configuration & Discovery

The following are some configuration notes for configuring the 'VMAX Management Host' for communication via SMI-S, for purposes such as ViPR or VCE Vision. Before proceeding, ensure you have presented and configured the 'VMAX Management Host' with Gatekeeper volumes from the VMAX to allow for in-band communication over Fibre Channel. The EMC SMI-S Provider is included as part of the 'Solutions Enabler with SMIS' install package, which can be downloaded from support.emc.com.

Begin by installing the SMI-S Provider, ensuring you select the 'Array provider' (Windows does not require the Host provider) and choose the option for SMISPROVIDER_COMPONENT.

From the Windows services.msc console, check that both the 'ECOM' and 'storsrvd' services are set to automatic and in a running state.
Check that the EMC storsrvd daemon is installed and running from a Windows cmd prompt using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list

Alternatively, using the sc (service control) command you can query, start and configure the ECOM and storsrvd services:
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start=auto
sc config storsrvd start=auto


Run netstat -a and check that the host is listening on ports 5988 and 5989.

UPDATE ENVIRONMENT VARIABLES:
Add the SYMCLI and ECOM installation directories (DRIVE:\Program Files\EMC\SYMCLI\bin and DRIVE:\Program Files\EMC\ECIM\ECOM\bin) to the list of system paths via the GUI.
Or use the Windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:

setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"

Perform a symcfg discover followed by a symcfg list to ensure communication is present between the VMAX and the VMAX management server.
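
For example, a minimal check from the management host (the gatekeepers must be visible to the host first; output will vary):
symcfg discover
symcfg list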

If you experience issues such as the ECOM service failing to start, it is worth rebooting the management server at this stage.

ECOM Server: Adding a New SMI-S Provider User
Provided all the validations are successful, proceed to log in to the ECOM server and create the user you would like to use for connectivity:
Open https://localhost:5989/ecomconfig
Log in with the default credentials: admin / #1Password

Select the option to add a new user and create the Vision user with administrator role and scope local:

Windows Firewall
If the Windows firewall is enabled then rules will need to be created to allow the ECOM ports TCP 5988 and 5989 and the SLP port UDP 427. For example, the Windows command-line tool netsh can be used to create rules for SLP and ECOM:

netsh advfirewall firewall add rule name="SLP" dir=in protocol=UDP localport=427 action=allow

netsh advfirewall firewall add rule name="ECOM" dir=in protocol=TCP localport=5988-5989 action=allow

netsh advfirewall firewall show rule name=SLP
netsh advfirewall firewall show rule name=ECOM


Confirm VMAX Connectivity via SMI-S (TestSMIProvider)
Open a Windows cmd prompt session as an admin user; if the environment variable has not been set you will need to 'cd' to 'C:\Program Files\EMC\ECIM\ECOM\BIN'. Type TestSMIProvider.exe at the prompt and from here choose all defaults, except for the Vision user and password created through the ECOM console.

At the prompt type the 'dv' (display version info) command to confirm connectivity between the VMAX and SMI-S.

For any troubleshooting please refer to: ‘C:\Program Files\EMC\ECIM\ECOM\log’

Note: When configuring Vision, please ensure you use the 'VMAX Management Host' IP address for all VMAX entries in the Vblock.xml configuration file.

The Vblock vCake

Created by Cliff Cahill @CliffCahill

In conjunction with the announcement of the 'Vblock 740', @VCE's latest addition to the Vblock range, Cliff kindly put his creative baking skills to good effect by creating the 'Vblock 740 vCake'. The Vblock System 740 is VCE's flagship converged infrastructure: an IT infrastructure built on industry-leading technology, combining network and compute components from Cisco, storage in the form of the EMC VMAX³ and virtualization from VMware – all market leaders in their respective technology sectors.

The steps below outline the process that was followed in order to deliver the Vblock vCake:

1. Logical Configuration Summary(LCS) & Bill Of Materials(BOM)
2. Physical Build Commences
3. Configuring the Advanced Management Platform (AMP)
4. Configuring the Cisco Network Components (MDS, NEXUS) & Compute Components (Cisco UCS)
5. Configuring the EMC VMAX³ & VMware Virtualization Infrastructure (ESXi, vCenter, VUM)
6. Logical Configuration QA
7. Vblock Sent to Distribution
8. Deployment and Implementation Begins
9. Deployment and Implementation Knowledge Transfer
10. Vblock Enters Production

1. Logical Configuration Summary and Bill of Materials
The vArchitect has taken the customer's requirements and expertly sized the Vblock to deliver optimum performance for the customer's mission-critical applications. The bill of materials has been completed and the Technical Program Team has qualified and validated the solution design. The Logical Configuration Survey has been signed off with the customer and the Vblock has been added to the production schedule. All components are laid out in preparation for integration.

2. Physical Build Commences
Next, our manufacturing team places each part of the Vblock platform into its correct location within the cabinet.

The Cisco UCS, fabric interconnects, network switches and EMC VMAX³ are now installed:

Then the power outlet units (POUs) and all network and power cabling are installed and connected to the appropriate ports based on a Master Port Map.

3. Configuring the Advanced Management Platform (AMP)
Next the Vblock platform goes through the Logical integration phase where it is expertly configured by the LB Team to meet specific customer requirements, beginning with the AMP configuration which runs the software that manages the platform (Cisco C220 servers and EMC VNXe storage array):

4. Configuring the Cisco Network Components (MDS, NEXUS) & Compute Components (Cisco UCS)
Code upgrades are completed based on the fully tested and validated RCM, and initial and advanced scripting is executed.

5. Configuring the EMC VMAX³ & VMware Virtualization infrastructure (ESXi, vCenter, VUM)
The EMC VMAX³ Bin File has been loaded and the VMware ESXi clusters are provisioned via UIM.

6. Logical Configuration QA
Logical configuration is complete, QA is done and VCE Vision verifies compliance with the RCM. The hand-off email is sent.

7. Vblock Sent to Distribution
The Vblock is down in distribution and ready for dispatch. The customer-agreed delivery date is achieved. The Vblock is covered to ensure immaculate delivery to its proud new owner.

8. Deployment and Implementation Begins
Vblock arrives at the customer site as a piece of truly converged infrastructure and the Deployment and Implementation phase commences:

9. Deployment and Implementation Knowledge Transfer
The Vblock has been integrated into the customer environment and Deployment & Implementation knowledge transfer begins:

10. Vblock Enters Production
The Vblock is now in operation – the end user has already taken a few compute slices.

Vblock Infrastructure Platform is Transforming IT

BOM for Biscuit Cake
400g digestives
2 Crunchies
450g milk choc
90g butter
5tbsp double cream
1tbsp golden syrup

BOM for Ganache:
180ml double cream
28g butter
227g dark choc

Logical Build Instructions for vCake
Biscuit Cake
Melt milk choc with butter over pot of simmering water.
Break up biscuits and crush crunchies in a large bowl.
Once milk choc melted, cool slightly and then add cream and golden syrup.
Mix using a whisk until completely mixed through.
Add to digestives and make sure every biscuit is covered.
Put it into a prepared tin lined with cling film and refrigerate.

Ganache
Break the dark chocolate into single squares and add to a glass bowl.
Slowly bring the cream and butter to the boil and pour over the dark chocolate.
Allow to sit for a couple of minutes and then mix thoroughly until the chocolate has completely melted.
Allow to set until the desired consistency has been reached.

EMC XtremIO – Welcome to Vblock

Vblock Specialized System for Extreme Applications

This Vblock, as the name suggests, is VCE's solution targeted at extreme applications such as VDI and high-performance database applications. Using the XtremIO all-flash enterprise storage array, this Vblock delivers outstanding performance with consistent sub-millisecond response times and excellent real-time inline deduplication, providing the ability to remove redundant information and thus lowering the amount of capacity required. If your workload requires low latency and a high number of IOPS and is random in nature, then this is an optimal solution for your application.

There are two flavors of the Specialized Vblock: a single-cabinet system and a dual-cabinet system. The single-cab configuration contains a single XtremIO X-Brick while the dual-cab offering presents a two X-Brick configuration.

The Cisco C220 servers and the VNXe are used for the application management workloads, while the UCS servers and the XtremIO array drive the production workloads. VMware vSphere 5.x provides the virtualization layer, bringing great benefits from VMware VAAI integration by allowing the offload of provisioning tasks such as instant cloning of VMs. The Vblock Specialized System for Extreme Applications includes a pair of Nexus 3064-T switches to support the management connectivity for all components of the Vblock, and the Cisco Nexus 5548UP switches, configured as a vPC pair, provide 10 GbE and Fibre Channel over Ethernet (FCoE) connectivity for the production workload. For the dual-cabinet, two X-Brick system, the backend is boosted by two 18-port InfiniBand switches, enabling high-speed Remote Direct Memory Access (RDMA) communication between the four X-Brick nodes in the cluster. The EMC VNXe3150 (single-cabinet) or VNXe3300 (dual-cabinet) is used to house the storage requirements of the management VMs. As you can gather from the components included, the Vblock Specialized System is a fully redundant platform with no single point of failure and extremely high performance.


Some Figures
The Vblock Specialized System for Extreme Applications single- and dual-cabinet configurations can support the following VDI environments:

Single Rack system
♦ The Single Rack system can host approximately 3500 virtual desktops with one X-Brick, providing 7.5TB of usable storage / 37.5TB (5:1 deduplication).
♦ 1x X-Brick = 150K fully random 4K IOPS @ 50% read / 50% write (250K IOPS @ 100% read), latency <1ms


Two Rack system
♦ The Dual Rack system can host approximately 7000 virtual desktops with two X-Bricks, providing 15TB of usable storage / 75TB (5:1 deduplication).
♦ 2x X-Brick = 300K fully random 4K IOPS @ 50% read / 50% write (500K IOPS @ 100% read), latency <1ms


Summary
The Vblock Specialized System for Extreme Applications is a pre-configured, pre-tested, all-flash system which can deliver extremely high levels of performance, particularly for random I/O workloads that require low latency.

Thanks to Shree Das and Pankesh Mehta for providing content.

Useful Links
http://xtremio.com/vblock
http://www.vce.com/products/specialized/extreme-apps
http://virtualgeek.typepad.com/virtual_geek/2013/11/xtremio-taking-the-time-to-do-it-right.html
http://www.thulinaround.com/2013/11/14/peeling-back-the-layers-of-xtremio-what-is-an-x-brick/
