VxBlock + VMAX 950F/FX

VxBlock offers the award-winning all-flash enterprise storage array, the VMAX 950F, either with a VxBlock 740 as primary storage or as a VxBlock Storage Technology Extension (STE) add-on to a VxBlock 350, 540 or 740.


The VMAX 950 hardware, code-named Thunderbolt, running the HYPERMAX OS Cypress release, offers further enhancements in both performance and functionality. Combined with a VxBlock 740, you now have the most powerful, intelligent, scalable and enterprise-ready industry-leading CI to support your most mission-critical applications.

Dell EMC ViPR 3.6 – Install & Config for VMAX AFA

This post details the wizard-driven installation steps to install ViPR Controller and configure it for a VMAX AFA system.

When deploying on a vSphere cluster, the recommendation is to deploy the ViPR Controller on a minimum of a 3-node ESXi DRS cluster, and to set a "Separate Virtual Machines" anti-affinity rule among the ViPR Controller nodes across the available ESXi hosts.
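For example, a minimal PowerCLI sketch of such a rule (the cluster and VM names below are placeholders for your environment):

# Create a "Separate Virtual Machines" DRS rule for the three ViPR Controller nodes
$viprVMs = Get-VM -Name "vipr1","vipr2","vipr3"
New-DrsRule -Cluster (Get-Cluster -Name "MGMT-Cluster") -Name "ViPR-AntiAffinity" -KeepTogether:$false -VM $viprVMs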

Begin by downloading the ViPR Controller packages from support.emc.com


This OVA deploys three VMs in a 2+1 redundant fashion, allowing for the failure of a single controller without affecting availability. A 3+2 OVA is also available.
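The OVA is normally deployed through the vSphere client wizard; as an alternative, here is a hedged PowerCLI sketch (the file name, host, datastore and vApp property names are placeholders; inspect the real properties with Get-OvfConfiguration before deploying):

# Deploy the ViPR Controller OVA via PowerCLI (hypothetical file and object names)
$ovfPath = "C:\Downloads\vipr-controller-2+1.ova"
$ovfConfig = Get-OvfConfiguration -Ovf $ovfPath   # exposes the network/IP properties the vApp expects
# ...populate the $ovfConfig values here, matching the wizard prompts...
Import-VApp -Source $ovfPath -OvfConfiguration $ovfConfig -VMHost (Get-VMHost "esxi01.example.local") -Datastore (Get-Datastore "DS01") -Name "ViPR-Controller"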

VMAX AFA – Compression Notes

VMAX All Flash compression feature – released with HYPERMAX OS 5977.945.890

Some key points:
1. This new feature compresses data before it is written to the flash drives.
2. Compression is managed at the Storage Group level (see the sketch after this list).
3. Storage efficiency savings of approximately 2:1 can be expected.
4. One compression I/O module is required per director; these modules are installed on all VMAX All Flash systems.
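Since compression is set per Storage Group, a hedged SYMCLI sketch of enabling and disabling it follows (the exact symsg syntax here is an assumption and varies by Solutions Enabler version; verify against the symsg man page):

symsg -sid 123 -sg My_SG set -compression
symsg -sid 123 -sg My_SG set -nocompression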

Embedded Management for VMAX All Flash and VMAX3

The following is an excellent post written by Paul Martin (@rawstorage) which details the eMGMT feature available on VMAX All Flash and VMAX3 Systems:

Embedded Management for VMAX All Flash and VMAX3 – Part 1 Introduction to Embedded Management and Configuring Client Server Access

Two key questions that have come up recently and are addressed in Paul’s post:

What can I do with eManagement?

eManagement is a fully functional install of Unisphere for VMAX; you can do everything that is possible with Unisphere. So you have full control over array management, performance statistics, reports, Database Storage Analyzer and a full REST API.
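For instance, a minimal hedged sketch of a REST call against the embedded Unisphere instance (the hostname is a placeholder, and the endpoint path varies by Unisphere version):

# Query the array list via the eManagement REST API.
# -SkipCertificateCheck and -Authentication require PowerShell 6+;
# eManagement typically presents a self-signed certificate.
$cred = Get-Credential   # Unisphere credentials
Invoke-RestMethod -Uri "https://emgmt.example.local:8443/univmax/restapi/system/symmetrix" -Method Get -Credential $cred -Authentication Basic -SkipCertificateCheck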

So what happens if I need command line access for any reason? Is that still possible? The answer is yes. You can always have an external host with Solutions Enabler installed and gatekeepers mapped if this is something you will require on an ongoing basis. You can also configure client/server access to connect to, and utilize, the Solutions Enabler instance on the eManagement server. I’ll take you through that in the next section.
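In outline (Paul’s post walks through this in detail), client/server access is configured by adding an entry to the Solutions Enabler netcnfg file on the external host and pointing SYMCLI at it; a hedged sketch, with placeholder names and addresses:

# SYMAPI_HOME/config/netcnfg entry for the eManagement SE server (2707 is the default storsrvd port)
EMGMT_SERVER - TCPIP emgmt.example.local 10.10.10.10 2707 ANY

# On the client host, direct SYMCLI at that entry:
export SYMCLI_CONNECT=EMGMT_SERVER
export SYMCLI_CONNECT_TYPE=REMOTE
symcfg list   # commands now run against the remote SE instance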

How do I update the eManagement software running on my VMAX array?

The good news here is that you don’t have to. When a new release of HYPERMAX OS (the VMAX operating environment) is installed, the container running the eManagement software is updated for you automatically. That’s one less moving part to worry about, and new features in the microcode are automatically available to you through the latest user interface.

In addition, ViPR support for eMGMT is targeted for Q3 2016.


EMC VMAX – Fully Pre-allocate TDEV

By fully pre-allocating a TDEV, all of the tracks associated with the device are reserved. This may be useful for mission-critical apps or for avoiding any write-miss penalties.

Example SYMCLI:
Single TDEV example:
symconfigure -sid xxx -cmd "start allocate on tdev 0c66 end_cyl=last_cyl allocate_type=persistent;" commit

Range of TDEVs:
symconfigure -sid xxx -cmd "start allocate on tdev 0c6e:1116 end_cyl=last_cyl allocate_type=persistent;" commit
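To confirm the allocation, symdev show reports the device’s allocated tracks and whether the allocation is persistent:

symdev -sid xxx show 0c66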


Example UNISPHERE:
From the Unisphere GUI navigate to Storage > Volumes, right-click the device you wish to modify and select ‘Start allocate’.


EMC VMAX3 – Adding Gatekeeper RDM Volumes To VMware MGMT VM

This post outlines how to create the VMAX masking view for Gatekeeper RDM volumes, along with a script to automate adding the RDM disks to a VMware MGMT VM.

First, some notes on Gatekeeper volumes:
Solutions Enabler (CLI) and Unisphere (GUI) are the main tools used to manage a VMAX array. Gatekeeper volumes are required in order to carry the commands from both CLI and GUI, generating the low-level commands which are sent to the VMAX array to complete the required instruction, such as IG, SG, PG, MV or volume creation. It is good practice to use dedicated Gatekeeper devices and to avoid using any devices which contain user or application data that may be impacted by the I/O requirements of the instruction command. For example, if a device used as a Gatekeeper is also servicing application I/O, a scenario may arise where the VMAX is executing a command which takes some time; as a result of this latency the application may encounter poor performance. These are the reasons why EMC strongly recommends creating and mapping dedicated devices as Gatekeepers.
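On the management host, you can also steer Solutions Enabler toward the dedicated devices using the gkselect file (and away from others with gkavoid); a hedged sketch, assuming a default SYMAPI installation path on Linux and placeholder device names:

# Each line lists the physical name of a device SYMCLI may use as a Gatekeeper
echo "/dev/sdx" >> /var/symapi/config/gkselect
echo "/dev/sdy" >> /var/symapi/config/gkselect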

VMAX3: Creating the RDM Volumes and Associated Masking View

This is an example Masking View for a two node ESXi cluster on which the VMAX management virtual machine shall reside:

1. Create a Port Group with the VMAX FA ports that the ESXi hosts have been zoned to:
symaccess -sid 123 -name MGMT_VM_PG -type port create
symaccess -sid 123 -name MGMT_VM_PG -type port -dirport 1d:24,2d:31,3d:28,4d:27 add

2. Create the Initiator Group containing the ESXi hosts WWNS:
symaccess -sid 123 -name MGMT_VM_IG -type initiator create -consistent_lun
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF8 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF9 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4C add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4D add

3. Create the Storage Group for the Gatekeeper RDM Volumes:
symsg -sid 123 create MGMT_VM_SG -slo optimized -srp SRP_1
Listing the SRP:
symcfg list -srp

4. Create the Gatekeeper volumes (10 Gatekeeper volumes in this example) and add to the MGMT_VM_SG:
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" preview -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" prepare -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" commit -nop

5. Create the Masking View:
symaccess -sid 123 create view -name MGMT_VM_MV -sg MGMT_VM_SG -pg MGMT_VM_PG -ig MGMT_VM_IG

View Configuration Details

Confirm that the HOSTS are logged into the correct VMAX ports:
symaccess -sid 123 list logins -wwn 21000024ff5CXXF8
symaccess -sid 123 list logins -wwn 21000024ff5CXXF9
symaccess -sid 123 list logins -wwn 21000024ff55XX4C
symaccess -sid 123 list logins -wwn 21000024ff55XX4D

Verify that the HBA is a member of the correct Initiator Group:
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF8
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF9
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4C
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4D

Storage Group details:
symaccess -sid 123 list -type storage -name MGMT_VM_SG -v
symaccess -sid 123 show MGMT_VM_SG -type storage

Port Group details:
symaccess -sid 123 list -type port -name MGMT_VM_PG -v
symaccess -sid 123 show MGMT_VM_PG -type port

Initiator Group details:
symaccess -sid 123 list -type initiator -name MGMT_VM_IG -v
symaccess -sid 123 show MGMT_VM_IG -type initiator

Masking View details:
symaccess -sid 123 list view -name MGMT_VM_MV
symaccess -sid 123 list view -name MGMT_VM_MV -detail

Gatekeeper details:
symdev -sid 123 list -cap 3 -captype cyl
symaccess -sid 123 list assignment -dev 049:052

If you need to remove the devs from the SG:
symaccess -sid 123 -name MGMT_VM_SG -type storage remove devs 049:052


####################################################################
Script to Automate Adding RDM Disks:

PowerCLI Script: Automate Adding RDM Disks
Here is a script which scans for each Host LUN ID and maps it to the corresponding ‘ConsoleDeviceName’. This greatly simplifies the process of adding large quantities of RDM disks.

There are 4 parameters used in the script. The following 3 shall be prompted for:
"Your-ESXi-Hostname" $VMhostname
"Your-VM-Name" $VM
"Your-VMFS-DS-Name" $Datastore

Please edit the runtime name in the script as required; the default is:
"vmhba0:C0:T0:L#"

The following example script will automatically create the RDM disks (the ACLX device at L0 plus the ten Gatekeepers at L1 to L10) on a virtual machine and place the pointer files in a VMFS datastore based on the parameters provided.

#####################################################################

Write-Host "Please edit the runtime name in the script if required before proceeding, the default is:" -ForegroundColor Red
Write-Host "vmhba0:C0:T0:L#" -ForegroundColor Green

Write-Host "Please enter the ESXi/vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline
$VMHost = Read-Host

Write-Host "Please enter the ESXi/vCenter Username:" -ForegroundColor Yellow -NoNewline
$User = Read-Host

Write-Host "Please enter the ESXi/vCenter Password:" -ForegroundColor Yellow -NoNewline
$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

##########################################

$VMhostname = '*'

# List all ESXi hosts visible to this connection
ForEach ($VMhostname in (Get-VMHost -Name $VMhostname) | Sort-Object)
{
Write-Host $VMhostname
}

Write-Host "Please enter the ESXi Hostname where your target VM resides:" -ForegroundColor Yellow -NoNewline
$VMhostname = Read-Host

######################################

$Datastore = '*'

# List all datastores visible to this connection
ForEach ($Datastore in (Get-Datastore -Name $Datastore) | Sort-Object)
{
Write-Host $Datastore
}

Write-Host "From the list provided – Please enter the VMFS datastore where the RDM pointer files will reside:" -ForegroundColor Yellow -NoNewline
$Datastore = Read-Host

######################################

$VM = '*'

# List all virtual machines visible to this connection
ForEach ($VM in (Get-VM -Name $VM) | Sort-Object)
{
Write-Host $VM
}

Write-Host "From the list provided – Please enter the VM Name where the RDM volumes shall be created:" -ForegroundColor Yellow -NoNewline
$VM = Read-Host

##############
Write-Host "ESXi Hostname you have chosen: " -ForegroundColor Yellow
Write-Host "$VMhostname" -ForegroundColor Green
Write-Host "VMFS you have chosen: " -ForegroundColor Yellow
Write-Host "$Datastore" -ForegroundColor Green
Write-Host "Virtual Machine you have chosen: " -ForegroundColor Yellow
Write-Host "$VM" -ForegroundColor Green

####################################
## ACLX (T0:L0) + Gatekeepers x10 ##
## (T0:L1 through T0:L10)         ##
####################################
# For each Host LUN ID (L0 is the ACLX device; L1-L10 are the ten Gatekeepers),
# resolve the console device name and attach the device to the VM as a physical-mode RDM.
ForEach ($i in 0..10)
{
$LUN = Get-ScsiLun -VMHost $VMhostname -LunType Disk |
Where-Object {$_.RuntimeName -like "vmhba0:C0:T0:L$i"} |
Select-Object -ExpandProperty ConsoleDeviceName
Write-Host $LUN
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN -DataStore $Datastore
}

##############
### VERIFY ###
##############
##Finding RDMs Using PowerCLI:##
# Detailed #
# Get-VM | Get-HardDisk -DiskType "RawPhysical" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName,CapacityGB | fl
# Brief #
# Get-ScsiLun -VMHost $VMhostname -LunType disk
# NAA #
# Get-ScsiLun -VMHost $VMhostname -LunType disk | select CanonicalName

### Get IP Address for the vSphere Client to check the GUI ###
# Get-VMHost -Name $VMhostname | Get-VMHostNetworkAdapter