V(x)Block – AMP VUM & SQL Active Directory Integration

When a VxBlock is shipped from the factory, all Windows and SQL user/database accounts are set up as local accounts, for the obvious reason that the customer's Active Directory does not exist in the factory. This post details the steps to integrate the VUM VM and SQL with Active Directory, change the local Windows and SQL accounts to AD accounts, and modify the SQL database permissions to use an assigned AD account.

At a high level these are the prerequisite steps (a PowerShell sketch follows the list):

– Change DNS values on the Windows VUM VM (if different from LCS stated values).
– Join Windows VUM VM to AD.
– Reboot VUM VM.
– Snapshot VUM VM (precautionary step).
– Add domain\svc_vum to local admin group of the VUM VM.
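
A minimal PowerShell sketch of these Windows-side prerequisites, assuming Windows Server 2012 or later on the VUM VM; the adapter alias, DNS server addresses, domain name and service-account name below are placeholders, not LCS values:

# 1. Set the customer's AD DNS servers (placeholder interface alias and addresses)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses 10.1.1.10,10.1.1.11

# 2. Join the VUM VM to the domain and reboot
Add-Computer -DomainName customer.local -Credential (Get-Credential) -Restart

# 3. After the reboot, take a vCenter snapshot of the VUM VM (precautionary step)

# 4. Add the VUM service account to the local Administrators group
net localgroup Administrators "CUSTOMER\svc_vum" /add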

Use the following procedure to configure domain service accounts for the VUM server and its services, and to configure SQL Server access permissions, on a VxBlock-based EHC deployment:

Continue reading

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

The following provides guidance for a scenario where your ESXi hosts boot from SAN, which is the standard configuration for Cisco UCS blades in a Vblock/VxBlock. These boot volumes will most likely have been created outside of ViPR Controller, so they must be ingested into ViPR before performing any host migration procedures, for example moving a UCS ESXi blade to another cluster in vCenter.

Note: If the ingestion is not performed, the boot volume will be removed from the ESXi host's masking when you initiate the migration process using ViPR commands (more on this later).

Note: To avoid ViPR provisioning issues, ensure the ESXi boot volume masking views have _NO_VIPR appended to their exclusive mask names; this prevents ViPR from using the exclusive export mask when adding a new ESXi host to a cluster:

BootVolumeMaskingViewName_NO_VIPR

Continue reading

Cisco UCS – Determining ESXi FNIC & ENIC Driver Versions via PowerCLI

The following script allows the user to retrieve a listing of the Network (ENIC) and Storage (FNIC) driver versions installed on Cisco UCS blades at a per-vSphere-cluster level. You may download the 'Cisco_FNIC_ENIC.ps1' script here: Cisco_FNIC_ENIC.ps1 (remove the .doc extension).

The script begins by prompting you to enter the vCenter IP address, username and password, then returns a list of all the clusters in that vCenter and prompts you for the vSphere cluster name. For the chosen cluster it retrieves a per-ESXi-host listing of ENIC and FNIC driver versions. Before doing so, the script prompts you to enable SSH on all the hosts in the cluster:

[Screenshots: UCS_FNIC_ENIC1, UCS_FNIC_ENIC2]

Once you have completed the tasks that required SSH access on the hosts, return to the running script and enter 'y' to disable SSH again on all the hosts in the specified cluster:

[Screenshot: UCS_FNIC_ENIC3]

PowerCLI Script:

#######################################
# Confirm CISCO FNIC & ENIC Drivers
# Date: 2016-07-01
# Created by: David Ring
#######################################

###### vCenter Connectivity Details ######

Write-Host "Please enter the vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline

$VMHost = Read-Host

Write-Host "Please enter the vCenter Username:" -ForegroundColor Yellow -NoNewline

$User = Read-Host

Write-Host "Please enter the vCenter Password:" -ForegroundColor Yellow -NoNewline

$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

###### Please enter the Cluster to check CISCO Versions #######

Write-Host "Clusters Associated with this vCenter:" -ForegroundColor Green

$VMcluster = '*'

ForEach ($VMcluster in (Get-Cluster -name $VMcluster)| sort)

{
Write-Host $VMcluster
}

Write-Host "Please enter the Cluster to lookup CISCO FNIC & ENIC Drivers:" -ForegroundColor Yellow -NoNewline

$VMcluster = Read-Host

###### Enabling SSH ######

Write-Host "Do you need to Enable SSH on the Cluster ESXi Hosts? " -ForegroundColor Yellow -NoNewline

Write-Host " Y/N:" -ForegroundColor Red -NoNewline

$SSHEnable = Read-Host

if ($SSHEnable -eq "y") {

Write-Host "Enabling SSH on all hosts in your specified cluster:" -ForegroundColor Green

Get-Cluster $VMcluster | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}

}

###### Confirm Driver Versions ######

Write-Host "Confirm CISCO FNIC & ENIC Drivers" -ForegroundColor Green

$hosts = Get-Cluster $VMcluster | Get-VMHost

forEach ($vihost in $hosts)

{

Write-Host -ForegroundColor Magenta "Gathering Driver versions on" $vihost

$esxcli = Get-VMHost $vihost | Get-EsxCli

$esxcli.software.vib.list() | Where { $_.Name -like "net-enic"} | Select @{N="VMHost";E={$ESXCLI.VMHost}}, Name, Version

$esxcli.software.vib.list() | Where { $_.Name -like "scsi-fnic"} | Select @{N="VMHost";E={$ESXCLI.VMHost}}, Name, Version

}

###### Disabling SSH ######

Write-Host "Ready to Disable SSH? " -ForegroundColor Yellow -NoNewline

Write-Host " Y/N:" -ForegroundColor Red -NoNewline

$SSHDisable = Read-Host

if ($SSHDisable -eq "y") {

Write-Host "Disabling SSH" -ForegroundColor Green

Get-Cluster $VMcluster | Get-VMHost | ForEach {Stop-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"}) -Confirm:$FALSE}

}

 

Useful References:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1027206

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html

EMC UIM/P – Editing The Database

Thank you @CliffCahill for providing this trick!

Ensure you back up the UIM/P (Unified Infrastructure Manager for Provisioning) database before you begin. The following provides detailed steps on how to modify IP settings for the ESXi host service offerings stored in the UIM database.

Log in to the UIM CLI via PuTTY.
# To log in to the uim voyencedb database:
su - pgdba
psql voyencedb uim

To pull back all ESXi O/S settings:
select * from ossettings;

To update the gateway for all service offerings / ESXi hosts:
update ossettings set gateway = '10.10.1.254';

To update the IP address of an individual ESXi host (the id is listed in the output of the select command above):
update ossettings set ip_address = '10.10.1.10' where id = 2338;
update ossettings set ip_address = '10.10.1.11' where id = 2302;

Vblock – Advanced Management Pod Second Generation (AMP-2)

Vblock Advanced Management Pod (AMP)

The AMP consists of the management components of a Vblock system. These management components are self-contained and provide out-of-band management for the entire Vblock infrastructure. The servers and storage (in the case of AMP-2HA) that make up the AMP host all of the Vblock management applications in a dedicated environment, separate from the production environment. This separation allows the Vblock to continue operating even in the event of an AMP failure.

Management Software stack:
Code versions and exact details of the AMP software management components are determined by the RCM (Release Certification Matrix) level used to configure the Vblock. Here is an example of some of the core AMP management components:
– VMware vCenter Server Suite
– Unisphere for VNX|VMAX (RecoverPoint and VPLEX Optional)
– XtremIO XMS
– PowerPath Licensing Appliance
– EMC ESRS Appliances
– Cisco DCNM
– VCE Vision Intelligent Operations
– Cisco Nexus 1000v

AMP-2 Hardware
There are three AMP-2 models associated with the Vblock 340, 540 & 740: AMP-2P | AMP-2RP | AMP-2HA

AMP-2P ("Physical") – One dedicated Cisco UCS C220 server to run the management workload.
AMP-2RP ("Redundant Physical") – Two Cisco UCS C220 servers supporting application and hardware redundancy.

The AMP-2HA includes two or three Cisco C-Series servers for compute, along with a highly available VNXe3200 storage array on which the VM storage resides, providing a redundant out-of-band management environment:
AMP-2HA BASE ("High Availability") – Two Cisco UCS C220|C240 servers and shared storage presented by an EMC VNXe3200 storage array.
AMP-2HA PERFORMANCE ("High Availability") – Three Cisco UCS C220|C240 servers and additional 'FAST VP' VNXe3200 storage.

Taking a look at the AMP-2HA Hardware stack:
The second-generation Advanced Management Pod in its high-availability form (AMP-2HA) centralizes the management components of the Vblock system and delivers out-of-band management.

Vblock 340: AMP-2HA ‘Base’ Hardware stack
2x C220 M3 SFF Server (1RU per server)
2x 32GB Flex Flash SD Card Modules (ESXi install location)
2x CPU per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM per C220 (8x 16GB DDR3)
2x Cisco Nexus 3064-T Switch for management networking
VNXe3200 Storage Array (25-drive DPE loaded with 3x 100GB FAST Cache drives & 11x 600GB 10K SAS 2.5″ drives)

Vblock 340: AMP-2HA ‘Performance’ Hardware stack
3x C220 M3 SFF Server (1RU per server)
2x 32GB Flex Flash SD Card Modules (ESXi install location)
2x CPU per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM per C220 (8x 16GB DDR3)
VNXe3200 Storage Array (25-drive DPE loaded with 3x 100GB FAST Cache drives, 6x 100GB FAST VP drives & 11x 600GB 10K SAS 2.5″ drives)

Vblock 540&740: AMP-2HA ‘Base’ Hardware stack
2x C240 M3 SFF Server (2RU per server)
2x 32GB Flex Flash SD Card Modules (ESXi install location)
2x CPU per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM per C240 (8x 16GB DDR3)
VNXe3200 Storage Array (25-drive DPE loaded with 3x 100GB FAST Cache drives & 11x 600GB 10K SAS 2.5″ drives)

Vblock 540&740: AMP-2HA ‘Performance’ Hardware stack
3x C240 M3 SFF Server (2RU per server)
2x 32GB Flex Flash SD Card Modules (ESXi install location)
2x CPU per server (2.4GHz Ivy Bridge, 6 cores per CPU)
128GB RAM per C240 (8x 16GB DDR3)
VNXe3200 Storage Array (25-drive DPE loaded with 3x 100GB FAST Cache drives, 6x 100GB FAST VP drives & 11x 600GB 10K SAS 2.5″ drives)

VxBlock 340, 540 & 740 – The AMP-2HA Performance model is the default for VMware NSX deployments, allowing each of the three NSX Controllers to be dedicated to an ESXi host for redundancy and scalability.
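
One way to enforce that controller separation is a DRS VM anti-affinity rule on the AMP cluster; a minimal PowerCLI sketch, where the cluster and controller VM names are examples only, not VCE-defined names:

# Keep the three NSX Controller VMs on separate AMP ESXi hosts (example names)
$controllers = Get-VM -Name "NSX_Controller_1","NSX_Controller_2","NSX_Controller_3"
New-DrsRule -Cluster (Get-Cluster "AMP_Cluster") -Name "Separate-NSX-Controllers" -KeepTogether:$false -VM $controllers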

References:
http://www.vce.com/asset/documents/vblock-340-gen3-2-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-540-gen2-0-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-740-gen5-0-architecture-overview.pdf

http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C240M3_LFF_SpecSheet.pdf
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C220M3_SFF_SpecSheet.pdf

EMC VMAX3 – Adding Gatekeeper RDM Volumes To VMware MGMT VM

This post outlines how to create the VMAX masking view for Gatekeeper RDM volumes, and provides a script to automate adding the RDM disks to a VMware MGMT VM.

First some notes on Gatekeeper volumes:
Solutions Enabler (CLI) and Unisphere (GUI) are the main tools used to manage a VMAX array. Gatekeeper volumes are required to carry the commands issued from the CLI and the GUI and to generate the low-level commands sent to the VMAX array to complete the requested instruction, such as IG, SG, PG, MV or volume creation. It is good practice to use dedicated Gatekeeper devices and to avoid using any device that contains user or application data, which could be impacted by the I/O requirements of the instruction commands. For example, if a device used as a Gatekeeper is also servicing application I/O and the VMAX is executing a command that takes some time, the resulting latency may cause the application to suffer poor performance. For these reasons EMC strongly recommends creating and mapping dedicated devices as Gatekeepers.

VMAX3: Creating the RDM Volumes and Associated Masking View

This is an example Masking View for a two node ESXi cluster on which the VMAX management virtual machine shall reside:

1. Create a Port Group with the VMAX FA ports that the ESXi hosts have been zoned to:
symaccess -sid 123 -name MGMT_VM_PG -type port create
symaccess -sid 123 -name MGMT_VM_PG -type port -dirport 1d:24,2d:31,3D:28,4d:27 add

2. Create the Initiator Group containing the ESXi hosts WWNS:
symaccess -sid 123 -name MGMT_VM_IG -type initiator create -consistent_lun
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF8 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff5CXXF9 add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4C add
symaccess -sid 123 -name MGMT_VM_IG -type initiator -wwn 21000024ff55XX4D add
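
To gather the ESXi hosts' HBA port WWNs for the initiator group, a quick PowerCLI sketch; the host names are placeholders:

# List the FC HBA port WWNs of the MGMT ESXi hosts (example host names)
Get-VMHost "esxi-mgmt-01","esxi-mgmt-02" | Get-VMHostHba -Type FibreChannel |
Select VMHost, Device, @{N="PortWWN";E={"{0:X}" -f $_.PortWorldWideName}}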

3. Create the Storage Group for the Gatekeeper RDM Volumes:
symsg -sid 123 create MGMT_VM_SG -slo optimized -srp SRP_1
Listing the SRP:
symcfg list -srp

4. Create the Gatekeeper volumes (10 Gatekeeper volumes in this example) and add to the MGMT_VM_SG:
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" preview -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" prepare -nop
symconfigure -sid 123 -cmd "create dev count=10, emulation=FBA, sg=MGMT_VM_SG, size=3 CYL, config=tdev;" commit -nop

5. Create the Masking View:
symaccess -sid 123 create view -name MGMT_VM_MV -sg MGMT_VM_SG -pg MGMT_VM_PG -ig MGMT_VM_IG

View Configuration Details

Confirm that the HOSTS are logged into the correct VMAX ports:
symaccess -sid 123 list logins -wwn 21000024ff5CXXF8
symaccess -sid 123 list logins -wwn 21000024ff5CXXF9
symaccess -sid 123 list logins -wwn 21000024ff55XX4C
symaccess -sid 123 list logins -wwn 21000024ff55XX4D

Verify that the HBA is a member of the correct Initiator Group:
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF8
symaccess -sid 123 list -type initiator -wwn 21000024ff5CXXF9
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4C
symaccess -sid 123 list -type initiator -wwn 21000024ff55XX4D

Storage Group details:
symaccess -sid 123 list -type storage -name MGMT_VM_SG -v
symaccess -sid 123 show MGMT_VM_SG -type storage

Port Group details:
symaccess -sid 123 list -type port -name MGMT_VM_PG -v
symaccess -sid 123 show MGMT_VM_PG -type port

Initiator Group details:
symaccess -sid 123 list -type initiator -name MGMT_VM_IG -v
symaccess -sid 123 show MGMT_VM_IG -type initiator

Masking View details:
symaccess -sid 123 list view -name MGMT_VM_MV
symaccess -sid 123 list view -name MGMT_VM_MV -detail

Gatekeeper details:
symdev -sid 123 list -cap 3 -captype cyl
symaccess -sid 123 list assignment -dev 049:052

If you need to remove the devs from the SG:
symaccess -sid 123 -name MGMT_VM_SG -type storage remove devs 049:052
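
With the masking view in place, the ESXi hosts need to rescan their HBAs before the RDM script below will see the new Gatekeeper devices; a quick PowerCLI sketch, where the cluster name is an example:

# Rescan all HBAs and VMFS volumes on the MGMT cluster hosts (example cluster name)
Get-Cluster "MGMT_Cluster" | Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs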


####################################################################
Script to Automate Adding RDM Disks:

PowerCLI Script: Automate Adding RDM Disks
Here is a script which scans for the host LUN ID, resolves the corresponding 'ConsoleDeviceName',
and passes it to New-HardDisk. This greatly simplifies the process of adding large quantities of RDM disks.

There are 4 parameters used in the script. The following 3 will be prompted for:
"Your-ESXi-Hostname" $VMhostname
"Your-VM-Name" $VM
"Your-VMFS-DS-Name" $Datastore

Please edit the runtime name in the script as required; the default is:
"vmhba0:C0:T0:L#"
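
If you are unsure which vmhba your Gatekeeper devices present on, the runtime names can be listed with a quick PowerCLI check once connected to vCenter; $VMhostname is the same ESXi host name the script prompts for:

# List runtime names (vmhbaX:C0:T0:L#) and console device names for the chosen host
Get-ScsiLun -VMHost $VMhostname -LunType Disk | Select RuntimeName, ConsoleDeviceName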

The following example script will automatically create RDMs for the ACLX device (LUN 0) and ten Gatekeeper devices (LUNs 1-10) on a virtual machine, placing the pointer files in a VMFS datastore based on the parameters provided.

#####################################################################

Write-Host "Please edit the runtime name in the script if required before proceeding, the default is:" -ForegroundColor Red
Write-Host "vmhba0:C0:T0:L#" -ForegroundColor Green

Write-Host "Please enter the ESXi/vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline
$VMHost = Read-Host

Write-Host "Please enter the ESXi/vCenter Username:" -ForegroundColor Yellow -NoNewline
$User = Read-Host

Write-Host "Please enter the ESXi/vCenter Password:" -ForegroundColor Yellow -NoNewline
$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

##########################################

$VMhostname = '*'

ForEach ($VMhostname in (Get-VMHost -name $VMhostname)| sort)
{

Write-Host $VMhostname

}

Write-Host "Please enter the ESXi Hostname where your target VM resides:" -ForegroundColor Yellow -NoNewline
$VMhostname = Read-Host

######################################

$Datastore = '*'

ForEach ($Datastore in (Get-Datastore -name $Datastore)| sort)
{

Write-Host $Datastore

}

Write-Host "From the list provided – Please enter the VMFS datastore where the RDM pointer files will reside:" -ForegroundColor Yellow -NoNewline
$Datastore = Read-Host

######################################

$VM = '*'

ForEach ($VM in (Get-VM -name $VM)| sort)
{
Write-Host $VM
}

Write-Host "From the list provided – Please enter the VM Name where the RDM volumes shall be created on:" -ForegroundColor Yellow -NoNewline
$VM = Read-Host

##############
Write-Host "ESXi Hostname you have chosen: " -ForegroundColor Yellow
Write-Host "$VMhostname" -ForegroundColor Green
Write-Host "VMFS you have chosen: " -ForegroundColor Yellow
Write-Host "$Datastore" -ForegroundColor Green
Write-Host "Virtual Machine you have chosen: " -ForegroundColor Yellow
Write-Host "$VM" -ForegroundColor Green

################
## ACLX T0:L0 ##
################
$LUN_0 = Get-ScsiLun -VMHost $VMhostname -LunType Disk | Where-Object {$_.RuntimeName -like "vmhba0:C0:T0:L0"}
$LUN_0 = $LUN_0.ConsoleDeviceName
$LUN_0
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN_0 -DataStore $Datastore

#####################
## Gatekeepers x10 ##
#####################
# Loop through host LUN IDs 1-10, resolve each console device name and create the RDM
ForEach ($i in 1..10)
{
$LUN = Get-ScsiLun -VMHost $VMhostname -LunType Disk | Where-Object {$_.RuntimeName -like "vmhba0:C0:T0:L$i"}
$LUN = $LUN.ConsoleDeviceName
$LUN
New-HardDisk -VM $VM -DiskType RawPhysical -DeviceName $LUN -DataStore $Datastore
}

##############
### VERIFY ###
##############
##Finding RDMs Using PowerCLI:##
# Detailed #
# Get-VM | Get-HardDisk -DiskType "RawPhysical" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName,CapacityGB | fl
# Brief #
# Get-ScsiLun -VMHost $VMhostname -LunType disk
# NAA #
# Get-ScsiLun -VMHost $VMhostname -LunType disk | select CanonicalName

### Get IP Address for ViClient to check GUI ###
# Get-VMHost -Name $VMhostname | Get-VMHostNetworkAdapter