ViPR Controller – Ingest V(x)Block UCS Boot Volumes

The following provides guidance for a scenario where your ESXi host boots from SAN, which is, for example, the standard configuration for Cisco UCS blades included in a Vblock/VxBlock. These boot volumes will most likely have been created outside of ViPR Controller, so they must be ingested before performing any host migration procedures, for example moving a UCS ESXi blade to another cluster in vCenter.

Note: skipping the ingestion will result in the removal of the boot volume from the ESXi host masking if you initiate the migration process using ViPR commands (more on this later).

Note: To avoid ViPR provisioning issues, ensure the ESXi BOOT volume masking views have _NO_VIPR appended to their exclusive mask names; this prevents ViPR from using the exclusive export mask when adding a new ESXi host to a cluster:

BootVolumeMaskingViewName_NO_VIPR
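
Before ingesting, it can help to confirm exactly which LUN each UCS blade boots from so the ingested volumes can be matched against the existing masking views. The following is a minimal PowerCLI sketch, not a definitive procedure; it assumes an existing Connect-VIServer session and relies on the IsBootDevice flag reported by 'esxcli storage core device list' (property names can differ between ESXi releases):

###### List the SAN boot device for each ESXi host in a cluster (illustrative sketch) ######
$Cluster = Read-Host "Please enter the Cluster name"
Get-Cluster $Cluster | Get-VMHost | ForEach {
    $esxcli = Get-EsxCli -VMHost $_
    $esxcli.storage.core.device.list() | Where { $_.IsBootDevice -eq "true" } | Select @{N="VMHost";E={$esxcli.VMHost}}, Device, Size
}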


Cisco UCS – Determining ESXi FNIC & ENIC via PowerCLI

The following script retrieves a listing of the network (ENIC) and storage (FNIC) drivers installed on Cisco UCS blades at a per-vSphere-cluster level. You may download the ‘Cisco_FNIC_ENIC.ps1’ script here: Cisco_FNIC_ENIC.ps1 (remove the .doc extension).

The script begins by prompting you to enter the vCenter IP address, username and password. A list of all clusters residing in vCenter is then returned, followed by a prompt to enter the vSphere cluster name; from the cluster specified, the script retrieves a per-ESXi-host listing of ENIC and FNIC driver versions. The script first prompts the user to enable SSH on all hosts in the cluster:

UCS_FNIC_ENIC1

UCS_FNIC_ENIC2

 

Once you have completed the tasks that required SSH access on the hosts, return to the running script and enter ‘y’ to disable SSH again on all hosts in the specified cluster:

UCS_FNIC_ENIC3

PowerCLI Script:

#######################################
# Confirm CISCO FNIC & ENIC Drivers
# Date: 2016-07-01
# Created by: David Ring
#######################################

###### vCenter Connectivity Details ######

Write-Host "Please enter the vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline
$VMHost = Read-Host

Write-Host "Please enter the vCenter Username:" -ForegroundColor Yellow -NoNewline
$User = Read-Host

Write-Host "Please enter the vCenter Password:" -ForegroundColor Yellow -NoNewline
$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

###### Please enter the Cluster to check CISCO Versions ######

Write-Host "Clusters Associated with this vCenter:" -ForegroundColor Green

ForEach ($VMcluster in (Get-Cluster -Name '*' | Sort-Object))
{
    Write-Host $VMcluster
}

Write-Host "Please enter the Cluster to lookup CISCO FNIC & ENIC Drivers:" -ForegroundColor Yellow -NoNewline
$VMcluster = Read-Host

###### Enabling SSH ######

Write-Host "Do you need to Enable SSH on the Cluster ESXi Hosts?" -ForegroundColor Yellow -NoNewline
Write-Host " Y/N:" -ForegroundColor Red -NoNewline
$SSHEnable = Read-Host

if ($SSHEnable -eq "y") {
    Write-Host "Enabling SSH on all hosts in your specified cluster:" -ForegroundColor Green
    Get-Cluster $VMcluster | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}
}

###### Confirm Driver Versions ######

Write-Host "Confirm CISCO FNIC & ENIC Drivers" -ForegroundColor Green

$hosts = Get-Cluster $VMcluster | Get-VMHost

ForEach ($vihost in $hosts)
{
    Write-Host -ForegroundColor Magenta "Gathering Driver versions on" $vihost
    $esxcli = Get-VMHost $vihost | Get-EsxCli
    $esxcli.software.vib.list() | Where { $_.Name -like "net-enic"} | Select @{N="VMHost";E={$esxcli.VMHost}}, Name, Version
    $esxcli.software.vib.list() | Where { $_.Name -like "scsi-fnic"} | Select @{N="VMHost";E={$esxcli.VMHost}}, Name, Version
}

###### Disabling SSH ######

Write-Host "Ready to Disable SSH?" -ForegroundColor Yellow -NoNewline
Write-Host " Y/N:" -ForegroundColor Red -NoNewline
$SSHDisable = Read-Host

if ($SSHDisable -eq "y") {
    Write-Host "Disabling SSH" -ForegroundColor Green
    Get-Cluster $VMcluster | Get-VMHost | ForEach {Stop-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"}) -Confirm:$FALSE}
}
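
Note: on ESXi 6.5 and later the Cisco native drivers are named ‘nenic’ and ‘nfnic’ rather than ‘net-enic’ and ‘scsi-fnic’, so the exact-name filters above will return nothing on those hosts. If you are running a mixed estate, a broader match can be dropped into the same loop in place of the two filter lines – a small sketch under that assumption:

$esxcli.software.vib.list() | Where { $_.Name -match "enic|fnic" } | Select @{N="VMHost";E={$esxcli.VMHost}}, Name, Version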

 

Useful References:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1027206

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html

Vblock – Advanced Management Pod Second Generation (AMP-2)

Vblock Advanced Management Pod (AMP)

The AMP consists of the management components of a Vblock system. These management components are self-contained and provide out-of-band management for the entire Vblock infrastructure. The servers and storage (in the case of AMP-2HA) that make up the AMP host all of the Vblock management applications in a dedicated environment, separate from the production environment. This separation allows the Vblock to continue operating even in the event of an AMP failure.

Management software stack:
Code versions and exact details of the AMP software management components are determined by the RCM (Release Certification Matrix) level used to configure the Vblock. Here is an example of some of the core AMP management components:
– VMware vCenter Server Suite
– Unisphere for VNX|VMAX (RecoverPoint and VPLEX Optional)
– XtremIO XMS
– PowerPath Licensing Appliance
– EMC ESRS Appliances
– Cisco DCNM
– VCE Vision Intelligent Operations
– Cisco Nexus 1000v

AMP-2 Hardware
There are three AMP-2 models associated with the Vblock 340, 540 & 740: AMP-2P | AMP-2RP | AMP-2HA

AMP-2P (“Physical”) – One dedicated Cisco UCS C220 server to run the management workload.
AMP-2RP (“Redundant Physical”) – Two Cisco UCS C220 servers supporting application and hardware redundancy.

AMP-2HA includes two or three Cisco C-Series servers for compute, along with a highly available VNXe3200 storage array where the VM storage resides, providing a redundant out-of-band management environment:
AMP-2HA BASE (“High Availability”) – Two Cisco UCS C220|C240 servers and shared storage presented by an EMC VNXe3200 storage array.
AMP-2HA PERFORMANCE (“High Availability”) – Three Cisco UCS C220|C240 servers and additional ‘FAST VP’ VNXe3200 storage.

Taking a look at the AMP-2HA hardware stack:
The second generation of the Advanced Management Platform in its high-availability model (AMP-2HA) centralizes the management components of the Vblock System and delivers out-of-band management.

Vblock 340: AMP-2HA ‘Base’ Hardware stack
2x C220 M3 SFF Server (1RU per server)
2x 32GB Flex Flash SD Card Modules (ESXi install location)
2x CPU per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM per C220 (8x 16GB DDR3)
2x Cisco Nexus 3064-T switches for management networking
VNXe3200 Storage Array (25-drive DPE loaded with 3x 100GB FAST Cache drives & 11x 600GB 10K SAS 2.5″ drives)

Vblock 340: AMP-2HA ‘Performance’ Hardware stack
3x C220 M3 SFF Server (1RU per server)
2x 32GB Flex Flash SD Card Modules (ESXi install location)
2x CPU per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM per C220 (8x 16GB DDR3)
VNXe3200 Storage Array (25-drive DPE loaded with 3x 100GB FAST Cache drives, 6x 100GB FAST VP drives & 11x 600GB 10K SAS 2.5″ drives)

Vblock 540 & 740: AMP-2HA ‘Base’ Hardware stack
2x C240 M3 SFF Server (2RU per server)
2x 32GB Flex Flash SD Card Modules (ESXi install location)
2x CPU per server (2.4GHz Ivy Bridge, 6 cores per CPU, 7.2GT/s QPI)
128GB RAM per C240 (8x 16GB DDR3)
VNXe3200 Storage Array (25-drive DPE loaded with 3x 100GB FAST Cache drives & 11x 600GB 10K SAS 2.5″ drives)

Vblock 540 & 740: AMP-2HA ‘Performance’ Hardware stack
3x C240 M3 SFF Server (2RU per server)
2x 32GB Flex Flash SD Card Modules (ESXi install location)
2x CPU per server (2.4GHz Ivy Bridge, 6 cores per CPU)
128GB RAM per C240 (8x 16GB DDR3)
VNXe3200 Storage Array (25-drive DPE loaded with 3x 100GB FAST Cache drives, 6x 100GB FAST VP drives & 11x 600GB 10K SAS 2.5″ drives)

VxBlock 340, 540 & 740 – The AMP-2HA Performance model is the default for VMware NSX deployments, allowing each of the three NSX controllers to be dedicated to an ESXi host for redundancy and scalability.

References:
http://www.vce.com/asset/documents/vblock-340-gen3-2-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-540-gen2-0-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-740-gen5-0-architecture-overview.pdf

http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C240M3_LFF_SpecSheet.pdf
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C220M3_SFF_SpecSheet.pdf

EMC VMAX – SMI-S Configuration & Discovery

The following are configuration notes for setting up the ‘VMAX Management HOST’ for communication via SMI-S, for purposes such as ‘ViPR’ or ‘VCE Vision’. Before proceeding, ensure you have presented and configured the ‘VMAX Management HOST’ with gatekeeper volumes from the VMAX to allow in-band communication over Fibre Channel. The EMC SMI-S Provider is included as part of the ‘Solutions Enabler with SMIS’ install package, which can be downloaded from ‘support.emc.com’.

Begin by installing the SMI-S Provider, ensuring you select the ‘Array provider’ (Windows does not require the Host provider) and choose the option for SMISPROVIDER_COMPONENT:
VisionVMAX1

From the Windows services.msc console, check that both the ‘ECOM’ and ‘storsrvd’ services are set to Automatic and are in a running state:
VisionVMAX2
Check that the EMC storsrvd daemon is installed and running, from a Windows cmd prompt, using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list

VisionVMAX3
Or, using the SC (Service Control) command, you can query, start and configure the ECOM and storsrvd services (note that sc config requires a space after ‘start=’):
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start= auto
sc config storsrvd start= auto

VisionVMAX4

Run netstat -a and check that the host is listening on ports 5988 and 5989:
VisionVMAX5
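
If PowerShell is available on the management host, the same listening check can be scripted; a quick sketch (Test-NetConnection requires Windows 8.1 / Server 2012 R2 or later):

# Verify the ECOM CIM ports are answering locally (5988 = HTTP, 5989 = HTTPS/SSL)
Test-NetConnection -ComputerName localhost -Port 5988
Test-NetConnection -ComputerName localhost -Port 5989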

UPDATE ENVIRONMENT VARIABLES:
Add the SYMCLI and ECOM installation directory paths (for example DRIVE:\Program Files\EMC\SYMCLI\bin and DRIVE:\Program Files\EMC\ECIM\ECOM\BIN) to the list of system paths:
VisionVMAX2a
Or use the Windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:

setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"

Perform a ‘symcfg discover’ followed by a ‘symcfg list’ to ensure communication is in place between the VMAX and the VMAX management server.
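
For reference, the two commands are run from a cmd or PowerShell prompt on the management host:

symcfg discover
symcfg list

‘symcfg discover’ scans the gatekeeper devices and builds/refreshes the local SYMAPI database; ‘symcfg list’ should then return the attached VMAX array(s) with their serial numbers and attachment type.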

If you experience issues such as the ECOM service failing to start, it is worth rebooting the management server at this stage.

ECOM SERVER: Adding a new SMI-S Provider user
Provided all the validations are successful, proceed to log in to the ECOM server and create the user you would like to use for connectivity:
Open https://localhost:5989/ecomconfig
Login with default credentials of: admin / #1Password
VisionVMAX6a

Select the option to add a new user and create the Vision user with the administrator role and local scope:
Visionvmax7ab
VisionVMAX8ab

Windows Firewall
If the Windows Firewall is enabled, rules will need to be created to allow the ECOM ports TCP 5988 & 5989 and the SLP port UDP 427. For example, using the Windows command line (netsh) to create rules for SLP and ECOM:

netsh advfirewall firewall add rule name="SLP" dir=in protocol=UDP localport=427 action=allow

netsh advfirewall firewall add rule name="ECOM" dir=in protocol=TCP localport=5988-5989 action=allow

netsh advfirewall firewall show rule name=SLP
netsh advfirewall firewall show rule name=ECOM

VisionVMAX11

Confirm VMAX Connectivity via SMI-S (TestSMIProvider)
Open a Windows cmd prompt as an admin user; if the environment variable has not been set, you will need to ‘cd’ to C:\Program Files\EMC\ECIM\ECOM\BIN. Type TestSMIProvider.exe at the prompt and accept all defaults, except for the username and password, where you enter the Vision user and password created through the ECOM console:

VisionVMAX9

At the prompt, type the ‘dv’ (display version info) command to confirm connectivity between the VMAX and SMI-S:
VisionVMAX10

For any troubleshooting please refer to: ‘C:\Program Files\EMC\ECIM\ECOM\log’
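
If you prefer to check the logs from PowerShell, the following small sketch lists the most recently written files in that directory (log file names vary by ECOM release, so no specific file name is assumed):

# Show the five most recently updated ECOM log files, newest first
Get-ChildItem 'C:\Program Files\EMC\ECIM\ECOM\log' | Sort-Object LastWriteTime -Descending | Select-Object -First 5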

Note: When configuring Vision, please ensure you use the ‘VMAX Management HOST’ IP address for all VMAX entries in the Vblock.xml configuration file.

The Vblock vCake

Created by Cliff Cahill @CliffCahill

In conjunction with the announcement of the ‘Vblock 740’, @VCE’s latest addition to the Vblock range, Cliff kindly put his creative and baking skills to good effect by creating the ‘Vblock 740 vCake’. The Vblock System 740 is VCE’s flagship converged infrastructure: an IT infrastructure built on industry-leading technology, combining network and compute components from Cisco, storage in the form of the EMC VMAX³, and virtualization from VMware – all market leaders in their respective technology sectors.

The steps below outline the process that was followed in order to deliver the Vblock vCake:

1. Logical Configuration Survey (LCS) & Bill of Materials (BOM)
2. Physical Build Commences
3. Configuring the Advanced Management Platform (AMP)
4. Configuring the Cisco Network Components (MDS, NEXUS) & Compute Components (Cisco UCS)
5. Configuring the EMC VMAX³ & VMware Virtualization Infrastructure (ESXi, vCenter, VUM)
6. Logical Configuration QA
7. Vblock Sent to Distribution
8. Deployment and Implementation Begins
9. Deployment and Implementation Knowledge Transfer
10. Vblock Enters Production

1. Logical Configuration Survey and Bill of Materials
The vArchitect has taken the customer’s requirements and expertly sized the Vblock to deliver optimum performance for the customer’s mission-critical applications. The Bill of Materials has been completed and the Technical Program Team has qualified and validated the solution design. The Logical Configuration Survey has been signed off with the customer and the Vblock has been added to the production schedule. All components are laid out in preparation for integration:

vCake1

2. Physical Build Commences
Next, our manufacturing team places each part of the Vblock platform into its correct location within the cabinet:
vCake3

The Cisco UCS, fabric interconnects, network switches and EMC VMAX³ are now installed:
vCake4

Then the power outlet units (POUs) and all network and power cabling are installed and connected to the appropriate ports based on a Master Port Map:
vCake7

3. Configuring the Advanced Management Platform (AMP)
Next, the Vblock platform goes through the logical integration phase, where it is expertly configured by the LB Team to meet specific customer requirements, beginning with the AMP configuration, which runs the software that manages the platform (Cisco C220 servers and EMC VNXe storage array):
vCake11

4. Configuring the Cisco Network Components (MDS, NEXUS) & Compute Components (Cisco UCS)
Code upgrades are completed based on the fully tested and validated RCM. Initial and advanced scripting is executed:
vCake10

5. Configuring the EMC VMAX³ & VMware Virtualization infrastructure (ESXi, vCenter, VUM)
The EMC VMAX³ bin file has been loaded and the VMware ESXi clusters are provisioned via UIM:
vCake12

6. Logical Configuration QA
Logical configuration is complete, QA is done, and VCE Vision verifies compliance with the RCM. The hand-off email is sent:
vCake13

7. Vblock Sent to Distribution
The Vblock is down in distribution and ready for dispatch. The customer-agreed delivery date is achieved. The Vblock is covered to ensure immaculate delivery to its proud new owner:
vCake14

8. Deployment and Implementation Begins
Vblock arrives at the customer site as a piece of truly converged infrastructure and the Deployment and Implementation phase commences:
photo 1
photo 2

9. Deployment and Implementation Knowledge Transfer
The Vblock has been integrated into the customer environment and Deployment & Implementation knowledge transfer begins:
photo 3

10. Vblock Enters Production
The Vblock is now in operation – the end user has already helped themselves to a few compute slices:
photo 4
photo 5

Vblock Infrastructure Platform is Transforming IT
vCake Final

BOM for Biscuit Cake
400g digestives
2 Crunchies
450g milk choc
90g butter
5tbsp double cream
1tbsp golden syrup

BOM for Ganache:
180ml double cream
28g butter
227g dark choc

Logical Build Instructions for vCake
Biscuit Cake
Melt the milk chocolate with the butter over a pot of simmering water.
Break up the biscuits and crush the Crunchies in a large bowl.
Once the milk chocolate has melted, cool slightly and then add the cream and golden syrup.
Mix using a whisk until completely combined.
Add to the digestives and make sure every biscuit is covered.
Put the mixture into a prepared tin lined with cling film and refrigerate.

Ganache
Break the dark chocolate into single squares and add to a glass bowl.
Slowly bring the cream and butter to the boil and pour over the dark chocolate.
Allow to sit for a couple of minutes and then mix thoroughly until the chocolate has completely melted.
Allow to set until the desired consistency has been reached.

EMC XtremIO – Welcome to Vblock

Vblock Specialized System for Extreme Applications

This Vblock, as the name suggests, is VCE’s solution targeted at extreme applications such as VDI and high-performance database applications. Using the XtremIO all-flash enterprise storage array, this Vblock delivers outstanding performance with consistent sub-millisecond response times, and its real-time inline deduplication removes redundant data, lowering the amount of capacity required. If your workload requires low latency and a high number of IOPS and is random in nature, then this is an optimal solution for your application.

There are two flavors of the Specialized Vblock: a single-cabinet system and a dual-cabinet system. The single-cab configuration contains a single XtremIO X-Brick while the dual-cab offering presents a two X-Brick configuration.
XTVblockComponents

The Cisco C220 servers and the VNXe are used for the application management workloads, while the UCS servers and XtremIO array drive the production workloads. VMware vSphere 5.x provides the virtualization layer, bringing great benefit from VMware VAAI integration, which allows the offload of provisioning tasks such as instant cloning of VMs. The Vblock Specialized System for Extreme Applications includes a pair of Nexus 3064-T switches to support management connectivity for all components of the Vblock, and Cisco Nexus 5548UP switches configured as a vPC pair provide 10GbE and Fibre Channel over Ethernet (FCoE) connectivity for the production workload. For the dual-cabinet, two X-Brick system, the back end is boosted by two 18-port InfiniBand switches enabling high-speed Remote Direct Memory Access (RDMA) communication between the four X-Brick nodes in the cluster. An EMC VNXe3150 (single-cabinet) or VNXe3300 (dual-cabinet) houses the storage for the management VMs. As you can gather from the components included, the Vblock Specialized System is a fully redundant, no-single-point-of-failure and extremely performant platform.

XIO Table

Some Figures
Vblock Specialized System for Extreme Applications Single/Dual cabinet Vblocks can support the following VDI environments:

XTVblockVDI

Single Rack system
♦ The Single Rack system can host approximately 3,500 virtual desktops with one X-Brick, providing 7.5TB of usable physical capacity (approximately 37.5TB effective at 5:1 deduplication).
♦ 1x X-Brick = 150K fully random 4K IOPS @ 50% read/50% write (250K IOPS @ 100% read), latency <1ms

1RXAPP

Two Rack system
♦ The Dual Rack system can host approximately 7,000 virtual desktops with two X-Bricks, providing 15TB of usable physical capacity (approximately 75TB effective at 5:1 deduplication).
♦ 2x X-Brick = 300K fully random 4K IOPS @ 50% read/50% write (500K IOPS @ 100% read), latency <1ms

2RXAPP

Summary
The Vblock Specialized System for Extreme Applications is a pre-configured, pre-tested, all-flash system that can meet extremely high levels of performance, particularly for random I/O workloads that require low latency.

Thanks to Shree Das and Pankesh Mehta for providing content.

Useful Links
http://xtremio.com/vblock
http://www.vce.com/products/specialized/extreme-apps
http://virtualgeek.typepad.com/virtual_geek/2013/11/xtremio-taking-the-time-to-do-it-right.html
http://www.thulinaround.com/2013/11/14/peeling-back-the-layers-of-xtremio-what-is-an-x-brick/

XT1