DellEMC VxBlock – Cisco UCS Ethernet Adapter Policy

The following details the creation of a VxBlock Cisco UCS ‘Ethernet Adapter Policy’ for B-Series compute blades running VMware ESXi. The policy is referenced by a service profile, which in turn is associated with its respective compute blade.

Note: This guidance is based on vSphere 6.0

UCS Manager GUI

  1. From the UCS Manager GUI click on ‘Servers’ and navigate to Policies -> root -> Adapter Policies.
  2. Choose the option to ‘Add’ a new Ethernet Adapter Policy and provide a name and description (e.g. VMQ-Default).
  3. Under Resources enter the following settings:

Continue reading

V(x)Block – AMP VUM & SQL Active Directory Integration

When a VxBlock ships from the factory, all Windows & SQL user/database accounts are set up as local accounts, for the obvious reason that the customer's Active Directory does not exist in the factory. This post details the steps to integrate the VUM VM and SQL with Active Directory, change the local Windows & SQL accounts to AD accounts, and modify the SQL DB permissions to an assigned AD account.

At a high level these are the prerequisite steps (a scripted sketch follows the list):

– Change DNS values on the Windows VUM VM (if different from LCS stated values).
– Join Windows VUM VM to AD.
– Reboot VUM VM.
– Snapshot VUM VM (precautionary step).
– Add domain\svc_vum to local admin group of the VUM VM.
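
These prerequisites can also be scripted from an elevated PowerShell session on the VUM VM. Below is a minimal sketch; the DNS servers, domain name and service-account name are placeholders for your own values, it assumes Windows Server 2012 or later for the DnsClient cmdlet, and the precautionary snapshot is still taken from vCenter:

###### Prerequisite steps – run locally on the VUM VM (placeholder values) ######

# Point the VM at the customer DNS servers (requires the Server 2012+ DnsClient module)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses 10.10.1.50,10.10.1.51

# Join the VUM VM to the customer domain and reboot
Add-Computer -DomainName "corp.local" -Credential (Get-Credential) -Restart

# After the reboot, add the VUM service account to the local Administrators group
net localgroup Administrators "CORP\svc_vum" /add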

Use the following procedure to configure domain service accounts for the VUM server and its services, and to configure SQL Server access permissions, on a VxBlock-based EHC deployment:

Continue reading

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

The following provides guidance for the scenario where your ESXi hosts boot from SAN, the standard configuration for Cisco UCS blades in a Vblock/VxBlock. These boot volumes will most likely have been created outside of ViPR Controller, so they must be ingested before performing any host migration procedures, for example moving a UCS ESXi blade to another cluster in vCenter.

Note: if the ingestion is not performed, the boot volume will be removed from the ESXi host masking when the migration process is initiated using ViPR commands (more on this later).

Note: To avoid ViPR provisioning issues, ensure the ESXi boot volume masking views have _NO_VIPR appended to their exclusive mask names; this prevents ViPR from using the exclusive export mask when adding a new ESXi host to a cluster (a PowerCLI sketch for locating each host's boot device follows the example below):

BootVolumeMaskingViewName_NO_VIPR
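
Before renaming the masking views, it helps to confirm exactly which device each UCS blade boots from. The following is a minimal PowerCLI sketch (the cluster name is a placeholder, and it assumes the ‘Is Boot Device’ field is reported by your ESXi build) that lists the boot device identifier per host, which you can then match against the corresponding masking view on the array:

###### List the SAN boot device per ESXi host (placeholder cluster name) ######
ForEach ($ESXiHost in (Get-Cluster "Cluster01" | Get-VMHost))
{
$esxcli = Get-EsxCli -VMHost $ESXiHost
$esxcli.storage.core.device.list() | Where {$_.IsBootDevice -eq "true"} | Select @{N="VMHost";E={$ESXiHost.Name}}, Device, Vendor, Model, Size
}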

Continue reading

Cisco UCS – Determining ESXi FNIC & ENIC via PowerCLI

The following script retrieves a listing of the network (ENIC) and storage (FNIC) driver versions installed on the ESXi hosts running on Cisco UCS blades, at a per-vSphere-cluster level. You may download the script here: Cisco_FNIC_ENIC.ps1 (remove the .doc extension).

The script begins by prompting you for the vCenter IP address, username and password, then returns a list of all clusters residing in that vCenter. After you enter a vSphere cluster name, the script asks whether to enable SSH on all hosts in that cluster (for any follow-up tasks that require it) and then retrieves a per-ESXi-host listing of ENIC & FNIC driver versions.


Once you have completed any tasks that required SSH access on the hosts, return to the running script and enter ‘y’ to disable SSH again on all hosts in the specified cluster.


PowerCLI Script:

#######################################
# Confirm Cisco FNIC & ENIC Drivers
# Date: 2016-07-01
# Created by: David Ring
#######################################

###### vCenter Connectivity Details ######

Write-Host "Please enter the vCenter Host IP Address: " -ForegroundColor Yellow -NoNewline
$VMHost = Read-Host

Write-Host "Please enter the vCenter Username: " -ForegroundColor Yellow -NoNewline
$User = Read-Host

Write-Host "Please enter the vCenter Password: " -ForegroundColor Yellow -NoNewline
$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

###### Select the cluster to check Cisco driver versions ######

Write-Host "Clusters Associated with this vCenter:" -ForegroundColor Green

ForEach ($VMcluster in (Get-Cluster | Sort-Object Name))
{
Write-Host $VMcluster
}

Write-Host "Please enter the Cluster to lookup Cisco FNIC & ENIC Drivers: " -ForegroundColor Yellow -NoNewline
$VMcluster = Read-Host

###### Enabling SSH ######

Write-Host "Do you need to Enable SSH on the Cluster ESXi Hosts?" -ForegroundColor Yellow -NoNewline
Write-Host " Y/N: " -ForegroundColor Red -NoNewline
$SSHEnable = Read-Host

if ($SSHEnable -eq "y") {
Write-Host "Enabling SSH on all hosts in your specified cluster:" -ForegroundColor Green
Get-Cluster $VMcluster | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}
}

###### Confirm Driver Versions ######

Write-Host "Confirm Cisco FNIC & ENIC Drivers" -ForegroundColor Green

$hosts = Get-Cluster $VMcluster | Get-VMHost

ForEach ($vihost in $hosts)
{
Write-Host -ForegroundColor Magenta "Gathering Driver versions on" $vihost

# Use esxcli to list the installed enic (network) and fnic (storage) VIBs per host
$esxcli = Get-VMHost $vihost | Get-EsxCli
$esxcli.software.vib.list() | Where {$_.Name -like "net-enic"} | Select @{N="VMHost";E={$esxcli.VMHost}}, Name, Version
$esxcli.software.vib.list() | Where {$_.Name -like "scsi-fnic"} | Select @{N="VMHost";E={$esxcli.VMHost}}, Name, Version
}

###### Disabling SSH ######

Write-Host "Ready to Disable SSH?" -ForegroundColor Yellow -NoNewline
Write-Host " Y/N: " -ForegroundColor Red -NoNewline
$SSHDisable = Read-Host

if ($SSHDisable -eq "y") {
Write-Host "Disabling SSH" -ForegroundColor Green
Get-Cluster $VMcluster | Get-VMHost | ForEach {Stop-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"}) -Confirm:$FALSE}
}

Useful References:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1027206

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html

EMC UIM/P – Editing The Database

Thank you @CliffCahill for providing this trick!

Ensure you back up the UIM/P (Unified Infrastructure Manager for Provisioning) database before you begin. The following provides detailed steps on how to modify IP settings for ESXi host service offerings stored in the UIM database.

Log in to the UIM CLI via PuTTY.
# To log in to the uim voyencedb database:
su - pgdba
psql voyencedb uim

To pull back all ESXi OS settings:
select * from ossettings;

To update the gateway for all service offerings / ESXi hosts:
update ossettings set gateway = '10.10.1.254';

To update the IP address of an individual ESXi host (the id is listed when you run the select command above); re-run the select afterwards to confirm the changes before exiting:
update ossettings set ip_address = '10.10.1.10' where id = 2338;
update ossettings set ip_address = '10.10.1.11' where id = 2302;

Vblock – Advanced Management Pod Second Generation (AMP-2)

Vblock Advanced Management Pod (AMP)

The AMP consists of the management components of a Vblock system; these components are self-contained and provide out-of-band management for the entire Vblock infrastructure. The servers and storage (in the case of AMP-2HA) that make up the AMP host all of the Vblock management applications in a dedicated environment, separate from the production environment. This separation allows the Vblock to continue operating even in the event of an AMP failure.

Management Software stack:
Code versions and exact details of the AMP software management components are determined by the RCM (Release Code Matrix) level used to configure the Vblock. Here are some of the core AMP management components:
– VMware vCenter Server Suite
– Unisphere for VNX|VMAX (RecoverPoint and VPLEX Optional)
– XtremIO XMS
– PowerPath Licensing Appliance
– EMC ESRS Appliances
– Cisco DCNM
– VCE Vision Intelligent Operations
– Cisco Nexus 1000v

AMP-2 Hardware
There are three AMP-2 models associated with the Vblock 340, 540 & 740: AMP-2P | AMP-2RP | AMP-2HA

AMP-2P (“Physical”) – One dedicated Cisco UCS C220 server to run the management workload.
AMP-2RP (“Redundant Physical”) – Two Cisco UCS C220 servers, supporting application and hardware redundancy.

AMP-2HA includes two or three Cisco C-Series servers for compute, along with a highly available VNXe3200 storage array hosting the management VM storage, providing a redundant out-of-band management environment:
AMP-2HA BASE (“High Availability”) – Two Cisco UCS C220|C240 servers and shared storage presented by an EMC VNXe3200 storage array.
AMP-2HA PERFORMANCE (“High Availability”) – Three Cisco UCS C220|C240 servers and additional ‘FAST VP’ VNXe3200 storage.

Taking a look at the AMP-2HA hardware stack:
AMP-2HA is the high availability model of the second-generation Advanced Management Platform (AMP-2); it centralizes the management components of the Vblock System and delivers out-of-band management.

Vblock 340: AMP-2HA ‘Base’ Hardware stack
2x C220 M3 SFF Server (1RU Per server)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPUs Per server (2.4GHz Ivy Bridge, 6 Cores Per CPU, 7.2GT/s QPI)
128GB RAM Per C220 (8x 16GB DDR3)
2x Cisco Nexus 3064-T Switch for management networking
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 340: AMP-2HA ‘Performance’ Hardware stack
3x C220 M3 SFF Server (1RU)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPUs Per server (2.4GHz Ivy Bridge, 6 Cores Per CPU, 7.2GT/s QPI)
128GB RAM Per C220 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives, 6x 100GB FAST VP Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 540&740: AMP-2HA ‘Base’ Hardware stack
2x C240 M3 SFF Server (2RU Per Server)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPUs Per server (2.4GHz Ivy Bridge, 6 Cores Per CPU, 7.2GT/s QPI)
128GB RAM Per C240 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives & 11x 600GB 10K SAS 2.5″ Drives)

Vblock 540&740: AMP-2HA ‘Performance’ Hardware stack
3x C240 M3 SFF Server (2RU Per Server)
2x 32 GB Flex Flash SD Card Modules (ESXi install location)
2x CPUs Per server (2.4GHz Ivy Bridge, 6 Cores Per CPU)
128GB RAM Per C240 (8x 16GB DDR3)
VNXe 3200 Storage Array (25-Drive DPE loaded with 3x 100GB FAST Cache Drives, 6x 100GB FAST VP Drives & 11x 600GB 10K SAS 2.5″ Drives)
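
If you want to cross-check a deployed AMP against the hardware builds listed above, a quick PowerCLI sketch run while connected to the AMP vCenter will report the server model, CPU and memory per ESXi host:

###### Report AMP ESXi host hardware (run against the AMP vCenter) ######
Get-VMHost | Select Name, Manufacturer, Model, ProcessorType, NumCpu, @{N="MemoryGB";E={[math]::Round($_.MemoryTotalGB)}}, Version, Build | Format-Table -AutoSize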

VxBlock 340, 540 & 740 – The AMP-2HA ‘Performance’ model is the default for VMware NSX deployments, allowing each of the three NSX controllers to be dedicated to an ESXi host for redundancy and scalability.

References:
http://www.vce.com/asset/documents/vblock-340-gen3-2-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-540-gen2-0-architecture-overview.pdf
http://www.vce.com/asset/documents/vblock-740-gen5-0-architecture-overview.pdf

http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C240M3_LFF_SpecSheet.pdf
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/C220M3_SFF_SpecSheet.pdf