DellEMC EHC – ViPR vPool Volume Migration

This post details the procedure and EHC validation steps required to move an EHC ViPR-provisioned volume to a different ViPR virtual pool using the ViPR Change Virtual Pool service.

High level steps that will need to be followed:

  • Create a new ViPR vPool with properties identical to the original except for the one you are changing (best to use the Duplicate button and then modify the copy).
  • Migrate ViPR Volume(s) to the newly created vPool using ViPR workflow.
  • Delete old ViPR vPool.
  • Rename the newly created ViPR vPool to the old pool name.
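The constraint in the first step (the new vPool must match the old one in every property except the single one being changed) can be sketched as a simple check. The property names below are illustrative, not the exact ViPR field names:

```python
# Sketch: validate that a candidate "new" vPool matches the old vPool in every
# property except the single one being changed, per the guidance above.
# Property names here are illustrative, not ViPR's actual field names.

def diff_vpool_properties(old: dict, new: dict) -> list:
    """Return the sorted list of property names whose values differ."""
    keys = set(old) | set(new)
    return sorted(k for k in keys if old.get(k) != new.get(k))

def is_valid_migration_target(old: dict, new: dict) -> bool:
    """A valid target vPool differs from the source in exactly one property."""
    return len(diff_vpool_properties(old, new)) == 1

old_vpool = {"protocol": "FC", "drive_type": "SSD", "raid_level": "RAID5"}
new_vpool = {"protocol": "FC", "drive_type": "SSD", "raid_level": "RAID6"}

print(is_valid_migration_target(old_vpool, new_vpool))  # True: only raid_level differs
```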

Renaming the new vPool to the old name ensures subsequent EHC StaaS provisioning tasks complete successfully using the original SRP. Continue reading

Dell EMC ViPR 3.6 – VMAX Export Path Adjustment

A very useful new feature in ViPR 3.6 is the ability to increase or decrease the number of Fibre Channel paths on an existing VPLEX or VMAX export (host/cluster). The following example showcases increasing the path count for a VMAX-backed ViPR Export Group.

Note: If you want to adjust the paths for an export where pre-existing masking
views exist for a host or cluster in the export group, first ingest any volumes
exported to the host. Example ViPR Ingest Procedure.

As can be seen from the following screen captures, the export group consists of two ESXi hosts, each with two initiators and each currently configured for one FC path per initiator. As part of this example we will double the total FC paths per host from 2 to 4.

Export Group named ‘EHC’ details:

[Screenshot: ViPRExpandPaths0]
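The arithmetic behind the example can be sketched as follows. This only models the path math described above; the actual adjustment is performed through the ViPR Export Group service:

```python
# Sketch: total FC paths per host as a function of initiator count and
# paths per initiator, matching the example above (2 initiators per host,
# going from 1 to 2 paths per initiator).

def paths_per_host(initiators: int, paths_per_initiator: int) -> int:
    """Total FC paths presented to one host."""
    return initiators * paths_per_initiator

before = paths_per_host(initiators=2, paths_per_initiator=1)
after = paths_per_host(initiators=2, paths_per_initiator=2)
print(before, after)  # 2 4
```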

Continue reading

DellEMC ViPR 3.6 – Install & Config for VMAX AFA

This post details the wizard driven installation steps to install ViPR Controller and configure for a VMAX AFA system.

When deploying on a vSphere cluster, the recommendation is to deploy the ViPR Controller on a minimum three-node ESXi DRS cluster and to set a "Separate Virtual
Machines" anti-affinity rule so the ViPR Controller nodes run on separate ESXi hosts.
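The placement property that the "Separate Virtual Machines" rule enforces (no two controller VMs on the same ESXi host) can be sketched as a check. VM and host names below are made up for illustration; in practice DRS enforces this automatically:

```python
# Sketch: verify an anti-affinity placement for the ViPR Controller VMs --
# no two controller nodes should share an ESXi host. The mapping below is
# illustrative; in a real deployment the DRS rule enforces this placement.

def violates_anti_affinity(placement: dict) -> bool:
    """placement maps VM name -> ESXi host name; True if any host is reused."""
    hosts = list(placement.values())
    return len(hosts) != len(set(hosts))

good = {"vipr1": "esx01", "vipr2": "esx02", "vipr3": "esx03"}
bad = {"vipr1": "esx01", "vipr2": "esx01", "vipr3": "esx03"}
print(violates_anti_affinity(good), violates_anti_affinity(bad))  # False True
```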

Begin by downloading the ViPR Controller packages from support.emc.com.

[Screenshot: ViPR 3.6 Install & Config1]

This OVA deploys three VMs in a 2+1 redundant fashion, allowing for the failure of a single controller without affecting availability. A 3+2 OVA is also available. Continue reading
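The 2+1 and 3+2 layouts follow from majority-quorum math: the cluster stays available as long as a strict majority of nodes survives. A small sketch (my framing, not from the ViPR documentation):

```python
# Sketch: a quorum-based cluster of n nodes stays available while a strict
# majority remains, so it tolerates floor((n - 1) / 2) node failures.
# 3 nodes (2+1) -> 1 failure; 5 nodes (3+2) -> 2 failures.

def tolerable_failures(nodes: int) -> int:
    """Number of nodes that can fail while a majority remains."""
    return (nodes - 1) // 2

print(tolerable_failures(3), tolerable_failures(5))  # 1 2
```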

EHC 4.1: ESXi Host Migration Process (vCenter Host Migration with ViPR Export Groups)

This procedure details the steps to remove an ESXi host from a source ViPR Export Group and add it to an existing target Export Group.
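The intended end state of the migration can be modeled in a few lines. This is only an in-memory illustration; the real procedure uses vCenter plus the ViPR CLI as described below, and the group and WWN names are made up:

```python
# Sketch: an in-memory model of moving a host between two ViPR export groups.
# The real migration uses vCenter and the ViPR CLI; this only illustrates the
# intended end state: the host and its initiators leave the source group and
# join the target group. All names below are examples.

def move_host(source: dict, target: dict, host: str) -> None:
    initiators = source["hosts"].pop(host)  # remove host + initiators from source
    target["hosts"][host] = initiators      # add them to the target group

src = {"name": "EG-ClusterA", "hosts": {"esx01": ["wwn1", "wwn2"]}}
dst = {"name": "EG-ClusterB", "hosts": {}}
move_host(src, dst, "esx01")
print(sorted(dst["hosts"]))  # ['esx01']
```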

Note: this procedure applies to ViPR 3.0 and below; ViPR 3.6 introduces a more automated procedure, which I intend to cover in a future post.

Host Removal – remove ESXi Host from a ViPR Export Group

Please ensure no EHC StaaS or ViPR native tasks are being executed while performing the following procedure.

Note: Ensure the version of SMI-S complies with the EHC ESSM stated version.

The following steps detail the procedure for removing a host from a vSphere ESXi cluster in vCenter and using the ViPR CLI to remove the same host from the cluster's Export Group. Continue reading

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

The following provides guidance for scenarios where your ESXi host boots from SAN; for example, this is the standard configuration for Cisco UCS blades in a Vblock/VxBlock. These boot volumes will most likely have been created outside of ViPR Controller, so they must be ingested before performing any host migration procedures, for example moving a UCS ESXi blade to another cluster in vCenter.

Note: skipping the ingestion will result in the removal of the boot volume from the ESXi host's masking if you initiate the migration process using ViPR commands (more on this later).

Note: To avoid ViPR provisioning issues, ensure the ESXi boot volume masking views have _NO_VIPR appended to their exclusive mask name; this prevents ViPR from using the exclusive export mask when adding a new ESXi host to a cluster:

BootVolumeMaskingViewName_NO_VIPR
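The renaming convention above is trivial to express in code. A minimal sketch (the mask names are examples; renaming is actually done in Unisphere/SYMCLI):

```python
# Sketch: ensure boot-volume masking view names carry the _NO_VIPR suffix so
# ViPR skips them when exporting shared cluster volumes. Names are examples;
# the actual rename is performed in Unisphere or via SYMCLI.

def ensure_no_vipr_suffix(mask_name: str) -> str:
    """Append _NO_VIPR to a masking view name if it is not already present."""
    return mask_name if mask_name.endswith("_NO_VIPR") else mask_name + "_NO_VIPR"

print(ensure_no_vipr_suffix("esx01_boot_mv"))          # esx01_boot_mv_NO_VIPR
print(ensure_no_vipr_suffix("esx02_boot_mv_NO_VIPR"))  # esx02_boot_mv_NO_VIPR
```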

Continue reading

ViPR Controller: Exporting VMAX3/AFA LUN fails with Error 12000

This ‘Error 12000’ may be encountered while exporting a VMAX3/AFA LUN from ViPR Controller as a shared datastore to a specific vSphere ESXi cluster (ViPR shared export mask). The failure occurs because ViPR attempts to add the new shared LUN either to independent exclusive ESXi masking views or to a manually created shared cluster masking view:

This issue arises, for example, where the ESXi hosts already have independent masking views created without the NO_VIPR suffix in the masking view name, and/or an ESXi cluster masking view (a Tenant Pod in EHC terms) has been created outside of ViPR control.

Resolution:

In the case of VMAX, ensure only one shared cluster Masking View (MV) exists for the tenant cluster (utilizing cascaded initiator groups) and that it is under ViPR management control. If the cluster MV was created manually (for example, at the VxBlock factory), create a small volume for this MV directly from Unisphere/Symcli and then perform a ViPR ingestion of the newly created volume; this brings the MV under ViPR management.

In the case of a VxBlock (including Cisco UCS blades), all hosts in the cluster must have exclusive masking views for their respective boot volumes, and these exclusive masking views MUST have a NO_VIPR suffix.
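The resolution steps above amount to a simple triage of each masking view ahead of the shared export. A sketch of that decision logic (names and the classification strings are mine, for illustration only):

```python
# Sketch: triage masking views ahead of a shared VMAX export, following the
# resolution above. Exclusive boot masks need the _NO_VIPR suffix (so ViPR
# skips them); a manually created shared cluster mask must be ingested to
# come under ViPR control. Names and result strings are illustrative.

def classify_mask(name: str, shared: bool, vipr_managed: bool) -> str:
    """Return the remediation action for one masking view."""
    if not shared:  # exclusive (boot volume) masking view
        return "ok" if name.endswith("_NO_VIPR") else "rename: append _NO_VIPR"
    # shared cluster masking view
    return "ok" if vipr_managed else "ingest: bring under ViPR control"

print(classify_mask("esx01_boot_mv", shared=False, vipr_managed=False))
print(classify_mask("ClusterA_mv", shared=True, vipr_managed=False))
```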

Why does each host have its own dedicated masking view? Think Vblock/VxBlock with UCS, where each UCS ESXi blade boots from a SAN-attached boot volume presented from the VMAX array (Vblock/VxBlock 700 series = VMAX). Further detail on how functioning masking views are configured on a Vblock/VxBlock can be found here:

vmax-masking-views-for-esxi-boot-and-shared-cluster-volumes

Key point: dedicated exclusive masking views are required for VMware ESXi boot volumes and MUST have a NO_VIPR suffix, while cluster masking views for shared VMFS datastores must be under ViPR control. Please reference the following post for guidance on boot volume exclusive masking views and how to ingest them in ViPR:

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

In this scenario it is best to ingest the boot volumes as per the guidance above and then perform the export of the shared volume. ViPR will skip the exclusive masking views (those with _NO_VIPR appended to their names) and will either create a ViPR-controlled shared cluster masking view or utilize an existing ViPR export mask.

Note: if you have circumvented this error by manually creating the shared cluster masking view (through Unisphere/SYMCLI) in advance of the first cluster-wide ViPR export, please ingest this masking view to bring it under ViPR control as per the guidance above; otherwise you will experience issues later (for example, when adding new ESXi hosts to the cluster).

ViPR Controller – Configuring AD Authentication

The default built-in administrative accounts may not be granular enough to meet your business needs. If that is the case, adding an authentication provider such as Active Directory, as highlighted in this configuration, allows you to assign users or groups to specific roles.

The example configuration provided here was part of an Enterprise Hybrid Cloud solution. Continue reading
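To preview the kind of information the configuration collects, here is a sketch of an AD authentication-provider definition. The field names loosely mirror the ViPR UI form but are assumptions for illustration; the real configuration is performed in the ViPR UI (or REST API) as the post describes, and all server/DN values are made up:

```python
# Sketch: the rough shape of a ViPR Active Directory authentication-provider
# definition. Field names loosely mirror the ViPR UI form but are assumptions
# here, and every value below (server, DN, domain) is a made-up example.

def ad_provider(name, server_url, manager_dn, search_base, domain):
    """Bundle the AD provider settings into a single configuration dict."""
    return {
        "name": name,
        "mode": "ad",                 # Active Directory (vs. plain LDAP)
        "server_urls": [server_url],  # ldap:// or ldaps:// endpoint(s)
        "manager_dn": manager_dn,     # bind account used for lookups
        "search_base": search_base,   # where user/group searches start
        "domains": [domain],          # domain users log in with
    }

p = ad_provider("ehc-ad", "ldap://dc01.ehc.local:389",
                "CN=svc_vipr,OU=Service,DC=ehc,DC=local",
                "DC=ehc,DC=local", "ehc.local")
print(p["mode"])  # ad
```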