Dell EMC ViPR 3.6 – VMAX Export Path Adjustment

A very useful new feature in ViPR 3.6 is the ability to increase or decrease the number of Fibre Channel paths on an existing VPLEX or VMAX export (host/cluster). The following example shows how to increase the path count for a VMAX-backed ViPR Export Group.

Note: If you want to adjust the paths for an export where pre-existing masking views already exist for a host or cluster in the export group, first ingest any volumes exported to the host. Example ViPR Ingest Procedure.

As can be seen from the following screen captures, the export group consists of two ESXi hosts, each with two initiators and each presently configured for one FC path per initiator. As part of this example we will double the total FC paths per host from 2 to 4.

Export Group named ‘EHC’ details:

[Screenshot: Export Group ‘EHC’ details]
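If you prefer to drive this from the ViPR Controller REST API rather than the UI, the sketch below outlines the equivalent call flow. Treat it as a minimal, hedged sketch: the controller address, credentials and URNs are placeholders, and the paths-adjustment-preview/paths-adjustment endpoints and payload fields reflect my reading of the ViPR 3.6 (CoprHD) API, so verify them against the REST API reference for your release.

```python
import requests

VIPR = "https://vipr.example.local:4443"               # placeholder ViPR Controller VIP
EXPORT_GROUP = "urn:storageos:ExportGroup:...:vdc1"    # placeholder Export Group URN

# Authenticate: ViPR returns a session token in the X-SDS-AUTH-TOKEN header.
login = requests.get(f"{VIPR}/login", auth=("root", "password"), verify=False)
headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"],
           "Content-Type": "application/json"}

# Preview the path adjustment (endpoint and fields assumed for the 3.6 feature):
# request 4 paths per host, i.e. 2 paths per initiator for our 2-initiator hosts.
payload = {
    "virtual_array": "urn:storageos:VirtualArray:...:vdc1",    # placeholder
    "storage_system": "urn:storageos:StorageSystem:...:vdc1",  # placeholder
    "path_parameters": {"max_paths": 4, "min_paths": 2, "paths_per_initiator": 2},
}
preview = requests.post(f"{VIPR}/block/exports/{EXPORT_GROUP}/paths-adjustment-preview",
                        json=payload, headers=headers, verify=False)
print(preview.json())   # review the paths to be added/removed before committing

# If the preview looks right, commit with the corresponding
# POST .../paths-adjustment call (same payload caveat applies).
```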


Dell EMC ViPR 3.6 – Install & Config for VMAX AFA

This post details the wizard driven installation steps to install ViPR Controller and configure for a VMAX AFA system.

When deploying on a vSphere cluster, the recommendation is to deploy the ViPR Controller on a minimum of a three-node ESXi DRS cluster, and to set an anti-affinity rule ("Separate Virtual Machines") among the ViPR Controller nodes so they run on separate ESXi hosts.
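If you would rather script that anti-affinity rule than click through the vSphere Web Client, the following pyVmomi sketch shows one way to do it. Treat it as illustrative only: the vCenter address, credentials, cluster name and ViPR node VM names are placeholders for your environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type matching 'name'."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

si = SmartConnect(host="vcenter.example.local",                # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context()) # lab only: skip cert checks
content = si.RetrieveContent()

cluster = find_by_name(content, vim.ClusterComputeResource, "Mgmt-Cluster")  # placeholder
vipr_vms = [find_by_name(content, vim.VirtualMachine, n)
            for n in ("vipr1", "vipr2", "vipr3")]              # placeholder VM names

# Build a "Separate Virtual Machines" (anti-affinity) rule for the ViPR nodes.
rule = vim.cluster.AntiAffinityRuleSpec(name="ViPR-Controller-Separate",
                                        enabled=True, vm=vipr_vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
print("Anti-affinity rule task submitted:", task)

Disconnect(si)
```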

Begin by downloading the ViPR Controller packages from support.emc.com

[Screenshot: ViPR Controller download packages on support.emc.com]

This OVA deploys three VMs in a 2+1 redundant configuration, allowing for the failure of a single controller without affecting availability. A 3+2 OVA is also available.

EHC 4.1: ESXi Host Migration Process (vCenter Host Migration with ViPR Export Groups)

This procedure details the steps to execute in order to remove an ESXi host from a source ViPR Export Group and add to an existing target Export Group.

Note: this procedure applies to ViPR 3.0 and below; ViPR 3.6 introduces a more automated procedure which I intend to cover in a future post.

Host Removal – remove ESXi Host from a ViPR Export Group

Please ensure there are no EHC STaaS or ViPR-native tasks being executed while performing the following procedure.

Note: Ensure the version of SMI-S complies with the version stated in the EHC ESSM.

The following steps detail the procedure for removing a host from a vSphere ESXi cluster in vCenter and utilizing the ViPR CLI to remove the same host from the cluster Export Group.
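The post itself walks through the ViPR CLI, but for reference the same host removal can be issued against the REST API. The sketch below is a hedged example: the URNs are placeholders and the export group update payload ("hosts"/"remove") reflects my understanding of the API, so confirm it against the REST reference for your ViPR release.

```python
import requests

VIPR = "https://vipr.example.local:4443"               # placeholder controller VIP
EXPORT_GROUP = "urn:storageos:ExportGroup:...:vdc1"    # placeholder Export Group URN
HOST = "urn:storageos:Host:...:vdc1"                   # placeholder Host URN

# Authenticate (session token is returned in the X-SDS-AUTH-TOKEN header).
login = requests.get(f"{VIPR}/login", auth=("root", "password"), verify=False)
headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"],
           "Content-Type": "application/json"}

# Remove the host from the export group. The 'hosts'/'remove' change-list is
# an assumption based on my reading of the export group update API.
resp = requests.put(f"{VIPR}/block/exports/{EXPORT_GROUP}",
                    json={"hosts": {"remove": [HOST]}},
                    headers=headers, verify=False)
print(resp.status_code, resp.json())   # returns a task you can poll to completion
```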

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

The following provides guidance for the scenario where your ESXi host boots from SAN; this is, for example, the standard configuration for Cisco UCS blades in a Vblock/VxBlock. These boot volumes will most likely have been created outside of ViPR Controller, so they must be ingested before performing any host migration procedures, such as moving a UCS ESXi blade to another cluster in vCenter.

Note: failing to perform the ingestion will result in the removal of the boot volume from the ESXi host masking if you initiate the migration process using ViPR commands (more on this later).
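For a rough idea of what that ingestion looks like outside the service catalog, the sketch below lists the unmanaged volumes discovered on an array and submits them for exported-volume ingestion. The endpoints, payload fields and response shape are assumptions based on my reading of the ViPR/CoprHD unmanaged-volume services, and all URNs are placeholders; the ingest services in the UI service catalog remain the simpler route.

```python
import requests

VIPR = "https://vipr.example.local:4443"               # placeholder controller VIP
ARRAY = "urn:storageos:StorageSystem:...:vdc1"         # placeholder VMAX URN

login = requests.get(f"{VIPR}/login", auth=("root", "password"), verify=False)
headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"],
           "Content-Type": "application/json"}

# List unmanaged volumes discovered on the array (assumed endpoint and
# assumed response shape).
unmanaged = requests.get(f"{VIPR}/vdc/storage-systems/{ARRAY}/unmanaged/volumes",
                         headers=headers, verify=False).json()
vol_ids = [v["id"] for v in unmanaged.get("unmanaged_volume", [])]

# Ingest them as exported volumes so the existing boot masking views are
# preserved (assumed endpoint/payload; placeholder project/varray/vpool URNs).
payload = {
    "project": "urn:storageos:Project:...:vdc1",
    "varray": "urn:storageos:VirtualArray:...:vdc1",
    "vpool": "urn:storageos:VirtualPool:...:vdc1",
    "unmanaged_volume_list": vol_ids,
}
resp = requests.post(f"{VIPR}/vdc/unmanaged/volumes/ingest-exported",
                     json=payload, headers=headers, verify=False)
print(resp.status_code, resp.text)
```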

Note: To avoid ViPR provisioning issues, ensure the ESXi boot volume masking views have _NO_VIPR appended to their exclusive mask name; this prevents ViPR from using the exclusive export mask when adding a new ESXi host to a cluster:

BootVolumeMaskingViewName_NO_VIPR


ViPR Controller: Exporting VMAX3 LUN fails with Error 12000

This ‘Error 12000’ may be encountered while exporting a VMAX3 LUN from ViPR Controller as a shared datastore to a specific ESXi cluster. The failure occurs because ViPR attempts to add the new shared LUN to the independent ESXi Masking Views, since no shared Cluster Masking View is present:

This issue arises in scenarios where, for example, the ESXi hosts already have independent Masking Views created, but no dedicated ESXi Cluster Masking View (a Tenant Pod in EHC terms) has been created.

You may ask why each host has its own dedicated masking view: think Vblock/VxBlock with UCS, where each UCS ESXi blade boots from a SAN-attached boot volume presented from the VMAX array (Vblock/VxBlock 700 series = VMAX). Further detail on how the functioning Masking Views are configured on a Vblock/VxBlock can be found here:

vmax-masking-views-for-esxi-boot-and-shared-cluster-volumes

Key point: dedicated exclusive Masking Views are required for VMware ESXi boot volumes, in addition to Cluster Masking Views for shared VMFS datastores. Please reference the following post for guidance on exclusive masking views for boot volumes and how to ingest them in ViPR:

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

In this scenario it is best to ingest the boot volumes as per the guidance above and then perform the export of the shared volume. ViPR will then skip over the exclusive masking views (those with _NO_VIPR appended to their exclusive mask name) and create a ViPR-controlled shared cluster Masking View.
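Once the boot volumes are ingested, the shared volume export that creates the cluster Masking View can be requested through the service catalog or, as sketched below, via the block export API. The payload field names and URNs are placeholders based on my understanding of the API and should be checked against the REST reference before use.

```python
import requests

VIPR = "https://vipr.example.local:4443"               # placeholder controller VIP

login = requests.get(f"{VIPR}/login", auth=("root", "password"), verify=False)
headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"],
           "Content-Type": "application/json"}

# Create a cluster-type export group for the shared VMFS volume (field names
# assumed; placeholder URNs). The VMware provisioning services in the catalog
# achieve the same result with less typing.
payload = {
    "name": "EHC-Cluster-Shared",                      # placeholder export name
    "type": "Cluster",
    "project": "urn:storageos:Project:...:vdc1",
    "varray": "urn:storageos:VirtualArray:...:vdc1",
    "clusters": ["urn:storageos:Cluster:...:vdc1"],
    "volumes": [{"id": "urn:storageos:Volume:...:vdc1"}],
}
resp = requests.post(f"{VIPR}/block/exports", json=payload,
                     headers=headers, verify=False)
print(resp.status_code, resp.json())
```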

Reference KB: https://support.emc.com/kb/494288

Note: if you have circumvented this error by manually creating the Cluster Masking View in advance of the first cluster-wide ViPR export, please ingest this Masking View to bring it under ViPR control; otherwise you may experience issues later (for example when adding new ESXi hosts to the cluster).

ViPR Controller – Configuring AD Authentication

The default built-in administrative accounts may not be granular enough to meet your business needs. If that is the case, adding an authentication provider such as Active Directory, which we highlight as part of this configuration, allows you to assign users or groups to specific roles.

The example configuration provided here was part of an Enterprise Hybrid Cloud solution.
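For completeness, an authentication provider can also be added through the REST API instead of the UI. The sketch below is illustrative only: the field names follow my understanding of the authnproviders service, and the domain, URLs, DNs and credentials are placeholders that must match your AD environment.

```python
import requests

VIPR = "https://vipr.example.local:4443"               # placeholder controller VIP

login = requests.get(f"{VIPR}/login", auth=("root", "password"), verify=False)
headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"],
           "Content-Type": "application/json"}

# Add an Active Directory authentication provider (field names assumed from
# the /vdc/admin/authnproviders service; all values below are placeholders).
provider = {
    "mode": "ad",
    "label": "corp-ad",
    "domains": ["corp.local"],
    "server_urls": ["ldaps://dc01.corp.local:636"],
    "manager_dn": "CN=svc_vipr,OU=Service Accounts,DC=corp,DC=local",
    "manager_password": "changeme",
    "search_base": "DC=corp,DC=local",
    "search_filter": "sAMAccountName=%u",
    "group_attribute": "CN",
}
resp = requests.post(f"{VIPR}/vdc/admin/authnproviders", json=provider,
                     headers=headers, verify=False)
print(resp.status_code, resp.json())
```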

EMC ViPR Controller: Storage-Related Ports

The following list and diagram detail the storage-related TCP ports that ViPR uses to communicate with each EMC storage product. If a firewall sits between the ViPR Controller and the associated storage system, a specific rule set may be required to allow communication between the ViPR Controller and the array. A simple reachability check is sketched after the list.

  • VMAX Block (via SMI-S) – 5989
  • VMAX File – 443
  • VNX Block (via SMI-S) – 5989
  • VNX File – 443
  • VPLEX MGMT Controller – 443
  • RecoverPoint – 7225
  • Isilon – 8080
  • ScaleIO – 22
  • XtremIO XMS – 443
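If you need to confirm these ports are actually reachable from the ViPR Controller's network segment, a simple TCP connection test such as the one below can save a round trip with the firewall team. The host names are placeholders for your environment; only the port numbers come from the list above.

```python
import socket

# Management endpoints to test; host names are placeholders, ports are the
# storage-related ports listed above.
TARGETS = [
    ("smi-s-provider.example.local", 5989),      # VMAX / VNX Block via SMI-S
    ("vmax-emgmt.example.local", 443),           # VMAX File
    ("vnx-control-station.example.local", 443),  # VNX File
    ("vplex-mgmt.example.local", 443),           # VPLEX management server
    ("rpa-cluster.example.local", 7225),         # RecoverPoint
    ("isilon.example.local", 8080),              # Isilon
    ("scaleio-gw.example.local", 22),            # ScaleIO
    ("xms.example.local", 443),                  # XtremIO XMS
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK      {host}:{port}")
    except OSError as exc:
        print(f"BLOCKED {host}:{port} ({exc})")
```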

[Diagram: ViPR storage connectivity ports]

Reference: EMC® ViPR® Controller Version 2.4 Security Configuration Guide