Dell EMC ViPR 3.6 – VMAX Export Path Adjustment

A very useful new feature in ViPR 3.6 is the ability to increase or decrease the number of Fibre Channel paths on an existing VPLEX or VMAX export (host/cluster). The following example showcases increasing the path count for a VMAX-backed ViPR Export Group.

Note: If you want to adjust the paths for an export where there are pre-existing masking views for a host or cluster in the export group, first ingest any volumes exported to the host (see the example ViPR ingest procedure).

As can be seen from the following screen captures, the Export Group consists of two ESXi hosts, each host having two initiators and each presently configured for one FC path per initiator. As part of this example we will double the total FC paths per host from 2 to 4.
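For readers who prefer to script this, the same adjustment can in principle be driven through the ViPR REST API. The sketch below is illustrative only: the endpoint path, URNs and payload fields are assumptions modeled on the CoprHD (open source ViPR) API, so verify them against your ViPR 3.6 API reference before use.

```python
import requests

# Illustrative values only - substitute your ViPR VIP, a real Export Group
# URN and a valid auth token (obtained from the /login endpoint).
VIPR = "https://vipr.example.local:4443"              # hypothetical address
EXPORT_GROUP = "urn:storageos:ExportGroup:example"    # hypothetical URN
HEADERS = {"X-SDS-AUTH-TOKEN": "<token>", "Content-Type": "application/json"}

# Assumed payload shape: raise max_paths to 4 with 2 paths per initiator,
# matching the 2 -> 4 paths-per-host change in this example.
body = {
    "storage_system": "urn:storageos:StorageSystem:example",  # hypothetical URN
    "path_parameters": {
        "min_paths": 2,
        "max_paths": 4,
        "paths_per_initiator": 2,
    },
}

# Preview which ports ViPR would add or remove before committing the change.
preview = requests.post(
    f"{VIPR}/block/exports/{EXPORT_GROUP}/paths-adjustment-preview",
    json=body, headers=HEADERS, verify=False)
print(preview.json())
```

In the UI the equivalent operation is driven from the Export Group's path adjustment wizard, which is what the screen captures below walk through.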

Export Group named ‘EHC’ details:

[Screenshot: Export Group 'EHC' details]


Dell EMC – Enterprise Hybrid Cloud (EHC) 4.1.2


EHC (Enterprise Hybrid Cloud), the Dell EMC turnkey VMware (vRealize Suite) based cloud offering, has made great strides in recent releases, transforming the cloud strategy for CPSD CI/HCI, namely EHC on VxBlock, VxRack Flex and VxRail (engineered, manufactured, managed, supported and sustained as one product). This release is no different, with massive improvements around automation, especially in the install and upgrade processes for the EHC platform on VxRail, leading to far quicker and more seamless deployments of the EHC stack. Enhanced multi-site capabilities, a DRaaS offering on VxRail, support for EHC on VxRack SDDC and factory-delivered EHC on VxRack Flex with NSX are some of the biggest milestones of this release.

Dell EMC ViPR 3.6 – Install & Config for VMAX AFA

This post details the wizard-driven steps to install ViPR Controller and configure it for a VMAX AFA system.

When deploying on a vSphere cluster, the recommendation is to deploy the ViPR Controller on a minimum of a 3-node ESXi DRS cluster, and to set a "Separate Virtual Machines" anti-affinity rule among the ViPR Controller nodes so each node runs on a different available ESXi host.
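As a rough illustration, the anti-affinity rule can also be created programmatically with pyVmomi; the vCenter address, cluster name and ViPR node VM names below are assumptions for this sketch.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details and object names - adjust to suit.
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find(vim.ClusterComputeResource, "Mgmt-Cluster")
vipr_nodes = [find(vim.VirtualMachine, n) for n in ("vipr1", "vipr2", "vipr3")]

# "Separate Virtual Machines" rule keeping the ViPR Controller nodes apart.
rule = vim.cluster.AntiAffinityRuleSpec(name="ViPR-Separate", enabled=True,
                                        vm=vipr_nodes)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```

DRS will then place and keep the three controller VMs on separate hosts, which is what preserves the 2+1 quorum through a single ESXi host failure.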

Begin by downloading the ViPR Controller packages from support.emc.com


This OVA deploys three VMs in a 2+1 redundant fashion, allowing for the failure of a single controller without affecting availability. A 3+2 OVA is also available.

EHC 4.1: ESXi Host Migration Process (vCenter Host Migration with ViPR Export Groups)

This procedure details the steps to remove an ESXi host from a source ViPR Export Group and add it to an existing target Export Group.

Note: this procedure applies to ViPR 3.0 and below; ViPR 3.5 introduces a more automated procedure, which I intend to cover in a future post.

Host Removal – remove ESXi Host from a ViPR Export Group

Please ensure the current ViPR configuration is in a known good state before proceeding, and that the ViPR database is edited where required to bring it in sync with the live environment. Contact Dell EMC support for assistance with any required ViPR database remediation.

Note: Ensure the version of SMI-S complies with the version stated in the EHC ESSM.

The following steps detail the procedure for removing a host from a vSphere ESXi cluster in vCenter and using the ViPR CLI to remove the same host from the cluster's Export Group.
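The full post walks through the viprcli commands; as a rough REST-based equivalent, the sketch below moves a host between Export Groups using the CoprHD-style export group update call. The URNs, token and payload shape are assumptions, so check them against your ViPR API reference.

```python
import requests

# Illustrative values only - substitute your ViPR VIP, URNs and auth token.
VIPR = "https://vipr.example.local:4443"            # hypothetical address
SOURCE_EG = "urn:storageos:ExportGroup:source"      # hypothetical URN
TARGET_EG = "urn:storageos:ExportGroup:target"      # hypothetical URN
HOST = "urn:storageos:Host:esxi01"                  # hypothetical URN
HEADERS = {"X-SDS-AUTH-TOKEN": "<token>", "Content-Type": "application/json"}

def update_hosts(export_group, change):
    """Assumed CoprHD-style export group update: add/remove host URNs."""
    resp = requests.put(f"{VIPR}/block/exports/{export_group}",
                        json={"hosts": change}, headers=HEADERS, verify=False)
    resp.raise_for_status()
    return resp.json()  # returns a task to monitor until masking completes

update_hosts(SOURCE_EG, {"remove": [HOST]})   # take the host out of the source group
update_hosts(TARGET_EG, {"add": [HOST]})      # then add it to the target group
```

Each call returns a ViPR task; wait for the first to complete (and for the host to be out of the source masking view) before adding the host to the target group.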

RP4VM – iSCSI Configuration to Support ESXi Splitter to vRPA Communication

Each ESXi host in a vSphere cluster hosting the RP4VM vRPAs (virtual appliances running on ESXi hosts) requires an iSCSI channel to allow communication between the ESXi host's kernel-embedded I/O splitter and the vRPAs, using an iSCSI software adapter running on the ESXi host. A software iSCSI adapter and associated VMkernel ports therefore need to be configured on every ESXi node hosting the RP4VM vRPAs.

This post provides an example of the iSCSI configuration required when using a VMware vSphere Distributed Switch (VDS), showing how to configure the iSCSI settings required for RP4VM via the vSphere Web Client.

The steps below provide an example of how to create additional port groups on a VDS, create the VMkernel adapters, add the software iSCSI adapter and bind the VMkernel port groups to the ESXi iSCSI adapter, along with associated configuration such as MTU and uplink order.
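To give a feel for the host-side pieces, here is a minimal pyVmomi sketch that enables the software iSCSI adapter and binds two VMkernel ports to it. The host name, vmk devices and HBA name are assumptions for this sketch; the port group creation and MTU/uplink settings are covered in the screenshots.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details - adjust for your environment.
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.local")
view.Destroy()

# Enable the software iSCSI adapter on the host.
host.configManager.storageSystem.UpdateSoftwareInternetScsiEnabled(True)

# Bind the iSCSI VMkernel ports to the software adapter. The HBA name
# (often vmhba64 on recent builds) and vmk devices are assumed here.
iscsi = host.configManager.iscsiManager
for vmk in ("vmk1", "vmk2"):
    iscsi.BindVnic(iScsiHbaName="vmhba64", vnicDevice=vmk)

Disconnect(si)
```

Repeat the binding on every ESXi node hosting vRPAs so the splitter-to-vRPA path exists cluster-wide.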

Note: An upcoming release of RP4VM 5.x introduces a new TCP/IP-based communication path for the splitter, which will eliminate the need to configure the iSCSI software initiator (more on this later).
