RP4VM – iSCSI Configuration to Support ESXi Splitter to vRPA Communication

Each ESXi host in a vSphere cluster hosting the RP4VM vRPAs (virtual appliances running on ESXi hosts) requires an iSCSI channel to allow communication between the ESXi host's kernel-embedded I/O splitter and the vRPAs, utilizing a software iSCSI adapter running on the ESXi host. Thus a software iSCSI adapter and its associated VMkernel ports need to be configured on every ESXi node hosting the RP4VM vRPAs.

This post provides an example of the iSCSI configuration required when using a VMware vSphere Distributed Switch (VDS), showing how to configure the iSCSI settings required for RP4VM via the vSphere Web Client.

The steps below show how to create additional port groups on a VDS, create the VMkernel adapters, add the software iSCSI adapter, and bind the VMkernel port groups to the ESXi iSCSI adapter, along with associated configuration such as MTU and uplink order.
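For reference, the same configuration can be sketched from the command line with esxcli on each host. The adapter name (vmhba64), VMkernel interface numbers, IP addressing, and the VDS name/port ID below are placeholders for your environment:

```shell
# Enable the software iSCSI adapter on the host (creates e.g. vmhba64)
esxcli iscsi software set --enabled=true

# Create a VMkernel adapter attached to a distributed switch port
# (VDS name and port ID are environment-specific placeholders)
esxcli network ip interface add --interface-name=vmk2 \
  --dvs-name=DVS-RP4VM --dvport-id=100

# Assign a static IP to the new VMkernel adapter
esxcli network ip interface ipv4 set --interface-name=vmk2 \
  --type=static --ipv4=192.168.10.21 --netmask=255.255.255.0

# Set the MTU to match the physical network end-to-end
esxcli network ip interface set --interface-name=vmk2 --mtu=1500

# Bind the VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Verify the binding
esxcli iscsi networkportal list --adapter=vmhba64
```

Repeat the VMkernel creation and binding for each iSCSI port group; binding succeeds only when the port group has a single active uplink, which is why the uplink failover order matters.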

Note: An upcoming release of RP4VM 5.X introduces a new TCP/IP-based communication path for the splitter, which will eliminate the need to configure the iSCSI-based software initiator (more on this later).

Continue reading

V(x)Block – AMP VUM & SQL Active Directory Integration

When a VxBlock is shipped from the factory, all Windows and SQL user/DB accounts are set up as local accounts, for the obvious reason that the customer's Active Directory does not exist in the factory. This post details the steps to integrate the VUM VM and SQL Server with Active Directory and change the local Windows and SQL accounts to AD accounts, along with modifying the SQL DB permissions for an assigned AD account.

At a high level these are the prerequisite steps:

– Change DNS values on the Windows VUM VM (if different from the LCS-stated values).
– Join Windows VUM VM to AD.
– Reboot VUM VM.
– Snapshot VUM VM (precautionary step).
– Add domain\svc_vum to local admin group of the VUM VM.

Use the following procedure to configure domain service accounts for the VUM server and services, and to configure SQL Server access permissions, on a VxBlock-based EHC deployment:
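Once the VM is domain-joined, the SQL side of the change can be sketched with sqlcmd. The database name VUM_DB is a placeholder for the actual VUM database, and DOMAIN\svc_vum stands in for your AD service account:

```shell
# Create a Windows-authenticated SQL login for the AD service account
sqlcmd -S localhost -E -Q "CREATE LOGIN [DOMAIN\svc_vum] FROM WINDOWS;"

# Map the login into the VUM database and grant it db_owner
# (VUM_DB is a placeholder for the actual VUM database name)
sqlcmd -S localhost -E -Q "USE [VUM_DB]; CREATE USER [DOMAIN\svc_vum] FOR LOGIN [DOMAIN\svc_vum]; EXEC sp_addrolemember N'db_owner', N'DOMAIN\svc_vum';"

# VUM also expects rights on msdb for its scheduled jobs
sqlcmd -S localhost -E -Q "USE [msdb]; CREATE USER [DOMAIN\svc_vum] FOR LOGIN [DOMAIN\svc_vum]; EXEC sp_addrolemember N'db_owner', N'DOMAIN\svc_vum';"
```

After the login is in place, update the VUM service and its ODBC DSN to run as the domain account, then remove or disable the old local accounts.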

Continue reading

VxRail – Welcome Enterprise Hybrid Cloud (EHC)

Introducing ‘EHC on VxRail’!

Introduced as part of the EHC 4.1.1 release (code-named “Challenger”) – GA March ’17.

Simple, Easy, Start Small, Scalable!

The VxRail HCI Appliance now has the option of including the ‘Enterprise Hybrid Cloud 4.1.1’ platform as part of an automated greenfield installation. Thousands of hours of engineering, design, testing and validation have gone into this release, with a laser focus on delivering an automated ‘EHC on VxRail’ onsite installation. Engineered on the VxRail P470(F) appliances, ‘EHC on VxRail’ offers a lighter-weight, smaller-footprint hybrid cloud solution from which you can start small and scale out. Continue reading

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

The following provides guidance for a scenario where your ESXi host boots from SAN; this is, for example, the standard configuration for Cisco UCS blades included in a Vblock/VxBlock. These boot volumes will most likely have been created outside of ViPR Controller, so they must be ingested before performing any host migration procedures, such as moving a UCS ESXi blade to another cluster in vCenter.
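Ingestion can be performed from the ViPR UI, or scripted against the ViPR Controller REST API. The sketch below assumes the unmanaged-volume ingest endpoint and uses placeholder hostnames, credentials, and URN IDs throughout; substitute values discovered in your own environment:

```shell
# Authenticate against ViPR (port 4443) and capture the auth token header
# (hostname and credentials are placeholders)
curl -ks -D - -u root:password "https://vipr.example.com:4443/login" \
  | awk '/X-SDS-AUTH-TOKEN/ {print $2}' > token.txt

# Ingest previously discovered unmanaged (boot) volumes into ViPR
# (project, varray, vpool and volume URNs are placeholders)
curl -ks -X POST "https://vipr.example.com:4443/vdc/unmanaged/volumes/ingest" \
  -H "X-SDS-AUTH-TOKEN: $(cat token.txt)" \
  -H "Content-Type: application/json" \
  -d '{"project": "urn:storageos:Project:...:",
       "varray": "urn:storageos:VirtualArray:...:",
       "vpool": "urn:storageos:VirtualPool:...:",
       "unmanaged_volume_list": ["urn:storageos:UnManagedVolume:...:"]}'
```

The target virtual pool must allow the volume's characteristics (protection, provisioning type), or the ingest request will be rejected.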

Note: skipping the ingestion will result in the boot volume being removed from the ESXi host's masking view if you initiate the migration process using ViPR commands (more on this later).

Note: To avoid ViPR provisioning issues, ensure the ESXi boot volume masking views have _NO_VIPR appended to their exclusive mask name; this prevents ViPR from using the exclusive export mask when adding a new ESXi host to a cluster:

BootVolumeMaskingViewName_NO_VIPR

Continue reading

vRealize Automation 7.1 – Create a Blueprint, Configure Catalog Item & Provision VM (Part 6)

vRealize Automation 7.1 – Create a Service & Entitlement (Part 5)

1. Create a ‘vSphere’ Machine Blueprint

VMware machine blueprint definition: A blueprint that contains a machine component specifies the workflow used to provision a machine and includes information such as CPU, memory, and storage, as well as additional provisioning details such as the locations of required disk images or virtualization platform objects. Blueprints also specify policies such as the lease period and can include networking and security components such as security groups, policies, or tags. Blueprints can be specific to a business group or shared among groups in a tenant, depending on the entitlements configured for the published blueprint. More detail on blueprints here.

To create the blueprint, log in as a user who has the ‘Infrastructure Architect’ role assigned (machine blueprint capabilities):

Continue reading