VxRail – VVD 4.3 (Notes)

Dell EMC VxRail and VMware Validated Design (VVD) brief notes:

  • For a list of the VVD 4.3 BOM see here.
  • VVD certification offers guidance on how to deploy the vRealize Suite (vRealize Automation, vRealize Operations Manager, vRealize Business for Cloud and vRealize Log Insight) and NSX on VxRail.
  • VxRail and VVD BOM alignment: VVD 4.3 includes vSphere 6.5 U2, which requires a VxRail 4.5 code version that includes 6.5 U2. VxRail 4.5.225 or later is recommended.
  • Two workload domains: ‘management’ and ‘VI’ (the VI domain includes the shared edge and compute cluster).
  • VxRail E or P series are the recommended models for the management domains.
  • More on VxRail Sizing here.
  • VxRail ESXi clusters remain fully lifecycle managed by VxRail Manager (which updates ESXi, vSAN, firmware, and the VIBs/agents). vCenter and the PSCs can be upgraded using VMware Update Manager.
  • vSAN is the primary storage used for the Management domains.
  • Three deployment options, including DR and stretch clustering:
  • Multi-AZ design with vSAN stretched cluster, supported with up to 5 ms RTT latency and 10 Gbps bandwidth between Availability Zones.
  • Multi-Region design supports latency of up to 150 ms between regions.
  • Dell EMC community VVD on VxRail landing page.
  • vRealize Suite Lifecycle Manager (vRSLCM) is utilized for the automated deployment of the vRealize Suite products, namely vRealize Automation, vRealize Operations Manager, vRealize Business for Cloud and vRealize Log Insight.
  • vCenters and their associated PSCs reside in the management domain; the VVD design requires that both the management and workload-domain vCenters and PSCs are deployed on the management domain VxRail.
  • Dual-region VVD configurations have a single SSO domain spanning two sites (consisting of four VxRail clusters).
  • VVD supports a VADP-compatible backup solution for integration with vSphere, for example Dell EMC Avamar or NetWorker.
  • VMware Site Recovery Manager and vSphere Replication are used to perform site protection and recovery of the Cloud Management Platform, which consists of vRealize Automation with embedded vRealize Orchestrator, vRealize Business, the vRealize Operations Manager analytics cluster, and vRealize Suite Lifecycle Manager. The VVD 4.3 Site Protection and Recovery guide can be found here.
  • Non-VVD vRealize Automation endpoints are supported natively by vRA; see the VVD documentation for guidance.

VxRail – Visio Stencils

Thanks to our product marketing team for creating these VxRail stencils, which can be used in Microsoft Visio. The zip download below contains images of all VxRail series appliances, both 13G and 14G (E, G, P, V and S series), detailing front, rear, left and right elevations.

VxRail_Visios_13G_and_14G.zip

Note: these are the latest as of Dec 2018. Please refer to the VxRail enablement center to check for the most recent downloads. If you are working on VVD-related solutions then it is also worth checking out these: Visio Stencils for the VMware Validated Designs

List of VxRail stencils included:

Continue reading

vRSLCM – Add a vROps Remote Collector

One of the cool things about vRealize Suite Lifecycle Manager is the capability to add components to an existing LCM-configured vROps deployment. This example demonstrates how to add a Remote Collector (adding a data node is another option).

The versions used in this example are:

  • vRSLCM 2.0
  • vROps 6.7

Before making any configuration changes, it is best to validate the current setup. Check for things such as:

Continue reading

VMware ESXi – How to Determine Build Number via CLI

One quick method to discover your ESXi build number and version is to connect via SSH and run the following command from the ESXi console:

vmware -vl 

The output displays the build number, 8294256, and also returns the update level, U2.
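
Alternatively, esxcli can be used to return the same details. The following is a minimal sketch; the exact fields returned (Product, Version, Build, Update and Patch) can vary slightly between ESXi releases:

esxcli system version get    # returns the product, version, build and update level of this host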

You may also lookup your ESXi build number and version from the following KB:

Build numbers and versions of VMware ESXi/ESX (2143832)

See also: VMware VCSA – How to Determine Build Number

VMware vRSLCM 2.0 – Import Brownfield Environment

vRealize Suite Lifecycle Manager (vRSLCM) v2.0 provides a centralized way to manage the vRealize suite of products from a single pane of glass, allowing users to perform tasks such as installation, configuration, content management, an integrated marketplace, upgrades, patching, certificate management and support matrix lookup, among other admin tasks. More details can be found here in the What’s New video for vRSLCM 2.0.

This post details how to import an existing brownfield installation of the vRealize Suite into vRSLCM 2.0. These products were originally deployed as per the VVD 4.3 architecture, namely:

  • vRealize Automation
  • vRealize Operations
  • vRealize Log Insight
  • vRealize Business for Cloud

The following steps detail how to onboard these vRealize products into LCM, allowing the user to leverage the management capabilities that come with vRSLCM 2.0.

This example is based on a newly deployed vRSLCM 2.0; begin by logging in with the default username ‘admin@localhost’ and default password ‘vmware’:

Continue reading

VxRail – Basic vSAN UI Overview

The following post provides a basic UI overview of how the VxRail Appliance disk groups are configured and viewed from both the VxRail Manager and the vSphere Client.

The VxRail Appliance vSAN disk groups are configured in one of two ways:

  • Hybrid – a single SSD for caching and one or more HDDs for capacity.
  • All-flash – a single SSD for caching and one or more SSDs for capacity.

The amount of storage available to the vSAN datastore is based on the capacity drives.

A VxRail node allows for multiple disk groups (please refer to the official VxRail documentation for specifics, as the quantity of disk groups differs per VxRail model), which in turn provides multiple cache drives per node, potentially improving performance. In this example each VxRail Appliance node has two all-flash disk groups; each node in the cluster is required to have the same storage configuration.
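
Before moving to the UI, the same disk group layout can also be checked from the ESXi command line with esxcli. This is a minimal sketch; the exact output fields may vary between ESXi/vSAN releases:

esxcli vsan storage list    # lists each vSAN-claimed device with its disk group UUID, SSD flag and capacity-tier flag
esxcli vsan cluster get     # shows this host's vSAN cluster membership and node state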

vSphere Client UI

From the vSphere client, click on the cluster and navigate to ‘Configure – vSAN – General’. From here we can see that the vSAN cluster for this VxRail Appliance comprises 24 disks in total (4x 13G servers).

‘vSAN – Disk Management’ displays both the disk groups and the disks associated with each disk group. Taking the example below of the first ESXi host in the cluster, we can see the VxRail node has a total of 6 disks contributing storage to the vSAN cluster, comprising two disk groups with three disks in each disk group. Continue reading