VVD on VxRail – Shared Edge & Compute Domain (External vCenter Installation)

Note: this is an example for reference only; please use the VxRail ‘External vCenter’ installation procedures provided by Dell EMC.

The following requires an understanding of the VVD on VxRail design; all associated documentation can be found here:

VMware Validated Design 4.2 on VxRail Deployment Guides

This post details how to install a VxRail appliance leveraging an external vCenter server. This is the case, for example, when you wish to deploy a VVD on VxRail solution where the second VxRail instance, used for the Shared Edge and Compute Domain (Workload Domain), requires an external vCenter (the vCenter is deployed on the Management Domain VxRail Appliance). This process is strictly for the workload domain and differs from the procedure used to deploy the first Management Domain VxRail Appliance. For more information on the architecture of VVD on VxRail, please check out this cool lightboard video from Jason:

VVD for SDDC on VxRail Lightboard Overview

To begin you will need to have the Shared Edge and Compute Domain (Workload Domain) vCenter deployed on the management VxRail Appliance.

VMware vSAN – Enable Provision of VM SWAP Files as Thin

The following steps detail how to enable vSAN to provision virtual machine swap files as thin. vSAN sparse swap files allow you to conserve space on the vSAN datastore by consuming storage only as the swap is actually used. By default, swap files are thick-provisioned: creating a VM with 32GB of memory results in a 32GB virtual swap file (.vswp), unless of course you have configured memory reservations. You can see how quickly your vSAN datastore could fill if many hundreds of VMs are created in this default manner.

Steps to enable provisioning of virtual machine swap files on vSAN as thin:

From the vSphere Client

1. In the Navigator, click Hosts and Clusters and expand the entire cluster tree.
2. Select the first host in the cluster.
3. Click the Configure tab->System->Advanced System Settings.
4. Click the Edit button.
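The vSphere Client steps above flip the per-host advanced setting VSAN.SwapThickProvisionDisabled (1 = thin swap files), and the same change can be scripted with esxcli. A minimal dry-run sketch, assuming root SSH access to each host; the host names are placeholders:

```shell
# Placeholder list of ESXi hosts in the vSAN cluster - substitute your own.
HOSTS="esx01.example.local esx02.example.local"

for h in $HOSTS; do
  # Dry run: 'echo' prints the command for review; remove it to execute over SSH.
  echo ssh root@"$h" \
    esxcli system settings advanced set -o /VSAN/SwapThickProvisionDisabled -i 1
done
```

Note the setting only affects swap files created after the change; powering a VM off and back on recreates its .vswp under the new setting.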

vCenter VCSA 6.5 – Repoint to New PSC

Please also see related post:

VCSA 6.0U2 Lookup SSO Domain Name & Site Name

The following details how to repoint a vCenter VCSA using an external PSC within the same site to a different PSC FQDN. One reason for completing such a task: in the event of your active PSC failing, you can simply repoint VC to a second PSC in the same site. The reason I completed this task was as part of a VxRail VVD deployment leveraging a load-balanced PSC configuration, which required repointing the VxRail management VC to the LB FQDN.

If you wish to confirm the current configuration status of PSC & VC simply navigate via the Web Client:

Home->Administration->System Configuration->Nodes->Objects

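For reference, the repoint itself is a one-liner from the VCSA appliance shell using cmsso-util. A minimal dry-run sketch, where the target PSC (or PSC load-balancer) FQDN is a placeholder:

```shell
# Placeholder FQDN of the PSC (or PSC load balancer) to repoint to.
NEW_PSC="psc-02.example.local"

# Dry run: 'echo' prints the command for review; remove it to execute on the VCSA.
echo cmsso-util repoint --repoint-psc "$NEW_PSC"

# Afterwards, verify which lookup service endpoint the node is using:
echo /usr/lib/vmware-vmafd/bin/vmafd-cli get-ls-location --server-name localhost
```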

VxRail – Welcome Enterprise Hybrid Cloud (EHC)

Introducing ‘EHC on VxRail’!


Introduced as part of the EHC 4.1.1 release (code-named “Challenger”) – GA March ’17

Simple, Easy, Start Small, Scalable!

The VxRail HCI Appliance now has the option of including the ‘Enterprise Hybrid Cloud 4.1.1’ platform as part of an automated greenfield installation. Thousands of hours of engineering, design, testing and validation have gone into this release, with a laser focus on delivering an automated ‘EHC on VxRail’ onsite installation. Engineered on the VxRail P470(F) Appliances, ‘EHC on VxRail’ offers a lighter-weight, smaller-footprint hybrid cloud solution where you can start small and scale exponentially.

EMC Isilon – Shutdown via CLI

The following procedure uses Isilon CLI commands to shut down the entire cluster.

Cluster Shutdown Procedure:
1. SSH to node 1 of the cluster and log in as the root user.
2. From the command prompt, flush the file system cache on all nodes:
isi_for_array isi_flush
3. Enter the cluster configuration console:
isi config
4. From the configuration console prompt, shut down all nodes:
shutdown all

If a specific node does not power down follow these steps to shut it down:
1. SSH to the node.
2. Log in as root and type: shutdown -p now
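The per-node fallback above can also be driven from an admin workstation over SSH. A minimal dry-run sketch, assuming root SSH access, with placeholder node addresses:

```shell
# Placeholder addresses of nodes that failed to power down - substitute your own.
NODES="node2.example.local node4.example.local"

for n in $NODES; do
  # Dry run: 'echo' prints the command for review; remove it to execute.
  echo ssh root@"$n" shutdown -p now
done
```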