VCF ON VXRAIL – REDUCE A CLUSTER (REMOVE NODE)
In a previous post I provided an example walk-through of the process for adding a new host to an existing VCF on VxRail ‘VI Workload Domain (WLD)’:
VCF on VxRail – Expand a Cluster (Add Node)
In this post I will provide an example walk-through of the process for removing a host from a VCF on VxRail ‘VI Workload Domain (WLD)’. There are various reasons you may wish to perform this task, but a common one is re-using the host to bolster resources in another VI WLD cluster.
Note: Please ensure you have sufficient resources remaining after the removal of the host to accommodate the existing workloads on the VI WLD cluster from which the host is being removed (in particular, ensure vSAN retains the required number of members to support your protection policies). This is an example for reference only; please use the official VxRail procedures provided by Dell EMC.
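As a quick pre-check, you can SSH to any host in the cluster and review vSAN health and membership from the command line (a hedged sketch; run these on a remaining host):
# esxcli vsan health cluster list
# esxcli vsan cluster get
The first command summarises the cluster-level vSAN health checks; the second shows the current membership, including the ‘Sub-Cluster Member Count’, which should drop by one once the host is removed.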
At a high level there are two key integrated procedures to follow, leveraging:
1. SDDC Manager (removes the host from the VI WLD cluster)
2. The VxRail Manager plugin in vCenter (removes the host from the VxRail cluster)
This example begins with a 4-node VCF on VxRail ‘VI Workload Domain’, and we reduce the host count by one.
Logging into vCenter we can see the existing 4-node cluster. This post will show the removal of host ‘vcfesxi08’:
From the SDDC Manager dashboard we can also confirm the current VI WLD configuration, which consists of 4 hosts, via Inventory->Workload Domains->View Details->VI WLD->Hosts:
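The same host inventory can also be pulled from the VCF public API, if your VCF release includes it (a hedged sketch; the SDDC Manager hostname and credentials here are illustrative). First request a token:
# curl -sk -X POST https://sddc-manager/v1/tokens -H "Content-Type: application/json" -d '{"username": "administrator@vsphere.local", "password": "<password>"}'
Then list the hosts using the returned accessToken as a bearer token:
# curl -sk https://sddc-manager/v1/hosts -H "Authorization: Bearer <accessToken>"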
From the SDDC Manager dashboard navigate to Inventory->Workload Domains->View Details->VI WLD (viwld01)->Clusters->VI WLD (viwld01):
From the Hosts tab select the host to be removed and click the Remove Selected Hosts button:
Confirm the removal of the host from the cluster:
From the Tasks pane we can view the list of steps the workflow executes and track its progress:
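If you retrieved an API token earlier, workflow progress can also be polled from the VCF public API (again a hedged sketch, subject to your VCF release):
# curl -sk https://sddc-manager/v1/tasks -H "Authorization: Bearer <accessToken>"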
Monitor log:
/var/log/vmware/vcf/operationsmanager/operationsmanager.log
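To follow the log live, SSH to the SDDC Manager VM and tail it, optionally filtering for the host being removed:
# tail -f /var/log/vmware/vcf/operationsmanager/operationsmanager.log
# grep vcfesxi08 /var/log/vmware/vcf/operationsmanager/operationsmanager.log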
From the Hosts tab we can now see that host vcfesxi08 has been successfully removed from the ‘VI WLD’ cluster:
With the SDDC Manager workflow complete, we next leverage the VxRail Manager plugin in vCenter to reduce the VxRail cluster itself.
Click on the VxRail cluster in vCenter from which we are removing a host, then click on the Hosts tab. From here you can see that the host has been placed in maintenance mode by the SDDC Manager workflow, which allows VxRail Manager to proceed with the task of removing the host:
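You can also verify the maintenance mode state directly on the host over SSH; the expected output is ‘Enabled’:
# esxcli system maintenanceMode get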
Right-click the host to be removed, which is vcfesxi08 in this example, and select VxRail->Remove VxRail Host:
Enter the vSphere credentials and click VERIFY. On successful verification, click the APPLY button, which in turn initiates the VxRail REMOVE HOSTS workflow:
Monitoring progress of the workflow from vCenter:
Host vcfesxi08 successfully removed from the VxRail cluster. The cluster ‘vcfviwld’ now has 3 hosts:
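To double-check from the CLI, SSH to any remaining host and confirm the vSAN membership; the ‘Sub-Cluster Member Count’ should now report 3:
# esxcli vsan cluster get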
NOTE: After removing a host from a vSAN cluster you may receive the following error on the remaining ESXi hosts in the cluster:
‘Host cannot communicate with all other nodes in virtual SAN enabled cluster’
This message is cosmetic (do ensure your vSAN is in a healthy state) and requires a restart of the VPXA management agent to clear. The workaround is detailed in the following VMware KB: https://kb.vmware.com/s/article/2143214
Simply SSH to one of the hosts in the cluster and restart VPXA:
# /etc/init.d/vpxa restart
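Optionally confirm the agent came back up cleanly before moving on (ESXi init scripts accept a status argument):
# /etc/init.d/vpxa status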
The host will now require a RASR (Rapid Appliance Self Recovery) reset if you wish to re-purpose the node for another VxRail cluster.
Hope that helps!