VCF On VxRail 4.2 – Walkthrough (Part 2: VI Workload Domain)
Note: this is an example for reference only; please use the VxRail installation procedures & services provided by Dell.
This is the second in a series of posts covering VCF On VxRail 4.2. This detailed post provides an example walkthrough of the process of deploying a VI Workload Domain with a second VDS dedicated to NSX-T traffic:
Versions used in this example are VCF 4.2 & VxRail 7.0.131. Release Notes:
The end result of this example build is a single-site VCF on VxRail solution that includes a single NSX-T VI Workload Domain (thanks to @HeagaSteve for the architectural diagram):
As of VCF 4.0.1 there is an option to deploy a second VDS to support NSX-T traffic. This traffic-separation option is available for both the Management domain (via the input XLS 'vSphere Distributed Switch Profile') and VI WLDs (via a script). This second VDS is owned and managed by VCF. This example showcases the deployment of a second VDS as part of an NSX-T based VI WLD deployment. During VxRail bring-up, 2 pNICs from the NDC are selected; 2 pNICs from the PCIe card are then selected for the dedicated NSX-T VDS during the SDDC Manager process of adding the VxRail cluster to a VI WLD. The following diagram (kudos Cathal Prendeville) depicts the end result: 1x System VDS, 1x NSX-T VDS, 4 pNICs:
vSphere Distributed Switches = Two (2) / Physical NICs = Four (4)
- Primary VDS – cork-vi01-cl01-vds01 (created by VxRail Manager) – System Traffic (Management, vSAN, vMotion), e.g. vmnic0, vmnic1
- Secondary VDS – cork-vi01-cl01-nsxt-vds01 (created by SDDC Manager .py script) – Overlay Traffic (Host, Edge and Uplinks), e.g. vmnic4, vmnic5
Ensure the following are completed from within the SDDC Manager UI:
The following input values are required when creating a VI WLD from the SDDC Manager UI:
After the VI WLD creation task completes, you will notice the WLD appears to be in an 'Activating' state; this is to be expected at this juncture.
The VxRail bring-up of the VI WLD cluster is similar to a standard VxRail bring-up, with the EXTERNAL vCenter deployed by SDDC Manager as per above. Tasks include:
Begin by downloading the multi-DVS Python script from the 'Developer Center' within the SDDC Manager UI, then use SCP to copy the script bundle to the /home/vcf/ directory on SDDC Manager and unzip it (a sketch of these steps follows below):
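A minimal sketch of the copy-and-unzip step, run from the workstation where the bundle was downloaded; the bundle filename and SDDC Manager FQDN below are assumptions for illustration only:

```bash
# Copy the downloaded script bundle to SDDC Manager as the vcf user
# (the bundle name and FQDN are illustrative placeholders).
scp multi-dvs-script.zip vcf@sddc-manager.vcf.example.local:/home/vcf/

# SSH to SDDC Manager and unzip the bundle in /home/vcf/
ssh vcf@sddc-manager.vcf.example.local
cd /home/vcf/
unzip multi-dvs-script.zip
```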
Next, execute the Python script from within the SDDC Manager terminal, entering the required SSO credentials (see the sketch below):
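For reference, a sketch of the invocation from the SDDC Manager shell; the script filename here is a hypothetical placeholder for whichever .py file the bundle extracts:

```bash
# Run the multi-DVS script as the vcf user on SDDC Manager
# (filename is illustrative; use the .py extracted from the bundle).
cd /home/vcf/
python multi_dvs_vi_wld.py
# When prompted, supply the vCenter SSO administrator credentials,
# e.g. administrator@vsphere.local
```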
Now follow the prompts to enter all the required details for the NSX-T VI WLD deployment. The script validates the inputs and then executes; on success, progress can be monitored via the SDDC Manager UI and CLI:
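One way to follow the workflow from the CLI is the SDDC Manager public API; a minimal sketch, assuming a token issued via /v1/tokens (the FQDN and credentials are placeholders):

```bash
# Request an API access token from SDDC Manager (placeholder values).
TOKEN=$(curl -sk -X POST https://sddc-manager.vcf.example.local/v1/tokens \
  -H "Content-Type: application/json" \
  -d '{"username":"administrator@vsphere.local","password":"<password>"}' \
  | python -c 'import sys, json; print(json.load(sys.stdin)["accessToken"])')

# Poll the tasks endpoint to follow the VI WLD workflow status.
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://sddc-manager.vcf.example.local/v1/tasks | python -m json.tool
```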
The NSX-T VI WLD is now deployed and active:
From vCenter we can now see the 2x VDS architecture:
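The same layout can be confirmed from any ESXi host in the cluster; a minimal sketch, assuming SSH is enabled on the host (switch names per the example design above):

```bash
# List the distributed switches this host participates in, along with
# the physical uplinks (vmnics) backing each one.
esxcli network vswitch dvs vmware list

# Per the example design, expect to see:
#   cork-vi01-cl01-vds01       with uplinks vmnic0, vmnic1
#   cork-vi01-cl01-nsxt-vds01  with uplinks vmnic4, vmnic5
```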
Finally, logging into NSX-T Manager, we can review the details of the deployment:
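The objects created by the workflow can also be read back via the NSX-T Manager REST API; a sketch with placeholder FQDN and credentials:

```bash
# List the transport zones (overlay/VLAN) configured for the VI WLD.
curl -sk -u admin:'<password>' \
  https://nsxt-manager.vcf.example.local/api/v1/transport-zones | python -m json.tool

# List the host transport nodes prepared for NSX-T.
curl -sk -u admin:'<password>' \
  https://nsxt-manager.vcf.example.local/api/v1/transport-nodes | python -m json.tool
```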
In Part 3 I will walk through the deployment of an NSX-T Edge Cluster.
Note: this is an example for reference only; please use the VxRail installation procedures provided by Dell EMC.
Thanks for reading!