For VxRail 4.7 & 7.0 please see earlier posts:

VxRail 4.7 – Install Notes

VxRail 7.0 – Install Notes

Note: this is an example for reference only; please use the VxRail installation procedures provided by Dell.

Prerequisites for installation include:

  • Create all relevant ESXi, vCenter and VxRail Manager DNS entries on the AD/DNS server.
  • Verify the customer DNS server can resolve the following components; test both forward and reverse lookups using nslookup (see the nslookup sketch after this list):
    • ESXi Mgmt
    • VxRail Manager
    • vCenter
  • Ping all of the above components to rule out any duplicate IP address conflicts.
  • Verify NTP is working correctly (see the NTP check sketch after this list); the following are some useful links: https://kb.vmware.com/s/article/1035833 & https://kb.vmware.com/s/article/1003736
    • Ensure date is in sync:
      • ESXi set date and time:
        • esxcli system time get 
        • esxcli system time set -d|--day -H|--hour -m|--min -M|--month -s|--sec -y|--year
        • esxcli system time set -d 19 -H 12 -m 30 -M 04 -y 2023
      • vCSA & VxRail Manager – view & set date and time:
        • # date
        • # date -s "19 APR 2023 12:30:00"
  • You can check the installed VxRail code on the ESXi host:
    • esxcli software vib list | grep marvin
  • IPv6 multicast is used by the VxRail Manager loudmouth service and is required for automatic node discovery. VxRail nodes can be added manually in the event of IPv6 network limitations. This discovery network is visible only to the VxRail nodes and depends on IPv6 multicasting services to be configured on the adjacent ‘top-of-rack’ switches for node discovery purposes.
    • TOR01# configure terminal
    • TOR01(config)# interface vlan 3939
    • TOR01(conf-if-vl-3939)# description "VxRail Internal VLAN"
    • TOR01(conf-if-vl-3939)# no shutdown
    • TOR01(conf-if-vl-3939)# ipv6 mld snooping querier
    • TOR01(conf-if-vl-3939)# copy running-configuration startup-configuration
    • TOR01(conf-if-vl-3939)# exit
    • TOR01(config)# exit
    • TOR01# show interface vlan 3939
    • TOR01# show ipv6 mld snooping summary
  • On the first host of the cluster set the VxRail Manager VM IP via IDRAC/DCUI (vxrail-idrac-esxi-shell-access-via-dcui):
    • vxrail-primary --config --vxrail-address 10.1.0.20 --vxrail-netmask 255.255.255.0 --vxrail-gateway 10.1.0.254 --no-roll-back --verbose
  • Check if VxRail Manager VM is running:
    • esxcli vm process list
  • Enter the console (ALT+F1) and change the VLAN ID for the “VM Network”. As per this example the ‘VM Network’ portgroup is tagged with VLAN ID 100:
    • esxcli network vswitch standard portgroup list
    • esxcli network vswitch standard portgroup set -p "VM Network" -v 100
  • Restart loudmouth on each ESXi host:
    •  /etc/init.d/loudmouth restart
  • Restart loudmouth on the VxRail Manager vm:
    • systemctl restart vmware-loudmouth
  • Power cycle the VxRail Manager VM:
    • vim-cmd vmsvc/getallvms
    • vim-cmd vmsvc/power.shutdown vmid
    • vim-cmd vmsvc/power.getstate vmid
    • vim-cmd vmsvc/power.on vmid
    • vim-cmd vmsvc/power.getstate vmid
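
A minimal nslookup sketch for the forward and reverse lookup checks above, assuming hypothetical names and addresses (ESXi host vxrail-esxi01.lab.local on 10.1.0.21, DNS server 10.1.0.10); substitute your own hostnames, IPs and DNS server:

nslookup vxrail-esxi01.lab.local 10.1.0.10
nslookup 10.1.0.21 10.1.0.10
nslookup vxrail-manager.lab.local 10.1.0.10
nslookup vxrail-vcsa.lab.local 10.1.0.10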
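
A quick NTP check sketch on an ESXi host, assuming a release where the esxcli system ntp namespace is available (recent 7.x and 8.x builds); the KB articles above remain the reference:

esxcli system ntp get
esxcli system time get
/etc/init.d/ntpd status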

Check that VxRail Manager can discover available ESXi hosts; login to the VxRail Manager vm via SSH and run the following commands:

  • /usr/lib/vmware-loudmouth/bin/loudmouthc query | grep -o "applianceID=..............." | sort
  • /usr/lib/vmware-loudmouth/bin/loudmouthc query | egrep -o "EMC........-..|..............-..-.." | sed 's/ {"//g' | sort
  • curl --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock -H "Content-Type: application/json" http://127.0.0.1/rest/vxm/internal/do/v1/host/query -d '{"query":"{availableHosts{moid ,name ,hardware{sn psnt} ,config{network{vnic{device, ipv6}}}}}"}' |json_pp

Switch Validation

Confirm NICs are up:

  • esxcli network nic list
  • esxcli network vswitch standard list

Confirm the MAC address of VxRail Manager is showing up on the switch. You can find the VxRail Manager MAC address from the ESXi console and compare it to the MAC address table on the switch:

  • vim-cmd vmsvc/get.guest vmid | more
  • show mac address-table
  • Ensure your switch ports are configured identically for all VxRail port connections.
  • Ensure each port has the required VLANs configured and trunked to all ports. In this example ESXi mgmt 100, vMotion 101, vSAN 102 and default Private Mgmt Network 3939.
  • In a dual switch configuration ensure that the VLANs are also configured on the ISL/VPC peer link (see the switch verification sketch after the sample config below).
  • Sample Dell S4148F OS10 VLAN 100 port config:
!
interface virtual-network100
 description virtual-network100
 no shutdown
 ip address 10.1.0.253/24
 ip virtual-router address 10.1.0.254
!
!
virtual-network 100
 member-interface ethernet1/1/1 vlan-tag 100
 member-interface ethernet1/1/2 vlan-tag 100
 member-interface ethernet1/1/3 vlan-tag 100
 member-interface ethernet1/1/4 vlan-tag 100
 
 vlti-vlan 100
 !
 vxlan-vni 100
!
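
To verify the switch side matches the points above, a short OS10 sketch using standard show commands; adjust VLAN IDs and interface names to your environment:

TOR01# show vlan
TOR01# show interface status
TOR01# show mac address-table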

Enabling LLDP in iDRAC provides additional host details beyond just the MAC address, such as the VxRail model and service tag, when running ‘show lldp neighbors’ on the switch:
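
For example, on the switch (a sketch; the detail option may vary by OS10 release):

TOR01# show lldp neighbors
TOR01# show lldp neighbors detail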

Note on cabling: for VxRail nodes with RJ45 NICs, use Cat 6 or higher Ethernet cables. For nodes with SFP+, SFP28, or QSFP28 NICs, use supported Twinax Direct-Attach-Copper cables with SFP+, SFP28, or QSFP28 connectors for Twinax connectivity, or supported fiber cables with SFP+, SFP28, or QSFP28 optical transceivers for fiber connectivity.

Configure VxRail (Step-By-Step)

VxRail Software used in this example: 8.0.100 build 21666983

curl command to monitor install progress from the VxRail Manager SSH console:

curl -kX GET --user administrator@vsphere.local https://127.0.0.1/rest/vxm/v1/system/initialize/status | python3 -m json.tool

API to monitor Progress:
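
The same initialize status endpoint can also be polled from a workstation; a sketch assuming 10.1.0.20 is the VxRail Manager address and python3 is available locally for pretty-printing:

curl -kX GET --user administrator@vsphere.local https://10.1.0.20/rest/vxm/v1/system/initialize/status | python3 -m json.tool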

Post Install Validation

Here are some quick checks to run after a successful bring-up of VxRail. These checks are confirmed by logging into the vCenter Server HTML client; a few CLI equivalents are sketched below.
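
A few CLI equivalents, as a minimal sketch run from any ESXi host in the cluster (assumes the esxcli vsan namespace, present on vSAN-enabled hosts):

esxcli vsan cluster get
esxcli vsan network list
esxcli vsan health cluster list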

Note: SLES 15 SP4 is the OS used for VxRail Manager (check with cat /etc/os-release).

Check Logs

Monitor the VxRail dayone.log, short.term.log & firstboot.log:

  • dayone.log – detailed information in relation to initial first run configuration. (/var/log/microservice_log/dayone.log)
  • short.term.log – micro services related information. (/var/log/microservice_log/short.term.log)
  • firstboot.log – detailed information in relation to micro services boot up during initial configuration. (/var/log/firstboot.log)

cd /var/log/microservice_log/

# cat dayone.log
# tail -f dayone.log
# tail -f dayone.log | grep ERROR
# tail -n 100 dayone.log
# more dayone.log

Confirm DNS entries in resolv.conf & DB:

cat /etc/resolv.conf

curl -X GET --unix-socket /var/lib/vxrail/nginx/socket/nginx.sock http://localhost/rest/vxm/internal/configservice/v1/configuration/keys/system_dns -H "accept: application/json" -H "Content-Type: application/json"

NTP Validation:

timeout 2 bash -c "</dev/udp/10.104.0.254/123"; echo $?

A return value of 0 indicates success.

To view a “Configuration complete!” equivalent message in the logs, grep for the following string:
/var/log/microservice_log # cat dayone.log | grep "notify {'level': 'workflow', 'state': 'COMPLETED', 'progress': 100}"

RADAR VERSION:

/mystic/radar/radar --version

If the VxRail Deployment Wizard fails to load or the VMware marvin service fails to start:

On the VxRail Manager, /var/log/messages contains events indicating a duplicate IPv6 address was detected and Kubernetes fails to deploy:

cat /var/log/messages | grep "IPv6 duplicate address"

systemctl status runjars.service

systemctl status vmware-marvin.service

Modify the IPv6 address:

vi /etc/sysconfig/network/ifcfg-eth0

IPADDR0='fd39:3939:3939:3939::180'

Save & exit :wq!

systemctl restart network

Reboot VXRM.

After ~10 minutes, check that the Kubernetes “helium” pods have been deployed:

# kubectl get pods

# kubectl get pods -A

# kubectl get pods -o wide

# kubectl get deployments

Confirm services are active or start if not:

systemctl status runjars.service

systemctl status vmware-marvin.service

systemctl start vmware-marvin
systemctl start runjars
