VxRail – Basic vSAN Overview
The following post provides a basic overview of how the VxRail Appliance disk groups are configured and viewed from both the VxRail Manager and the vSphere Client.
The VxRail Appliance vSAN disk groups are configured in one of two ways:
Hybrid – single SSD for caching and one or more HDDs for capacity. The SSD cache device allocates 70% to reads and 30% to writes (destaged to the capacity tier at 30% full); a read cache miss is serviced by the capacity tier.
All-flash – single SSD for caching and one or more SSDs for capacity. The SSD cache device allocates 100% to writes (destaged to the capacity tier at 30% full); reads are always serviced by flash, either from the cache or, once destaged, from the capacity tier.
In both configurations the amount of storage available to the vSAN datastore is based on the capacity drives alone; the cache devices contribute no usable capacity (see the esxcli check sketched below the list).
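For a quick way to confirm which tier each device serves, esxcli can be used from the ESXi console; a minimal sketch (field labels as seen on recent ESXi builds, so verify against your version):

[root@vvdmgmtesx01:~] esxcli vsan storage list | grep -E 'Device:|Is SSD|Is Capacity Tier'
# 'Is SSD: true' plus 'Is Capacity Tier: false' indicates a cache device;
# 'Is Capacity Tier: true' indicates a capacity device.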
A VxRail node allows for multiple disk groups (please refer to the official VxRail documentation for specifics, as the number of disk groups differs per VxRail model), which in turn provides multiple cache drives per node, potentially improving performance (IOPS, latency). Note that additional disk groups carry a memory overhead; please reference VMware KB 2113954. In this example each VxRail Appliance node has two all-flash disk groups, and each node in the cluster is required to have the same storage configuration, which complies with VMware best practice.
Note: Per the VMware vSAN maximums, each disk group can have a maximum of 7 capacity drives and each host a maximum of 5 disk groups, thus leading to a maximum of 35 capacity drives per host.
RAID: Along with RAID 1 (synchronous mirroring), the erasure coding feature provides the capability of configuring a vSAN policy to use RAID 5/6. Erasure coding is only available on all-flash systems.
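To illustrate the capacity trade-off (standard vSAN erasure coding overheads, not VxRail-specific): protecting 100GB of data with FTT=1 consumes 200GB raw under RAID 1 mirroring versus roughly 133GB under RAID 5 (3+1); with FTT=2, RAID 1 consumes 300GB versus 150GB under RAID 6 (4+2). RAID 5 requires a minimum of 4 hosts and RAID 6 a minimum of 6.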
Jumbo frames: supported; fewer CPU cycles and greater IOPS have been seen with certain workloads after configuring jumbo frames (MTU 9000) across the stack.
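As a sketch of the host-side settings involved (illustrative names: vSwitch0 and vmk2 are assumptions, and on VxRail the vSAN network typically lives on a Distributed Switch, where the MTU is set on the switch object in the vSphere client; the physical network must also support MTU 9000 end to end):

[root@vvdmgmtesx01:~] esxcli network vswitch standard set -v vSwitch0 -m 9000  # standard vSwitch only
[root@vvdmgmtesx01:~] esxcli network ip interface set -i vmk2 -m 9000          # vmk2 = vSAN vmkernel port (assumed)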
From the vSphere client, click on the cluster and navigate to ‘Configure – vSAN – General’. From here we can see that the vSAN cluster for this VxRail Appliance comprises 24 disks in total (4x 13G servers).
‘vSAN – Disk Management’ displays both the disk groups and the disks associated with each disk group. Taking the example below of the first ESXi host in the cluster, we can see the VxRail node has a total of 6 disks contributing storage to the vSAN cluster, comprising two disk groups with three disks in each.
Each disk group includes one 800GB high-endurance SSD flash-cache drive and two 1.92TB SSDs acting as capacity drives.
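As a quick sanity check of the numbers: 4 nodes x 6 disks accounts for the 24 disks reported under ‘vSAN – General’, and since only the capacity drives provide usable space, the raw vSAN capacity here is 4 nodes x 4 x 1.92TB = 30.72TB (the 800GB cache drives add nothing to the datastore), before any RAID/FTT overhead.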
From the disk group example above you can view the associated disk details in the VxRail Manager UI: navigate to ‘HEALTH – Physical’ and, in this case, select the first ESXi host, then click on the disk slots to display details such as health, GUID, model and capacity; there are also options to toggle the disk LED or to replace the disk. The image below highlights the 800GB high-endurance SSD flash-cache drive located in slot 0, which is part of the same disk group viewed above in the vSphere client:
The next two images from the VxRail Manager display the two 1.92TB SSDs acting as capacity drives in the same vSAN disk group:
vdq is also a useful tool to leverage from the ESXi console, returning vSAN disk group details. The vdq -i command, run on the same first ESXi host as in the example above, returns the associated disk group details:
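For reference, the human-readable variant vdq -iH prints each disk group as a mapping of the cache SSD to its capacity devices; a minimal sketch of the output shape, with placeholder device names rather than the actual values from this host:

[root@vvdmgmtesx01:~] vdq -iH
Mappings:
   DiskMapping[0]:
           SSD:  naa.<cache-device-1>
            MD:  naa.<capacity-device>
            MD:  naa.<capacity-device>
   DiskMapping[1]:
           SSD:  naa.<cache-device-2>
            MD:  naa.<capacity-device>
            MD:  naa.<capacity-device>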
vdq -q returns more specific individual disk details for the ESXi host; you will note it also returns details of the SATADOM used for the ESXi install, labelled as "Ineligible for use by VSAN":
[root@vvdmgmtesx01:~] vdq -q
[
   {
      "Name" : "naa.58ce38ee2015b1b9",
      "VSANUUID" : "52a372f8-ee28-b7f1-0160-9f87a80c2d4f",
      "State" : "In-use for VSAN",
      "Reason" : "None",
      "IsSSD" : "1",
      "IsCapacityFlash": "0",
      "IsPDL" : "0",
   },
   {
      "Name" : "naa.58ce38ee2017c371",
      "VSANUUID" : "52430432-21bf-06f1-86ac-b25e83e33f3d",
      "State" : "In-use for VSAN",
      "Reason" : "None",
      "IsSSD" : "1",
      "IsCapacityFlash": "1",
      "IsPDL" : "0",
   },
   {
      "Name" : "t10.ATA_____SATADOM2DML_3SE__________________________TW00T4M4482937BG2028",
      "VSANUUID" : "",
      "State" : "Ineligible for use by VSAN",
      "Reason" : "Has partitions",
      "IsSSD" : "1",
      "IsCapacityFlash": "0",
      "IsPDL" : "0",
   },
   {
      "Name" : "naa.58ce38ee2017d5c5",
      "VSANUUID" : "522996ed-7e9f-6628-c724-3fbfdd0bcc95",
      "State" : "In-use for VSAN",
      "Reason" : "None",
      "IsSSD" : "1",
      "IsCapacityFlash": "1",
      "IsPDL" : "0",
   },
   {
      "Name" : "naa.58ce38ee2017ec75",
      "VSANUUID" : "522894d0-2635-266e-46c8-104b35227712",
      "State" : "In-use for VSAN",
      "Reason" : "None",
      "IsSSD" : "1",
      "IsCapacityFlash": "1",
      "IsPDL" : "0",
   },
   {
      "Name" : "naa.58ce38ee2017eba1",
      "VSANUUID" : "528b30a3-272a-8947-94db-1932da12d1c0",
      "State" : "In-use for VSAN",
      "Reason" : "None",
      "IsSSD" : "1",
      "IsCapacityFlash": "1",
      "IsPDL" : "0",
   },
   {
      "Name" : "naa.58ce38ee2015b1ad",
      "VSANUUID" : "5256bca2-fbef-3e9d-b82f-1c57b94be093",
      "State" : "In-use for VSAN",
      "Reason" : "None",
      "IsSSD" : "1",
      "IsCapacityFlash": "0",
      "IsPDL" : "0",
   },
]
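Counting the entries above: two in-use devices report "IsCapacityFlash": "0" (the two 800GB cache drives) and four report "1" (the four 1.92TB capacity drives), matching the two disk groups of three disks each seen in the vSphere client. A quick tally of the capacity devices (the grep pattern assumes the output formatting shown above):

[root@vvdmgmtesx01:~] vdq -q | grep -c 'IsCapacityFlash.*"1"'
4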