EMC NAS Plug-In For vSphere VAAI (VNXe Example)
The ‘EMC NAS Plug-in’ is required to enable VAAI (vSphere APIs for Array Integration) operations on NFS datastores on an ESXi 5.x host. If you are not familiar with VAAI: its purpose is to offload certain storage-related I/O tasks to the storage array, which reduces the I/O load on the ESXi hosts and their associated networks. Instead of the ESXi host spending resources sending I/O across the network for tasks such as Storage vMotion or cloning a VM, the hypervisor simply sends the NFS commands required for the storage array to perform the data movement itself. For block-based storage arrays the VAAI primitives are available by default on the ESXi host and no plug-in is required.
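As an aside, on a block array you can confirm the primitives directly from the host with esxcli; a quick check along these lines lists the ATS, Clone, Zero and Delete status per device (the naa identifier below is a placeholder, not from this environment):

~ # esxcli storage core device vaai status get
naa.60060160abcdef1234567890abcdef12
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported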
Installation Of The NAS Plug-In On ESXi 5.x
1. Upload the .zip install package (EMCNasPlugin-1.0-11.zip) to the ESXi datastore.
2. Open an SSH Session to the ESXi host and change directory to the location of the install package:
# cd /vmfs/volumes/
If you need to list the name of your datastore:
/vmfs/volumes # ls -l
/vmfs/volumes # cd /vmfs/volumes/DatastoreName/
Run ls again to confirm the .zip package is present.
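For reference, the ls -l output in /vmfs/volumes shows each datastore name as a symlink to its UUID directory, so expect something along these lines (the name and UUID here are examples only):

/vmfs/volumes # ls -l
lrwxr-xr-x    1 root     root            35 Jan  1 12:00 DatastoreName -> 52e1a2b3-c4d5e6f7-8899-001122334455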
3. Ensure the NAS Plug-In is VMwareAccepted:
/vmfs/volumes/DatastoreName # esxcli software sources vib list -d file:///vmfs/volumes/DatastoreName/EMCNasPlugin-1.0-11.zip
Acceptance Level: VMwareAccepted
4. Run the installation:
/vmfs/volumes/DatastoreName # esxcli software vib install -n EMCNasPlugin -d file:///vmfs/volumes/DatastoreName/EMCNasPlugin-1.0-11.zip
Installation Result: completed successfully
Reboot Required: true
VIBs Installed: EMC_bootbank_EMCNasPlugin_1.0-11
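As the output indicates, a reboot is required. Assuming any running VMs have been vMotioned off, the host can be placed in maintenance mode and rebooted from the same SSH session using the standard ESXi commands:

/vmfs/volumes/DatastoreName # esxcli system maintenanceMode set --enable true
/vmfs/volumes/DatastoreName # reboot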
5. Reboot the ESXi host and confirm the EMCNasPlugin VIB is loaded:
~ # esxcli software vib list | grep EMCNasPlugin
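The grep should return a single line for the plug-in. As an additional check, esxcli storage nfs list on 5.x includes a Hardware Acceleration column (the exact layout may vary by build) and should report ‘Supported’ for the NFS datastores once the plug-in is active; the datastore names and NAS server IP below are examples only:

~ # esxcli storage nfs list
Volume Name  Host          Share    Accessible  Mounted  Read-Only  Hardware Acceleration
-----------  ------------  -------  ----------  -------  ---------  ---------------------
NFS01        192.168.1.50  /NFS01   true        true     false      Supported
NFS02        192.168.1.50  /NFS02   true        true     false      Supported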
VAAI Example: ‘Full File Clone’ Primitive Operation With VNXe
‘Full File Clone’ is one of the VAAI NAS primitives; it is used to copy or migrate data within the same physical array (the block equivalent is known as XCOPY). In this example we are using a ‘VNXe 3150’ with two NFS datastores presented to one ESXi 5.5 host with the NAS Plug-In installed (VAAI enabled) and another ESXi 5.5 host without the NAS Plug-In installed (VAAI disabled).
Running a Storage vMotion from the NFS01 datastore to NFS02 on the ESXi host with VAAI enabled generates zero network traffic:
Running a Storage vMotion from the NFS01 datastore to NFS02 on the ESXi host without VAAI enabled maxes out the 1Gb Ethernet link on the host:
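If you want to watch this live from the host rather than from the vCenter performance charts, esxtop provides a network view; press ‘n’ after launching it and watch the MbTX/s and MbRX/s columns for the vmkernel port carrying the NFS traffic:

~ # esxtop
(press ‘n’ to switch to the network panel)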
This is a rather simple example, but it demonstrates how the primitive operates by offloading the I/O to the VNXe array.
Note: If you are accessing the NFS datastore directly via the datastore browser for Copy/Paste functionality then you will not see any benefit from VAAI. This is because the datastore browser has its own API and does not use the internal VMkernel Data Mover or VAAI.
VNXe CPU performance stats during the first Storage vMotion show approximately 20% Storage Processor utilization with VAAI enabled; without VAAI enabled, CPU utilization sits at approximately 70%:
VNXe network performance stats show no network traffic with VAAI enabled; without VAAI, both read and write on SPA use approximately 70MB/s of bandwidth each:
Note: For the ‘Full File Clone’ primitive to perform the offload during an SVMotion the VM needs to be powered off for the duration of the SVMotion.
See also Cormac Hogan’s blog post: VAAI Comparison – Block versus NAS
Great article. I have just started working with EMC products. I am currently migrating from NX to VNXe3200. You definitely see the high network bandwidth on the Storage vMotions from NX to VNXe. I thought Storage vMotions from VNXe VMware datastoreA to datastoreB would be much faster though. I am currently svMotioning a 500GB VM (no snapshots) and it is 82% complete after 1.5 hours. It is RAID 5 SAS with 2-disk FAST Cache. The storage pool is composed of 20 x 820GB @ 10K. I am seeing average max disk bandwidth of 200,000KB/s and disk IOPS of about 1600 IO/s. At first glance I am thinking the system tech should have created one very large storage pool to get better striping performance. Else it is simply the RAID 5 performance and SAS 10K disks. Would you have any input?
Thanks
DRUMDUDESAN
Thanks DRUMDUDESAN!
Given the config you have described (20 x 10K SAS), it is likely you have reached the pool's IOPS limit at around 1600.
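As a rough sanity check (assuming the common planning figure of roughly 150 IOPS per 10K SAS spindle, which is an assumption rather than a measured value): 20 x 150 gives around 3,000 raw IOPS for the pool, and once the RAID 5 write penalty of 4 is applied to the write portion of a Storage vMotion workload, an effective figure in the 1,500-2,000 IOPS range is about what you would expect. The ~1600 IO/s you are seeing therefore looks spindle-bound rather than a pool-layout problem.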
All the best!