EMC VNXe 3200 – Configuration Steps Via UEMCLI (Part1)

There are some minor CLI changes with VNXe MCx which I will document as part of this series. For VNXe Gen1, please refer to this earlier post:
EMC VNXe Gen1 Configuration Using Unisphere CLI

The initial configuration steps outlined in Part 1:

Accept End User License Agreement
Change the Admin Password
Apply License File
Commit the IO Modules
Perform a Health Check
Code Upgrade
Create A New User
Change the Service Password
Enable SSH

Accept End User License Agreement
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /sys/eula set -agree yes

Change the Admin Password
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /user/account show
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /user/account -id user_admin set -passwd NewPassword -oldpasswd Password123#
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account show

Reference ‘Help’ for any assistance:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword / -help

Apply License File
First, gather the serial number of the VNXe:
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /sys/general show -detail
Then browse to the EMC registration site, entering the VNXe S/N to retrieve the associated .lic file:
https://support.emc.com/servicecenter/registerProduct/

Upload the acquired license file:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword -upload -f C:\Users\david\Downloads\FL100xxx00005_29-July-2015_exp.lic license
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/lic show

Commit the IO Modules
The following commits all uncommitted IO modules:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /env/iomodule commit

Display a list of system IO modules:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /env/iomodule show

Perform a Health Check
It is good practice to perform a Health Check in advance of a code upgrade:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/general healthcheck

Code Upgrade
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/ver show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword -upload -f "C:\Users\david\Downloads\VNXe 2.4.3.21980\VNXe-MR4SP3.1-upgrade-2.4.3.21980-RETAIL.tgz.bin.gpg" upgrade
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/ver show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/upgrade create -candId CAND_1
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/upgrade show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/general healthcheck

Note: Please see a more detailed overview of the upgrade process in a previous post:
https://davidring.ie/2015/03/02/emc-vnxe-code-upgrade/

Create A New User
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account create -name david -type local -passwd DavidPassword -role administrator
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account show

The role for the new account can be:
• administrator — Administrator
• storageadmin — Storage Administrator
• operator — Operator (view only)

Change the Service Password
The Service password is used for performing service actions on the VNXe.
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /service/user set -passwd newPassword -oldpasswd Password123#

Enable SSH
uemcli -d 192.168.1.50 -u service -p NewPassword /service/ssh set -enabled yes

EMC VNXe – Shutdown Procedure


Shutdown via UNISPHERE using the Service Account

VNXe-Shutdown

Please read all notes provided in the ‘More Information…’ section highlighted in the above image before proceeding with shutdown.

Shutdown process as documented in the ‘More Information…’ section:
1. From Unisphere, select Settings > Service System.
2. Enter the Service password to access the Service System page.
3. Under Service Actions, select Shut Down System.
4. Click Execute service action to shut down the storage processors (SPs).
5. In the Service Confirmation dialog box, click OK.
6. Check the status of the shutdown process by looking at the SP LED indicators. The shutdown process is complete when all the Storage Processor Power LEDs are flashing green, the SP Status Fault LED is solid amber, the network management port LEDs are on, and all other Storage Processor LEDs are off.


Shutdown via SSH using the Service Account:

Shutdown command:
svc_shutdown --system-halt

service@(none) spb:~> svc_shutdown --system-halt
###############################################################################
WARNING: This action will shut down the system and you will have to manually
bring it back up afterwards.
###############################################################################
Enter "yes" if want to proceed with this action: yes
Normal Mode
1
1
Peer shutdown now in progress
System shutdown now in progress

EMC VNXe 3200 – MCx Drive Mobility

Related post: ‘EMC VNX – MCx Hot Sparing Considerations’

MCx code has brought many new features, including the revolutionary ‘Multicore RAID’, which in turn includes the ‘Drive Mobility’ feature. Drive Mobility (also referred to as Portable Drives) allows for the physical relocation of drives within the same VNXe: a drive can be moved to another slot within the same DAE, or to another DAE on either the same bus or a different bus. This makes it possible to modify the storage layout of a VNXe, which may be very useful if, for example, additional DAEs are purchased, or if for performance reasons certain drives need to be re-balanced across DAEs or onto a different bus. Another reason, as outlined in the related post highlighted above, is when a drive failure occurs and you wish to move the spared drive back to the failed drive’s slot location once the rebuild has completed.

The drive relocation can be executed online without any impact, provided the drive is relocated within the 5-minute window allowed before the VNXe flags the missing drive as failed and invokes a spare. No other drive within the RAID N+1 configuration may be moved at the same time; moving more drives at once than the RAID N+1 configuration can tolerate (for example, more than one drive at a time in a RAID-5 configuration) may result in a Data Unavailable (DU) situation and/or data corruption. Once a drive is physically removed from its slot, a 5-minute timer starts; if the drive has not been successfully relocated to another slot within the system by the end of this window, a spare drive is invoked to permanently replace the pulled drive. During the physical relocation of a single drive within a pool, the health status of the pool displays ‘degraded’ until the drive has been successfully relocated to another slot, at which point the pool returns to a healthy state, with no permanent sparing or data loss, because only a single drive in the RAID N+1 configuration was moved within the allocated 5-minute relocation window. At this stage a second drive from within the same pool can be moved, and the process repeated until you achieve the desired drive layout.
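To make the timing concrete, the window above can be sketched in plain shell. This is an illustrative helper only (not part of UEMCLI): it reports how many seconds remain in the 5-minute window after a drive is pulled, with the post-move health check shown as a comment.

```shell
#!/bin/sh
# Sketch only: track the 5-minute relocation window described above.
WINDOW=300   # seconds before the VNXe invokes a spare for the missing drive

# seconds_remaining PULL_EPOCH NOW_EPOCH -> seconds left in the window (0 if expired)
seconds_remaining() {
    left=$(( WINDOW - ($2 - $1) ))
    [ "$left" -gt 0 ] || left=0
    printf '%s\n' "$left"
}

pull_time=$(date +%s)            # note the time the drive is pulled
# ...physically relocate the drive, then verify it in its new slot, e.g.:
# uemcli -d VNXe_IP -u Local/admin -p Password /env/disk -id dpe_disk_4 show -detail
seconds_remaining "$pull_time" "$(date +%s)"
```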

You may wonder how this Drive Mobility is possible. With MCx, when a RAID group is created, the drives within it are recorded by their serial numbers rather than by their physical B_E_D (Bus_Enclosure_Disk) location, which was the FLARE approach. This MCx approach of using the drive’s serial number is known as VD (Virtual Drive) and allows a drive to be moved to any slot within the VNXe, since the drive is not mapped to a specific physical location but is instead tracked by its serial number.

Note: System drives DPE_0_0 – 0_3 are excluded from any Drive Mobility:
VNXe-Mobility-Blog5

Example Drive Relocation
For this example the drive located in SLOT-5 of the DPE will be physically removed and placed in SLOT-4 on the same DPE.

VNXe-Mobility-Blog0

Examine the health status of the Drive in SLOT-5 prior to the relocation:
uemcli -d VNXe_IP -u Local/admin -p Password /env/disk -id dpe_disk_5 show -detail

VNXe-Mobility-Blog1

After the relocation of the drive (to SLOT-4 in this example) UNISPHERE will temporarily display a warning:
VNXe-Mobility-Blog2

At this stage it is good practice to perform some checks on the Drive(SLOT-4), Pool and System:
uemcli -d VNXe_IP -u Local/admin -p Password /stor/config/pool show -detail
uemcli -d VNXe_IP -u Local/admin -p Password /sys/general healthcheck
uemcli -d VNXe_IP -u Local/admin -p Password /env/disk -id dpe_disk_4 show -detail

VNXe-Mobility-Blog3

Returning to UNISPHERE after performing the checks, you will notice all warnings have disappeared:
VNXe-Mobility-Blog4

At this stage it is safe to proceed with the next move!

EMC VNXe – Code Upgrade

Before proceeding with any upgrade of code on the VNXe please reference the target code release notes on https://support.emc.com/. The VNXe landing page: http://emc.com/vnxesupport will provide you with all the relevant material and downloads for your upgrade.
VNXe_Code1

Code Upgrade Via UEMCLI
It is important that no configuration changes are taking place on the VNXe, either through Unisphere or UEMCLI, while an upgrade is in progress. For details around NDU or otherwise, please ensure you reference the software candidate release notes. Single-SP VNXe3100/3150 systems will be inaccessible during the system restart, so it is best to plan the upgrade during a maintenance window.

1. Check the current version of code:
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/ver show

ID = INST_1
Type = installed
Version = 2.4.2.21519
Release date = 2013-12-05 19:01:50

2. It is good practice to run a health check and resolve any issues prior to system upgrades:
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/general healthcheck

Operation completed successfully.

3. Upload the upgrade candidate software, in this case the upgrade candidate is Version 2.4.3.21980 of the VNXe Operating Environment. The VNXe OE upgrade files use an encrypted binary file format (.gpg files):
uemcli -d mgmt_ip -u Local/admin -p Password123# -upload -f "path:\VNXe-MR4SP3.1-upgrade-2.4.3.21980-RETAIL.tgz.bin.gpg" upgrade

Uploaded 784.54 MB of 784.54 MB [ 100.0% ] -PROCESSING-
Operation completed successfully.

4. Confirm the presence of the candidate file on the VNXe:
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/ver show

ID = CAND_1
Type = candidate
Version = 2.4.3.21980
Release date = 2014-10-10 19:35:27
Image type = software

5. Perform the upgrade:
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/upgrade create -candId CAND_1

Operation completed successfully.

6. Monitor the upgrade session (takes approximately 1 hour to complete):
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/upgrade show

Status = running
Creation time = 2015-02-09 19:44:51
Elapsed time = 8m 09s
Estimated time left = 10m 00s
Progress = Task 21 of 40 (reboot_peer_sp_if_required)
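The monitoring step can be scripted. The sketch below is an assumption on my part rather than a documented workflow: a small shell helper extracts the Status field from the `show` output so a loop can wait for completion (the live polling loop is left commented out, since it needs a reachable array).

```shell
#!/bin/sh
# Sketch: extract the "Status" field from `/sys/soft/upgrade show` output.
upgrade_status() {
    printf '%s\n' "$1" | sed -n 's/^Status[[:space:]]*=[[:space:]]*//p'
}

# Poll until the session is no longer running (commented out; needs a live array):
# while [ "$(upgrade_status "$(uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/upgrade show)")" = "running" ]; do
#     sleep 60
# done

sample='Status = running
Creation time = 2015-02-09 19:44:51'
upgrade_status "$sample"   # running
```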

EMC VNXe – Gen1&2 Backup Script (Powershell & UEMCLI)

——————————————————————-
Reference: EMC® VNXe® Unisphere® Command Line Interface User Guide:

Collect -Config: Create a snapshot of the current system configuration and save it to a file. It captures all of the data necessary to recreate the current configuration on a new or reinitialized system. It does not capture log files or other types of diagnostic data.

Collect -serviceInfo: Collect information about the system and save it to a .tar file. Service providers can use the collected information to analyze the system.
——————————————————————-

Using native UEMCLI commands integrated with PowerShell, this script automates the process of backing up the current VNXe configuration along with the latest system log files. You will just need to complete a few simple user entries:
◊ Backup Directory
◊ Mgmt IP Address
◊ Service Password

The script will automatically create a sub-directory in the backup location provided. For example, if you input a backup directory of C:\VNXe, this will result in a backup location of C:\VNXe\timeDate.
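For reference, the same timestamped sub-directory logic can be sketched in plain shell (the path is illustrative; the script itself uses PowerShell's `date -f HHmmddMMyyyy`):

```shell
#!/bin/sh
# Sketch: create a backup sub-directory named with a time+date stamp,
# mirroring the PowerShell script's HHmmddMMyyyy format.
base_dir=${1:-/tmp/VNXe}         # stand-in for e.g. C:\VNXe
stamp=$(date +%H%M%d%m%Y)
backup_dir="$base_dir/$stamp"
mkdir -p "$backup_dir"
echo "$backup_dir"
```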

Example Script Execution
VNXe_Backup1

The backup directory location will automatically open on completion of the script:
VNXe_Backup2

Download VNXe_Backup.ps1 (remove the .doc extension after downloading), or use the full text below:

#################################
#
# Reference: VNXe UEMCLI Docs
# Script: VNXe BACKUPS
# Date: 2015-02-10 17:30:00
#
# Version Update:
# 1.0 David Ring
#
#################################

######## Banner ########
Write-Host " "
Write-Host "#########################################################"
Write-Host "#######       VNXe Config and LOGS Backup        ########"
Write-Host "#########################################################"
Write-Host " "


##### Backup Location #####
$BackupLocation = Read-Host "Backup Location:(A sub-dir with the current Time & Date will be created):"
$BackupLocation = (join-path -Path $BackupLocation -ChildPath "$(date -f HHmmddMMyyyy)")	
IF(!(Test-Path "$BackupLocation")){new-item "$BackupLocation" -ItemType directory | Out-Null}
$BackupLocation =  "`"$BackupLocation`""

Write-Host "Backup Location Entered:" $BackupLocation

Start-Sleep -s 3

########################
### VNXe GEN1 Backup ###
########################
$VNXe = Read-Host 'VNXe 3150/3300 Present? y/n:'
if ($VNXe -eq "y") {
$VNXeIP = Read-Host 'VNXe IP Address:'
$VNXePW = Read-Host 'VNXe Service Password:'
Write-Host " "
Write-Host "########################################"
Write-Host "#######  VNXe 3150/3300 Backup  ########"
Write-Host "########################################"
Write-Host " "
Write-Host "VNXe IP Address:" $VNXeIP
Write-Host "VNXe Service Password:" $VNXePW
Write-Host " "
Start-Sleep -s 3
$VNXeConfig = (uemcli.exe -d $VNXeIP -u service -p $VNXePW -download -d $BackupLocation config)
Write-Host "### VNXe Config Backup Complete. ###"
Write-Host " "
Write-Host "### Now Generating VNXe Log Files! ###"
$VNXeConfig = (uemcli.exe -d $VNXeIP -u service -p $VNXePW /service/system collect -serviceInfo)
$VNXeConfig = (uemcli.exe -d $VNXeIP -u service -p $VNXePW -download -d $BackupLocation serviceInfo)
Write-Host " "
Write-Host "##################################################"
Write-Host "#######    VNXe GEN1 Backup Complete      ########"
Write-Host "##################################################"
Write-Host " "
}


########################
### VNXe GEN2 Backup ###
########################
$VNXe = Read-Host 'VNXe 3200 Present? y/n:'
if ($VNXe -eq "y") {
$VNXeIP = Read-Host 'VNXe IP Address:'
$VNXePW = Read-Host 'VNXe Service Password:'
Write-Host " "
Write-Host "####################################"
Write-Host "########  VNXe 3200 Backup  ########"
Write-Host "####################################"
Write-Host " "
Write-Host "VNXe IP Address:" $VNXeIP
Write-Host "VNXe Service Password:" $VNXePW
Write-Host " "
Start-Sleep -s 3
$VNXeConfig = (uemcli.exe -d $VNXeIP -u service -p $VNXePW /service/system collect -config -showPrivateData)
$VNXeConfig = (uemcli.exe -d $VNXeIP -u service -p $VNXePW -download -d $BackupLocation config)
Write-Host "### VNXe Config Backup Complete. ###"
Write-Host " "
Write-Host "### Now Generating VNXe Log Files! ###"
$VNXeLOGS = (uemcli.exe -d $VNXeIP -u service -p $VNXePW /service/system collect -serviceInfo)
$VNXeLOGS = (uemcli.exe -d $VNXeIP -u service -p $VNXePW -download -d $BackupLocation serviceInfo)
Write-Host " "
Write-Host "##################################################"
Write-Host "#######     VNXe GEN2 Backup Complete     ########"
Write-Host "##################################################"
Write-Host " "
}

Start-Sleep -s 3

$BackupLocation = $BackupLocation -replace '"', ""
invoke-item $BackupLocation

Read-Host "Confirm presence of the 'Config File' and 'LOG Files' in the Backup Directory!"

######################## END ########################

EMC VNXe Configuration Using Unisphere CLI (Part 3)

Part 1
Part 2

This is the third part in the series on configuring the VNXe via command line. Here I will detail the steps involved in adding VMware ESXi hosts, presenting NFS datastores to those hosts, and setting access rights.

  • Add VMware ESXi Hosts
  • Add VNXe NFS Volumes to VMware ESXi Hosts
  • Setting Access Rights

Note: VMware networking and Nexus port channels must be configured at this stage. See below for example Nexus vPC configs.

Add VMware ESXi Hosts
Using the ESXi mgmt address, add two ESXi hosts as follows:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx create -addr 192.168.105.10 -username root -passwd Password

uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx create -addr 192.168.105.11 -username root -passwd Password

Expected Output:
ID = 1005
ID = 1007

Operation completed with partial success.
The create, refresh, or set operation has started. It will continue to add or update ESX host information in the background.

It takes approximately 2 minutes to add each host after receiving the output above. View details of the connected ESXi hosts:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /remote/host show
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx show

Output:
ESXi01 ID = 1005
Name = ESXi01
Address = 192.168.106.10,192.168.105.10,192.168.102.101
OS type = esx

ESXi02 ID = 1007
Name = ESXi02
Address = 192.168.106.11,192.168.105.11,192.168.102.102
OS type = esx

Three IP addresses are returned; in this case there is one IP address each for mgmt, vMotion and NFS traffic. We are only concerned with applying access permissions at the NFS level. In this example the NFS addresses are 192.168.102.101 and 192.168.102.102.
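Since only the NFS vmkernel address matters for the access list, the comma-separated Address field can be filtered. A minimal sketch, assuming (as in this example) that the NFS subnet is 192.168.102.0/24:

```shell
#!/bin/sh
# Sketch: pick the NFS vmkernel address out of the comma-separated
# Address list returned by `/virt/vmw/esx show`.
nfs_addr() {
    printf '%s\n' "$1" | tr ',' '\n' | grep '^192\.168\.102\.'
}

nfs_addr "192.168.106.10,192.168.105.10,192.168.102.101"   # 192.168.102.101
```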

Checking in the GUI we can confirm the hosts were added successfully:
VNXe_ESXi1

Add VNXe NFS Volumes to VMware ESXi Hosts & Set Access
First we need to gather the Network File System (NFS) IDs:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs show
Output:
NFS_1: ID = app_1
Name = AMP-NFS-01

NFS_2: ID = app_2
Name = AMP-NFS-02

Add NFS volumes app_1 and app_2 to the hosts using the VNXe IDs of the hosts (1005, 1007), and assign root access only to the NFS vmkernel address of each ESXi host:

  • ESXi01 [1005] vmkernel NFS port group IP 192.168.102.101
  • ESXi02 [1007] vmkernel NFS port group IP 192.168.102.102

uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs -id app_1 set -defAccess na -rootHosts 1005[192.168.102.101],1007[192.168.102.102]
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs -id app_2 set -defAccess na -rootHosts 1005[192.168.102.101],1007[192.168.102.102]

Display the ESXi hosts connected to the VNXe NFS volumes and their respective access rights:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs show -detail
Output:
NFS_1: ID = app_1
Name = AMP-NFS-01
Server = file_server_0
Storage pool = AMP-NFS
Size = 2199023255552 (2.0T)
Root hosts = 1005[192.168.102.101], 1007[192.168.102.102]

NFS_2: ID = app_2
Name = AMP-NFS-02
Server = file_server_0
Storage pool = AMP-NFS
Size = 2199023255552 (2.0T)
Root hosts = 1005[192.168.102.101], 1007[192.168.102.102]

VNXe_ESXi2

VNXe_ESXi3

Example Cisco Nexus vPC Configs with VNXe 10Gig Interfaces

VPC_VNXe
NEXUS SWITCH ‘A’ VPC CONFIG:
interface Ethernet1/25
description VNXe3300-SPA-Port1-10Gbe
switchport mode trunk
switchport trunk allowed vlan 102
spanning-tree port type edge trunk
flowcontrol receive on
channel-group 41 mode active

interface Ethernet1/26
description VNXe3300-SPB-Port1-10Gbe
switchport mode trunk
switchport trunk allowed vlan 102
spanning-tree port type edge trunk
flowcontrol receive on
channel-group 42 mode active

interface port-channel41
description Port_Channel_To VNXe_SPA-10Gbe-Ports
switchport mode trunk
switchport trunk allowed vlan 102
spanning-tree port type edge trunk
flowcontrol receive on
no negotiate auto
vpc 41

interface port-channel42
description Port_Channel_To VNXe_SPB_10Gbe-Ports
switchport mode trunk
switchport trunk allowed vlan 102
spanning-tree port type edge trunk
flowcontrol receive on
no negotiate auto
vpc 42

NEXUS SWITCH ‘B’ VPC CONFIG:
interface Ethernet1/25
description VNXe3300-SPA-Port2-10Gbe
switchport mode trunk
switchport trunk allowed vlan 102
spanning-tree port type edge trunk
flowcontrol receive on
channel-group 41 mode active

interface Ethernet1/26
description VNXe3300-SPB-Port2-10Gbe
switchport mode trunk
switchport trunk allowed vlan 102
spanning-tree port type edge trunk
flowcontrol receive on
channel-group 42 mode active

interface port-channel41
description Port_Channel_To VNXe_SPA_10Gbe-Ports
switchport mode trunk
switchport trunk allowed vlan 102
spanning-tree port type edge trunk
flowcontrol receive on
no negotiate auto
vpc 41

interface port-channel42
description Port_Channel_To VNXe_SPB_10Gbe-Ports
switchport mode trunk
switchport trunk allowed vlan 102
spanning-tree port type edge trunk
flowcontrol receive on
vpc 42

Note: the VNXe NFS interface ‘if_0’ must have the corresponding VLAN ID configured:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/if -id if_0 set -vlanId 102
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/if show

Output:
ID = if_0
Port = eth10_SPA
VLAN ID = 102
VNXe_ESXi6

EMC NAS Plug-In For vSphere VAAI (VNXe Example)

The ‘EMC NAS Plug-in’ is required in order to enable VAAI (vSphere APIs for Array Integration) operations on NFS datastores on an ESXi 5.x host. If you are not familiar with VAAI: the purpose of enabling the VAAI API is to offload certain storage-related I/O tasks to the storage array, which reduces the I/O requirement on the ESXi hosts and their associated networks. Instead of the ESXi host using resources to send I/O across the network for tasks such as Storage vMotion or cloning a VM, the hypervisor now just sends the NFS-related commands required for the storage array to perform the necessary data movement. For block-based storage arrays the VAAI primitives are available by default on the ESXi host and no plug-in is required.

Installation Of The NAS Plug-In On ESXi 5.x
1. Upload the .zip install package (EMCNasPlugin-1.0-11.zip) to the ESXi datastore.
2. Open an SSH session to the ESXi host and change directory to the location of the install package:
# cd /vmfs/volumes/
If you need to list the name of your datastore:
/vmfs/volumes # ls -l
/vmfs/volumes # cd /vmfs/volumes/DatastoreName/
Run ls again to confirm the .zip package is present.
3. Ensure the NAS Plug-In is VMwareAccepted:
/vmfs/volumes/DatastoreName # esxcli software sources vib list -d file:///vmfs/volumes/DatastoreName/EMCNasPlugin-1.0-11.zip
Acceptance Level: VMwareAccepted
4. Run the installation:
/vmfs/volumes/DatastoreName # esxcli software vib install -n EMCNasPlugin -d file:///vmfs/volumes/DatastoreName/EMCNasPlugin-1.0-11.zip
Installation Result: completed successfully
Reboot Required: true
VIBs Installed: EMC_bootbank_EMCNasPlugin_1.0-11

5. Reboot the ESXi host and confirm the EMCNasPlugin VIB is loaded:
~ # esxcli software vib list | grep EMCNasPlugin

VAAI Example: ‘Full File Clone’ Primitive Operation With VNXe
‘Full File Clone’ is one of the VAAI NAS primitives, used to copy or migrate data within the same physical array (the block equivalent is known as XCOPY). In this example we are using a VNXe 3150 with two NFS datastores presented to one ESXi 5.5 host with the NAS Plug-In installed (VAAI enabled) and another ESXi 5.5 host without the NAS Plug-In installed (VAAI disabled).

NAS_VAAI0

Running a Storage vMotion from the NFS01 datastore to NFS02 on the ESXi host with VAAI enabled generates zero network traffic:

NAS_VAAI1

Running a Storage vMotion from the NFS01 datastore to NFS02 on the ESXi host without VAAI enabled maxes out the 1Gig Ethernet link on the host:

NAS_VAAI2

This is a rather simple example, but it shows how the primitive operates by offloading the I/O tasks to the VNXe array.

Note: If you are accessing the NFS datastore directly via the datastore browser for copy/paste functionality, you will not see any benefit from VAAI. This is because the datastore browser has its own API and does not use the internal VMkernel Data Mover or VAAI.

VNXe CPU performance stats during the first SVMotion show approximately 20% Storage Processor utilization with VAAI enabled; without VAAI enabled, CPU utilization is approximately 70%:

NAS_VAAI3

VNXe network performance stats show no network traffic with VAAI enabled; without VAAI, read and write for SPA each use approximately 70MB of bandwidth:
NAS_VAAI4

Note: For the ‘Full File Clone’ primitive to perform the offload during an SVMotion, the VM needs to be powered off for the duration of the SVMotion.

See also Cormac Hogan’s blog post: VAAI Comparison – Block versus NAS