EMC VNXe 3200 – MCx Drive Mobility

Related post: ‘EMC VNX – MCx Hot Sparing Considerations’

MCx code has introduced many new features, including the revolutionary ‘Multicore RAID’, which in turn includes the ‘Drive Mobility’ feature. Drive Mobility (also referred to as Portable Drives) allows for the physical relocation of drives within the same VNXe: a drive can be moved to another slot within the same DAE, or to another DAE on the same bus or on a different bus. This makes it possible to modify the storage layout of a VNXe, which can be very useful when, for example, additional DAEs are purchased, or when a re-balance of certain drives across DAEs or buses is required for performance reasons. Another use case, outlined in the related post above, is moving a spared drive back to the failed drive’s slot once the rebuild has completed after a drive failure.

Drive relocation can be carried out online without impact, provided the drive is re-seated within the 5 minute window allowed before the VNXe flags the missing drive as failed and invokes a spare. Only one drive within a RAID N+1 configuration may be moved at a time; moving more drives than the RAID protection allows (for example, more than one drive at a time in a RAID-5 configuration) may result in a Data Unavailable (DU) situation and/or data corruption. Once a drive is physically removed from its slot, a 5 minute timer starts; if the drive is not successfully relocated to another slot within that window, a spare drive is invoked to permanently replace the pulled drive. While a single drive from a pool is being relocated, the health status of the pool will display ‘degraded’; once the drive has been successfully re-seated in another slot, the pool returns to a healthy state with no permanent sparing and no data loss, because only a single drive in a RAID N+1 configuration was moved within the 5 minute relocation window. At this stage a second drive from the same pool can be moved, and the process repeated until the desired drive layout is achieved.
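
As a rough per-move check sequence (a sketch using the same uemcli commands shown in the example below; the IP, credentials and IDs are placeholders for your own environment):
1. Confirm the pool is healthy before pulling a drive:
uemcli -d VNXe_IP -u Local/admin -p Password /stor/config/pool show -detail
2. Physically relocate one drive only, re-seating it within the 5 minute window.
3. Confirm the pool has returned to a healthy state before moving the next drive:
uemcli -d VNXe_IP -u Local/admin -p Password /stor/config/pool show -detail
uemcli -d VNXe_IP -u Local/admin -p Password /sys/general healthcheck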

You may wonder how this Drive Mobility is possible. With MCx, when a RAID group is created, the drives within it are recorded by their serial numbers rather than by their physical B_E_D (Bus_Enclosure_Disk) location, which was the FLARE approach. This new MCx approach of referencing a drive by its serial number is known as VD (Virtual Drive) and allows the drive to be moved to any slot within the VNXe, as the drive is not mapped to a specific physical location but is instead identified by its serial number.
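
To review the drive IDs and their current status before planning any moves, the full disk inventory can be listed (same uemcli syntax as the example below; credentials are placeholders):
uemcli -d VNXe_IP -u Local/admin -p Password /env/disk show -detail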

Note: System drives DPE_0_0 – 0_3 are excluded from any Drive Mobility:
VNXe-Mobility-Blog5

Example Drive Relocation
For this example the drive located in SLOT-5 of the DPE will be physically removed and placed in SLOT-4 on the same DPE.

VNXe-Mobility-Blog0

Examine the health status of the Drive in SLOT-5 prior to the relocation:
uemcli -d VNXe_IP -u Local/admin -p Password /env/disk -id dpe_disk_5 show -detail

VNXe-Mobility-Blog1

After the relocation of the drive (to SLOT-4 in this example), Unisphere will temporarily display a warning:
VNXe-Mobility-Blog2

At this stage it is good practice to perform some checks on the drive (SLOT-4), the pool and the system:
uemcli -d VNXe_IP -u Local/admin -p Password /stor/config/pool show -detail
uemcli -d VNXe_IP -u Local/admin -p Password /sys/general healthcheck
uemcli -d VNXe_IP -u Local/admin -p Password /env/disk -id dpe_disk_4 show -detail

VNXe-Mobility-Blog3

Returning to Unisphere after performing the checks, you will notice all warnings have disappeared:
VNXe-Mobility-Blog4

At this stage it is safe to proceed with the next move!

EMC VNXe – Code Upgrade

Before proceeding with any code upgrade on the VNXe, please reference the target code release notes on https://support.emc.com/. The VNXe landing page (http://emc.com/vnxesupport) will provide you with all the relevant material and downloads for your upgrade.
VNXe_Code1

Code Upgrade Via UEMCLI
It is important that no configuration changes are made on the VNXe, either through Unisphere or UEMCLI, while an upgrade is in progress. For details around NDU or otherwise, please ensure you reference the software candidate release notes. For single-SP VNXe3100/3150 systems the array will be inaccessible during the system restart, so it is best to plan the upgrade for a maintenance window.

1. Check the current version of code:
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/ver show

ID = INST_1
Type = installed
Version = 2.4.2.21519
Release date = 2013-12-05 19:01:50

2. It is good practice to run a health check and resolve any issues prior to system upgrades:
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/general healthcheck

Operation completed successfully.

3. Upload the upgrade candidate software; in this case the candidate is version 2.4.3.21980 of the VNXe Operating Environment. The VNXe OE upgrade files use an encrypted binary file format (.gpg files):
uemcli -d mgmt_ip -u Local/admin -p Password123# -upload -f "path:\VNXe-MR4SP3.1-upgrade-2.4.3.21980-RETAIL.tgz.bin.gpg" upgrade

Uploaded 784.54 MB of 784.54 MB [ 100.0% ] -PROCESSING-
Operation completed successfully.

4. Confirm the presence of the candidate file on the VNXe:
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/ver show

ID = CAND_1
Type = candidate
Version = 2.4.3.21980
Release date = 2014-10-10 19:35:27
Image type = software

5. Perform the upgrade:
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/upgrade create -candId CAND_1

Operation completed successfully.

6. Monitor the upgrade session (it takes approximately one hour to complete):
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/upgrade show

Status = running
Creation time = 2015-02-09 19:44:51
Elapsed time = 8m 09s
Estimated time left = 10m 00s
Progress = Task 21 of 40 (reboot_peer_sp_if_required)
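
7. Once the upgrade session reports as completed, confirm the new version is installed and re-run the health check (the same commands used in steps 1 and 2):
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/soft/ver show
uemcli -d mgmt_ip -u Local/admin -p Password123# /sys/general healthcheck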

EMC VNXe Configuration Using Unisphere CLI (Part 3)

Part 1
Part 2

This is the third part in the series on configuring the VNXe via the command line. Here I will detail the steps involved in adding VMware ESXi hosts, presenting NFS datastores to these hosts, and setting access rights.

  • Add VMware ESXi Hosts
  • Add VNXe NFS Volumes to VMware ESXi Hosts
  • Setting Access Rights

Note: VMware networking and Nexus port channels must be configured at this stage. See below for example Nexus vPC configs.

Add VMware ESXi Hosts
Using the ESXi management addresses, add the two ESXi hosts as follows:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx create -addr 192.168.105.10 -username root -passwd Password

uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx create -addr 192.168.105.11 -username root -passwd Password

Expected Output:
ID = 1005
ID = 1007

Operation completed with partial success.
The create, refresh, or set operation has started. It will continue to add or update ESX host information in the background.

It takes approximately 2 minutes to add each host after receiving the output above. View details of the connected ESXi hosts:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /remote/host show
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx show

Output:
ESXi01 ID = 1005
Name = ESXi01
Address = 192.168.106.10,192.168.105.10,192.168.102.101
OS type = esx

ESXi02 ID = 1007
Name = ESXi02
Address = 192.168.106.11,192.168.105.11,192.168.102.102
OS type = esx

Three IP addresses are returned for each host; in this case there is one IP address each for management, vMotion and NFS traffic. We are only concerned with applying access permissions at the NFS level. In this example the NFS addresses are 192.168.102.101 and 192.168.102.102.

Checking in the GUI we can confirm the hosts were added successfully:
VNXe_ESXi1

Add VNXe NFS Volumes to VMware ESXi Hosts & Set Access
We first need to gather the Network File System (NFS) IDs:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs show
Output:
NFS_1: ID = app_1
Name = AMP-NFS-01

NFS_2: ID = app_2
Name = AMP-NFS-02

Add NFS Volumes APP_1&2 to hosts using the VNXe ID of the hosts (1005,1007) and assign root access to only the NFS Vmkernel address of the ESXi hosts:

  • ESXi01[1005] vmKernel NFS Port Group IP 192.168.102.101
  • ESXi02[1007] vmKernel NFS Port Group IP 192.168.102.102
  • uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs -id app_1 set -defAccess na -rootHosts 1005[192.168.102.101],1007[192.168.102.102]
    uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs -id app_2 set -defAccess na -rootHosts 1005[192.168.102.101],1007[192.168.102.102]

    Display the ESXi hosts connected to the VNXe NFS volumes and their respective access rights:
    uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs show -detail
    Output:
    NFS_1: ID = app_1
    Name = AMP-NFS-01
    Server = file_server_0
    Storage pool = AMP-NFS
    Size = 2199023255552 (2.0T)
    Root hosts = 1005[192.168.102.101], 1007[192.168.102.102]

    NFS_2: ID = app_2
    Name = AMP-NFS-02
    Server = file_server_0
    Storage pool = AMP-NFS
    Size = 2199023255552 (2.0T)
    Root hosts = 1005[192.168.102.101], 1007[192.168.102.102]

    VNXe_ESXi2

    VNXe_ESXi3

    Example Cisco Nexus VPC Configs with VNXe 10Gig Interfaces

    VPC_VNXe
NEXUS SWITCH ‘A’ VPC CONFIG:
    interface Ethernet1/25
    description VNXe3300-SPA-Port1-10Gbe
    switchport mode trunk
    switchport trunk allowed vlan 102
    spanning-tree port type edge trunk
    flowcontrol receive on
    channel-group 41 mode active

    interface Ethernet1/26
    description VNXe3300-SPB-Port1-10Gbe
    switchport mode trunk
    switchport trunk allowed vlan 102
    spanning-tree port type edge trunk
    flowcontrol receive on
    channel-group 42 mode active

    interface port-channel41
    description Port_Channel_To VNXe_SPA-10Gbe-Ports
    switchport mode trunk
    switchport trunk allowed vlan 102
    spanning-tree port type edge trunk
    flowcontrol receive on
    no negotiate auto
    vpc 41

    interface port-channel42
    description Port_Channel_To VNXe_SPB_10Gbe-Ports
    switchport mode trunk
    switchport trunk allowed vlan 102
    spanning-tree port type edge trunk
    flowcontrol receive on
    no negotiate auto
    vpc 42

    NEXUS SWITCH ‘B’ VPC CONFIG:
    interface Ethernet1/25
    description VNXe3300-SPA-Port2-10Gbe
    switchport mode trunk
    switchport trunk allowed vlan 102
    spanning-tree port type edge trunk
    flowcontrol receive on
    channel-group 41 mode active

    interface Ethernet1/26
    description VNXe3300-SPB-Port2-10Gbe
    switchport mode trunk
    switchport trunk allowed vlan 102
    spanning-tree port type edge trunk
    flowcontrol receive on
    channel-group 42 mode active

    interface port-channel41
    description Port_Channel_To VNXe_SPA_10Gbe-Ports
    switchport mode trunk
    switchport trunk allowed vlan 102
    spanning-tree port type edge trunk
    flowcontrol receive on
    no negotiate auto
    vpc 41

    interface port-channel42
    description Port_Channel_To VNXe_SPB_10Gbe-Ports
    switchport mode trunk
    switchport trunk allowed vlan 102
    spanning-tree port type edge trunk
    flowcontrol receive on
    vpc 42

Note: the VNXe NFS interface ‘if_0’ must have the corresponding VLAN ID configured:
    uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/if -id if_0 set -vlanId 102
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/if show

    Output:
    ID = if_0
    Port = eth10_SPA
    VLAN ID = 102
    VNXe_ESXi6

    EMC NAS Plug-In For vSphere VAAI (VNXe Example)

The ‘EMC NAS Plug-in’ is required in order to enable VAAI (vSphere APIs for Array Integration) operations on NFS datastores on an ESXi 5.x host. If you are not familiar with VAAI, the purpose of enabling the VAAI API is to offload certain storage-related I/O tasks to the storage array, which reduces the I/O load on the ESXi hosts and their associated networks. Instead of the ESXi host using resources to send I/O across the network for tasks such as Storage vMotion or cloning a VM, the hypervisor now just sends the NFS-related commands required for the storage array to perform the necessary data movement. For block-based storage arrays the VAAI primitives are available by default on the ESXi host and no plug-in is required.

    Installation Of The NAS Plug-In On ESXi 5.x
    1. Upload the .zip install package (EMCNasPlugin-1.0-11.zip) to the ESXi datastore.
    2. Open an SSH Session to the ESXi host and change directory to the location of the install package:
    # cd /vmfs/volumes/
    If you need to list the name of your datastore:
    /vmfs/volumes # ls -l
    /vmfs/volumes # cd /vmfs/volumes/DatastoreName/
    ls again to confirm the .zip package is present.
    3. Ensure the NAS Plug-In is VMwareAccepted:
    /vmfs/volumes/DatastoreName # esxcli software sources vib list -d file:///vmfs/volumes/DatastoreName/EMCNasPlugin-1.0-11.zip
    Acceptance Level: VMwareAccepted
    4. Run the installation:
    /vmfs/volumes/DatastoreName # esxcli software vib install -n EMCNasPlugin -d file:///vmfs/volumes/DatastoreName/EMCNasPlugin-1.0-11.zip
    Installation Result: completed successfully
    Reboot Required: true
    VIBs Installed: EMC_bootbank_EMCNasPlugin_1.0-11

5. Reboot the ESXi host and confirm the EMCNasPlugin VIB is loaded:
    ~ # esxcli software vib list | grep EMCNasPlugin
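
As an additional check (assuming ESXi 5.x), the mounted NFS datastores should now report hardware acceleration as supported:
~ # esxcli storage nfs list
The output includes a ‘Hardware Acceleration’ column, which should read ‘Supported’ once the plug-in is active.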

    VAAI Example: ‘Full File Clone’ Primitive Operation With VNXe
‘Full File Clone’ is one of the VAAI NAS primitives and is used to copy or migrate data within the same physical array (the block equivalent is known as XCOPY). In this example we are using a VNXe 3150 with two NFS datastores presented to one ESXi 5.5 host with the NAS Plug-In installed (VAAI enabled) and to another ESXi 5.5 host without the NAS Plug-In installed (VAAI disabled).

    NAS_VAAI0

    Running a Storage vMotion from the NFS01 datastore to NFS02 on the ESXi host with VAAI enabled generates zero network traffic:

    NAS_VAAI1

Running a Storage vMotion from the NFS01 datastore to NFS02 on the ESXi host without VAAI enabled maxes out the 1 GbE link on the host:

    NAS_VAAI2

This is a rather simple example but it demonstrates how the primitive operates by offloading the I/O tasks to the VNXe array.

    Note: If you are accessing the NFS datastore directly via the datastore browser for Copy/Paste functionality then you will not see any benefit from VAAI. This is because the datastore browser has its own API and does not use the internal VMkernel Data Mover or VAAI.

VNXe CPU performance stats during the first Storage vMotion with VAAI enabled display approximately 20% Storage Processor utilization; without VAAI enabled, CPU utilization is approximately 70%:

    NAS_VAAI3

VNXe network performance stats display no network traffic with VAAI enabled; without VAAI, both read and write on SPA use approximately 70 MB/s of bandwidth each:
    NAS_VAAI4

    Note: For the ‘Full File Clone’ primitive to perform the offload during an SVMotion the VM needs to be powered off for the duration of the SVMotion.

    See also Cormac Hogan’s blog post: VAAI Comparison – Block versus NAS

    EMC VNXe – Configuring E-Mail Alerts Via SMTP

The VNXe can send e-mail alerts of system events, via a specified SMTP server, when it encounters alert or error conditions.

    View and Configure SMTP Server Settings

To view the IP addresses of the SMTP servers, issue the following command:

    uemcli -d VNXe_IP -u Local/admin -p password /net/smtp show
    VNXE_SMTP1

    As you can see no IP address has been configured at present.

    The following command sets the IP address for the default SMTP server that the system will use:

    uemcli -d VNXe_IP -u Local/admin -p password /net/smtp -id default set -addr 192.168.101.110
    VNXE_SMTP4
    Note: The system uses the first IP address you specify.

    View Alert Settings

    View the settings for how the system handles alerts:
    uemcli -d VNXe_IP -u Local/admin -p password /event/alert/conf show
    VNXE_SMTP2

    As you can see no Alerts have been configured at present.

    Configure E-Mail Alert Settings

    Configure the settings for how the system handles alerts.

-emailFromAddr Type the e-mail address the system will use as the FROM address. This address will appear in the FROM field of the recipient’s e-mail application.
-emailToAddrs Type a comma-separated list of e-mail addresses to which the system will send alerts.
-emailSeverity Specify the minimum severity of alerts the system will send as e-mails. Value is critical, error, warning, or info.

    The following command configures the alert settings:
    uemcli -d VNXe_IP -u Local/admin -p password /event/alert/conf set -emailToAddrs USER1@Domain.com,USER2@Domain.com -emailSeverity warning
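
If a custom FROM address is required, the -emailFromAddr qualifier described above can be added to the same command (the FROM address below is just an example):
uemcli -d VNXe_IP -u Local/admin -p password /event/alert/conf set -emailFromAddr VNXe01@Domain.com -emailToAddrs USER1@Domain.com,USER2@Domain.com -emailSeverity warning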

uemcli -d VNXe_IP -u Local/admin -p password /event/alert/conf show
VNXE_SMTP5

Configuring via Unisphere
    VNXE_SMTP6

    EMC VNXe Configuration Using Unisphere CLI (Part 2)

This is the second part in the series on configuring the VNXe via the command line. Here I will detail the steps involved in creating both NFS and iSCSI datastores. The configuration steps outlined in Part 2 are the following:

    • LACP Configuration
    • Create the Network Interface for NFS
    • NFS Shared Folder Server Configuration
    • Create NFS datastores
    • Creating iSCSI Interfaces/Nodes/Datastores

    LACP Configuration

Link aggregation lets you combine physical ports on an SP into a single logical port. It is possible to use up to 4 ports on an SP. If your system has two SPs and you link two physical ports, the same ports on both SPs are linked for redundancy. In this example we will link port 2 and port 3; the system creates a link aggregation for these ports on SP A and a matching link aggregation on SP B. Each link aggregation is identified by an ID. Link aggregation has the following advantages:

    • Increased throughput since two physical ports are linked into one logical port.
    • Load balancing across linked ports
    • Redundant ports

    The following command shows the existing port settings:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/port show

    Configure LACP for Ethernet Ports 2 and 3:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/la create -ports eth2_SPA,eth3_SPA -mtuSize 9000

    The following command shows the link aggregations on the system:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/la show

Create the Network Interface for NFS

This creates the network interface for controlling access to NFS file storage. You assign the interface to a Shared Folder Server (next step below). iSCSI interfaces are used for controlling access to iSCSI storage and are assigned to the iSCSI nodes (shown below).

The system configures each interface on an SP port. You have the option of indicating which SP port the interface will use, either a physical port or a link aggregation port. You also have the option of specifying a virtual LAN (VLAN) ID for communicating with VLAN networks. Each interface is identified by an ID.

Create a network interface for the NFS traffic. In this example the interface is created on eth2_SPA (one of the aggregated ports configured above) and receives the ID if_0; a VLAN ID can also be specified with the -vlanId qualifier, as shown later for the iSCSI interfaces:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/if create -port eth2_SPA -ipv4 static -addr 10.16.17.20 -netmask 255.255.255.0 -gateway 10.16.17.254

    The following command displays all interfaces on the system:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/if show

    NFS Shared Folder Server Configuration

Now we will create an NFS Shared Folder Server. Once the server is created, we can create the NFS shares and use the ID of the Shared Folder Server to associate each share with it.

    The following command creates a Shared Folder Server with these settings:

    • Name is NFS-SF
    • Associated to interface if_0
    • The server receives the ID file_server_0

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/nas/server create -name "NFS-SF" -enableNFS yes -if if_0

    Show details:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/nas/server show

    Create NFS datastores

Create an NFS datastore and assign it to the NFS Shared Folder Server with these settings:

    • Named NFS-01
• Uses Shared Folder Server file_server_0
    • Uses the VMWARE-NFS storage pool
    • NFS datastore size is 200 GB
    • Host Access is root (Read/write root access to primary storage)
    • 40G is the amount of protection storage to allocate for the NFS datastore
    • The protection size, entered for the -protSize qualifier, is automatically adjusted in proportion with changes to the size of the primary storage

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /stor/prov/vmware/nfs create -name "NFS-01" -server file_server_0 -pool VMWARE-NFS -cached no -size 200G -defAccess root -protSize 40G -autoProtAdjust yes

    View details:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /stor/prov/vmware/nfs show

    Creating iSCSI Interfaces/Nodes/Datastores

The following commands create the network interfaces used by the iSCSI nodes, using VLAN ID 200. The interfaces receive the IDs if_2 (on SPA) and if_3 (on SPB) respectively:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/if create -port eth10_SPA -vlanId 200 -ipv4 static -addr 10.16.17.21 -netmask 255.255.255.0 -gateway 10.16.17.254

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/if create -port eth10_SPB -vlanId 200 -ipv4 static -addr 10.16.17.22 -netmask 255.255.255.0 -gateway 10.16.17.254

The following command creates the first iSCSI node with these settings:

    • Alias is ISCSIA-21
    • Network interface if_2 assigned

    The iSCSI node receives ID iSCSI_node_0:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/iscsi/node create -alias ISCSIA-21 -if if_2

    Create the second iSCSI node with these settings:

    • Alias is ISCSIB-22
    • Network interface if_3 assigned

    The iSCSI node receives ID iSCSI_node_1:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/iscsi/node create -alias ISCSIB-22 -if if_3

The following command lists all iSCSI nodes on the system:
    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/iscsi/node show

     Creating iSCSI Datastores

Check the ESXi host vdiskhost IDs to use when assigning the datastores:

    uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /remote/host show -detail

    We can now create the iSCSI datastores:

Create an iSCSI datastore from the Performance pool and assign it to the ESXi hosts with IDs 1001 and 1002:

uemcli -d 10.0.0.1 -u local/admin -p Password#123 /stor/prov/vmware/vmfs create -name "iSCSI-LUN01" -node iscsi_node_0 -pool performance -size 200G -thin yes -vdiskHosts "1001,1002"

Create an iSCSI datastore from the Capacity pool and assign it to the ESXi hosts with IDs 1003 and 1004:

uemcli -d 10.0.0.1 -u local/admin -p Password#123 /stor/prov/vmware/vmfs create -name "iSCSI-LUN02" -node iscsi_node_1 -pool capacity -size 200G -thin yes -vdiskHosts "1003,1004"
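
To confirm the new datastores and their host assignments, they can then be listed (assuming the show action follows the same pattern as the NFS commands earlier in this series):
uemcli -d 10.0.0.1 -u local/admin -p Password#123 /stor/prov/vmware/vmfs show -detail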