EMC VNXe Configuration Using Unisphere CLI (Part 3)

This is the third part in the series on configuring the VNXe via the command line. Here I will detail the steps involved in adding VMware ESXi hosts, presenting NFS datastores to those hosts, and setting access rights:

  • Add VMware ESXi Hosts
  • Add VNXe NFS Volumes to VMware ESXi Hosts
  • Setting Access Rights

Note: VMware networking and the Nexus port channels must already be configured at this stage. See below for example Nexus vPC configurations.

Add VMware ESXi Hosts
Use the ESXi management address to add the two ESXi hosts as follows:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx create -addr 192.168.105.10 -username root -passwd Password

uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx create -addr 192.168.105.11 -username root -passwd Password

Expected Output:
ID = 1005
ID = 1007

Operation completed with partial success.
The create, refresh, or set operation has started. It will continue to add or update ESX host information in the background.

It takes approximately two minutes to add each host after receiving the output above. View the details of the connected ESXi hosts:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /remote/host show
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /virt/vmw/esx show

Output:
ESXi01 ID = 1005
Name = ESXi01
Address = 192.168.106.10,192.168.105.10,192.168.102.101
OS type = esx

ESXi02 ID = 1007
Name = ESXi02
Address = 192.168.106.11,192.168.105.11,192.168.102.102
OS type = esx

Three IP addresses are returned; in this case there is one address each for management, vMotion and NFS traffic. We are only concerned with applying access permissions at the NFS level. In this example the NFS addresses are 192.168.102.101 and 192.168.102.102.
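
If you need to confirm which vmkernel address on a host carries the NFS traffic, it can be listed directly on the host. A minimal check, assuming ESXi 5.x esxcli syntax:

esxcli network ip interface ipv4 get

This prints each vmkernel interface (vmk0, vmk1, ...) with its IPv4 address, so the NFS port group address can be matched against the list returned by the VNXe.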

Checking in the GUI we can confirm the hosts were added successfully:
[Image: VNXe_ESXi1]

Add VNXe NFS Volumes to VMware ESXi Hosts & Set Access
We first need to gather the Network File System (NFS) IDs:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs show
Output:
NFS_1: ID = app_1
Name = AMP-NFS-01

NFS_2: ID = app_2
Name = AMP-NFS-02

Add the NFS volumes app_1 and app_2 to the hosts using the VNXe IDs of the hosts (1005, 1007), and assign root access only to the NFS vmkernel address of each ESXi host:

  • ESXi01 [1005] vmkernel NFS port group IP 192.168.102.101
  • ESXi02 [1007] vmkernel NFS port group IP 192.168.102.102

uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs -id app_1 set -defAccess na -rootHosts 1005[192.168.102.101],1007[192.168.102.102]
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs -id app_2 set -defAccess na -rootHosts 1005[192.168.102.101],1007[192.168.102.102]
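
The -defAccess na qualifier denies access to any host not explicitly listed. If other hosts ever need non-root access to the same volume, the set command also accepts -roHosts and -rwHosts qualifiers per the Unisphere CLI guide; as a sketch (the host ID 1009 here is hypothetical):

uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs -id app_1 set -roHosts 1009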

Display the ESXi hosts connected to the VNXe NFS volumes and their respective access rights:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs show -detail
Output:
NFS_1: ID = app_1
Name = AMP-NFS-01
Server = file_server_0
Storage pool = AMP-NFS
Size = 2199023255552 (2.0T)
Root hosts = 1005[192.168.102.101], 1007[192.168.102.102]

NFS_2: ID = app_2
Name = AMP-NFS-02
Server = file_server_0
Storage pool = AMP-NFS
Size = 2199023255552 (2.0T)
Root hosts = 1005[192.168.102.101], 1007[192.168.102.102]

[Image: VNXe_ESXi2]

[Image: VNXe_ESXi3]

Example Cisco Nexus vPC Configs with VNXe 10GbE Interfaces

[Image: VPC_VNXe]

NEXUS SWITCH 'A' VPC CONFIG:
interface Ethernet1/25
  description VNXe3300-SPA-Port1-10Gbe
  switchport mode trunk
  switchport trunk allowed vlan 102
  spanning-tree port type edge trunk
  flowcontrol receive on
  channel-group 41 mode active

interface Ethernet1/26
  description VNXe3300-SPB-Port1-10Gbe
  switchport mode trunk
  switchport trunk allowed vlan 102
  spanning-tree port type edge trunk
  flowcontrol receive on
  channel-group 42 mode active

interface port-channel41
  description Port_Channel_To VNXe_SPA-10Gbe-Ports
  switchport mode trunk
  switchport trunk allowed vlan 102
  spanning-tree port type edge trunk
  flowcontrol receive on
  no negotiate auto
  vpc 41

interface port-channel42
  description Port_Channel_To VNXe_SPB_10Gbe-Ports
  switchport mode trunk
  switchport trunk allowed vlan 102
  spanning-tree port type edge trunk
  flowcontrol receive on
  no negotiate auto
  vpc 42

NEXUS SWITCH 'B' VPC CONFIG:
interface Ethernet1/25
  description VNXe3300-SPA-Port2-10Gbe
  switchport mode trunk
  switchport trunk allowed vlan 102
  spanning-tree port type edge trunk
  flowcontrol receive on
  channel-group 41 mode active

interface Ethernet1/26
  description VNXe3300-SPB-Port2-10Gbe
  switchport mode trunk
  switchport trunk allowed vlan 102
  spanning-tree port type edge trunk
  flowcontrol receive on
  channel-group 42 mode active

interface port-channel41
  description Port_Channel_To VNXe_SPA_10Gbe-Ports
  switchport mode trunk
  switchport trunk allowed vlan 102
  spanning-tree port type edge trunk
  flowcontrol receive on
  no negotiate auto
  vpc 41

interface port-channel42
  description Port_Channel_To VNXe_SPB_10Gbe-Ports
  switchport mode trunk
  switchport trunk allowed vlan 102
  spanning-tree port type edge trunk
  flowcontrol receive on
  no negotiate auto
  vpc 42
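
Once applied, the port channels and vPCs can be verified from either switch with the standard NX-OS show commands:

show port-channel summary
show vpc

The member Ethernet ports should show as bundled and the vPCs as up before moving on to the VNXe interface configuration.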

Note the VNXe NFS interface 'if_0' must have the corresponding VLAN ID configured:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/if -id if_0 set -vlanId 102
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/if show

Output:
ID = if_0
Port = eth10_SPA
VLAN ID = 102

[Image: VNXe_ESXi6]
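
The ESXi side must tag the same VLAN on the NFS port group. On a standard vSwitch this can be set from the host command line (ESXi 5.x esxcli syntax; the port group name 'NFS' is an assumption for this example):

esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=102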

EMC VNXe – Troubleshooting NFS Connectivity

Step 1. Check the health status of the link aggregation:

uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/la show -detail

1: ID = la0_SPA
SP = SPA
Ports = eth2_SPA,eth3_SPA
Health state = OK (5)

2: ID = la0_SPB
SP = SPB
Ports = eth2_SPB,eth3_SPB
Health state = OK (5)
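
If the health state is not OK, verify that LACP is actually negotiating with the switch. On the Nexus side the neighbour state and counters can be inspected with:

show lacp neighbor
show lacp counters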

Step 2. Ensure the network interface for NFS is correctly configured:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/if show -detail

ID = if_2
Port = eth2_SPA
VLAN ID = 0
IPv4 mode = static
IPv4 address = 10.0.0.11
IPv4 subnet mask = 255.255.255.0
IPv4 gateway = 10.0.0.254
MAC address = 08:00:00:00:00:00
SP = SPA

Note: a VLAN ID of 0 means the interface is untagged; if the switch port trunks a tagged VLAN for NFS, the interface must be set accordingly (see the -vlanId example above).

Step 3. Check the health status and MTU value set on the ports:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/port show

ID = eth2_SPA
Role = frontend
SP = SPA
Supported types = iscsi, net
MTU size = 9000
Speed = 1 Gbps
Health state = OK (5)
Aggregated port ID = la0_SPA

ID = eth3_SPA
Role = frontend
SP = SPA
Supported types = iscsi, net
MTU size = 9000
Speed = 1 Gbps
Health state = OK (5)
Aggregated port ID = la0_SPA

ID = eth2_SPB
Role = frontend
SP = SPB
Supported types = iscsi, net
MTU size = 9000
Speed = 1 Gbps
Health state = OK (5)
Aggregated port ID = la0_SPB

ID = eth3_SPB
Role = frontend
SP = SPB
Supported types = iscsi, net
MTU size = 9000
Speed = 1 Gbps
Health state = OK (5)
Aggregated port ID = la0_SPB

For a more detailed analysis of the frontend or backend ports:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/port -role frontend show -detail

Step 4. Check that jumbo MTU is set correctly on the Cisco switches. Note the commands below apply to Catalyst (IOS) switches:
SwitchA# show system mtu
If a change is required, issue the command system mtu jumbo 9198, then save the configuration and reload.
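
On Nexus switches such as those in the vPC example above, jumbo frames are instead enabled globally through a network-qos policy. A sketch of the usual Nexus 5000-series approach (verify against your platform's documentation):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo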

Step 5. Check the shared folder server for enabled NFS and its interface ID:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/nas/server show

ID = file_server_2
Name = NFS_01
Health state = OK (5)
SP = SPA
CIFS enabled = no
NFS enabled = yes
Interface = if_2

Step 6. Run a ping test from the VNXe NFS interface if_2 to the ESXi NFS IP, and a vmkping from the ESXi NFS vmkernel port to the VNXe server.
Gather the ESXi host details:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /remote/host show -detail

ID = 1003
Name = ESXi_01
Type = host
Address = 10.0.0.50
OS type = esx

Ping the vmkernel of the ESXi host to ensure proper connectivity:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/util ping -srcIf if_2 -addr 10.0.0.50
Operation completed successfully.
Ping the NFS server from the vmkernel of the ESXi host, using a jumbo-sized payload with the don't-fragment bit set (8972 bytes of data plus 28 bytes of IP/ICMP headers equals the 9000-byte MTU):
vmkping -d -s 8972 10.0.0.11

A failure here normally implies a networking configuration issue; verify that the subnets, VLANs and any firewall configurations are correct.
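
If a standard ping succeeds but the jumbo-sized vmkping fails, the usual culprit is an MTU mismatch somewhere in the path. The MTU of each vmkernel interface can be confirmed on the host (ESXi 5.x esxcli syntax):

esxcli network ip interface list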

Step 7. Check the status and details of the NFS datastore:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/prov/vmware/nfs show -detail
ID = app_1
Name = NFS-01
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
Server = file_server_2
Storage pool = NFS-01
Size = 858993459200 (800.0G)
Size used = 241740808192 (225.1G)
Maximum size = 4417404272640 (4.0T)
Thin provisioning enabled = no
Cached = no
Current allocation = 858993459200 (800.0G)
Protection size = 42949672960 (40.0G)
Protection size used = 0
Maximum protection size = 17592184995840 (16.0T)
Protection current allocation = 0
Auto-adjust protection size = yes
Local path = /NFS_01
Export path = /NFS_01
Default access = na
Root hosts = 1003[10.0.0.50]
Replication destination = no
Deduplication enabled = no
Creation time = 2014-06-11 09:55:00
Last modified time = 2014-06-11 09:55:00

Step 8. For any firewall in place, ensure the following port is open:
2049 – TCP/UDP – NFS – Required for NFS

Step 9. Add the NFS file system manually to the ESXi host.
Log on to vCenter or directly to the ESXi host, select the ESXi host, and from the Configuration tab choose Storage and then Add Storage:
1. Enter the IP address of the NFS server – 10.0.0.11
2. Enter the folder name, which is the export path from the VNXe – /NFS_01
3. Enter the datastore name that vCenter/ESXi will present – NFS-01
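
The same mount can also be performed from the host command line (ESXi 5.x esxcli syntax, using the export path shown in Step 7):

esxcli storage nfs add -H 10.0.0.11 -s /NFS_01 -v NFS-01
esxcli storage nfs list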

EMC VNXe Configuration Using Unisphere CLI (Part 2)

This is the second part in the series on configuring the VNXe via the command line. Here I will detail the steps involved in creating both NFS and iSCSI datastores. The configuration steps outlined in Part 2 are the following:

  • LACP Configuration
  • Create the Network Interface for NFS
  • NFS Shared Folder Server Configuration
  • Create NFS Datastores
  • Creating iSCSI Interfaces/Nodes/Datastores

LACP Configuration

Link aggregation lets you link physical ports on an SP into a single logical port. It is possible to use up to four ports on an SP. If your system has two SPs and you link two physical ports, the same ports on both SPs are linked for redundancy. In this example we will link port 2 and port 3; the system creates a link aggregation for these ports on SP A and a corresponding link aggregation on SP B. Each link aggregation is identified by an ID. Link aggregation has the following advantages:

  • Increased throughput, since two physical ports are linked into one logical port
  • Load balancing across the linked ports
  • Redundant ports

The following command shows the existing port settings:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/port show

Configure LACP for Ethernet ports 2 and 3:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/la create -ports "eth2_SPA,eth3_SPA" -mtuSize 9000

The following command shows the link aggregations on the system:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/la show
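
If the MTU needs to change later, the existing aggregation can be modified in place rather than recreated. A sketch per the Unisphere CLI guide's /net/la set syntax, assuming the aggregation received the ID la0_SPA:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/la -id la0_SPA set -mtuSize 9000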

Create the Network Interface for NFS

This creates the network interface for controlling access to the NFS file storage. You assign the interface to a Shared Folder Server (next step below). iSCSI interfaces are used for controlling access to iSCSI storage and are assigned to the iSCSI nodes (shown below).

The system configures each interface on an SP port. You have the option of indicating which port the interface will use: either a physical port or a link aggregation. You also have the option of specifying a virtual LAN (VLAN) ID for communicating with VLAN networks. Each interface is identified by an ID.

Create a network interface on the link aggregation we created above, using VLAN ID 100. The interface receives the ID if_0:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/if create -port la0_SPA -vlanId 100 -ipv4 static -addr 10.16.17.20 -netmask 255.255.255.0 -gateway 10.16.17.254

The following command displays all interfaces on the system:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/if show

NFS Shared Folder Server Configuration

Now we will create the NFS Shared Folder Server. Once the server is created, we can create the NFS network shares and use the ID of the server to associate them with it.

The following command creates a Shared Folder Server with these settings:

  • Name is NFS-SF
  • Associated with interface if_0
  • The server receives the ID file_server_0

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/nas/server create -name "NFS-SF" -enableNFS yes -if if_0

Show details:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/nas/server show

Create NFS Datastores

Create an NFS datastore and assign it to the NFS Shared Folder Server with these settings:

  • Named NFS-01
  • Uses Shared Folder Server file_server_0
  • Uses the VMWARE-NFS storage pool
  • NFS datastore size is 200 GB
  • Host access is root (read/write root access to primary storage)
  • 40 GB is the amount of protection storage to allocate for the NFS datastore
  • The protection size, entered for the -protSize qualifier, is automatically adjusted in proportion with changes to the size of the primary storage

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /stor/prov/vmware/nfs create -name "NFS-01" -server file_server_0 -pool VMWARE-NFS -cached no -size 200G -defAccess root -protSize 40G -autoProtAdjust yes

View details:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /stor/prov/vmware/nfs show

Creating iSCSI Interfaces/Nodes/Datastores

The following commands create the network interfaces used by the iSCSI nodes, using VLAN ID 200. The interfaces receive the IDs if_2 and if_3 on SPA and SPB respectively:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/if create -port eth10_SPA -vlanId 200 -ipv4 static -addr 10.16.17.21 -netmask 255.255.255.0 -gateway 10.16.17.254

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/if create -port eth10_SPB -vlanId 200 -ipv4 static -addr 10.16.17.22 -netmask 255.255.255.0 -gateway 10.16.17.254

The following command creates the first iSCSI node with these settings:

  • Alias is ISCSIA-21
  • Network interface if_2 assigned

The iSCSI node receives the ID iscsi_node_0:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/iscsi/node create -alias ISCSIA-21 -if if_2

Create the second iSCSI node with these settings:

  • Alias is ISCSIB-22
  • Network interface if_3 assigned

The iSCSI node receives the ID iscsi_node_1:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/iscsi/node create -alias ISCSIB-22 -if if_3

List all iSCSI nodes on the system:
uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /net/iscsi/node show
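
On the ESXi side, the hosts discover these nodes by adding the two interface addresses as dynamic discovery targets. A sketch assuming ESXi 5.x esxcli syntax; the software iSCSI adapter name vmhba33 is an assumption and will vary per host:

esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.16.17.21:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.16.17.22:3260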

Creating iSCSI Datastores

Check the ESXi host vdisk host IDs to use when assigning the datastores:

uemcli -d 10.0.0.1 -u Local/admin -p Password#123 /remote/host show -detail

We can now create the iSCSI datastores.

Create an iSCSI datastore from the Performance pool and assign it to the ESXi hosts with IDs 1001 and 1002:

uemcli -d 10.0.0.1 -u local/admin -p Password#123 /stor/prov/vmware/vmfs create -name "iSCSI-LUN01" -node iscsi_node_0 -pool performance -size 200G -thin yes -vdiskHosts "1001,1002"

Create an iSCSI datastore from the Capacity pool and assign it to the ESXi hosts with IDs 1003 and 1004:

uemcli -d 10.0.0.1 -u local/admin -p Password#123 /stor/prov/vmware/vmfs create -name "iSCSI-LUN02" -node iscsi_node_1 -pool capacity -size 200G -thin yes -vdiskHosts "1003,1004"
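
Once the datastores are created and assigned, a rescan on each ESXi host should surface the new storage (ESXi 5.x esxcli syntax):

esxcli storage core adapter rescan --all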