Dell EMC ViPR 3.6 – VMAX Export Path Adjustment

A very useful new feature in ViPR 3.6 is the ability to increase or decrease the number of Fibre Channel paths on an existing VPLEX or VMAX export (host/cluster). The following example showcases increasing the path count for a VMAX-backed ViPR Export Group.

Note: If you want to adjust the paths for an export where there are pre-existing masking views for a host or cluster in the export group, first ingest any volumes exported to the host (see the example ViPR Ingest Procedure).

As can be seen from the following screen captures, the export group consists of 2x ESXi hosts, each with 2x initiators and each presently configured for 1x FC path per initiator. As part of this example we will double the total FC paths per host from 2 to 4.

Export Group named ‘EHC’ details:

[Screenshot: Export Group 'EHC' details]
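Once the operation completes, a quick way to confirm the result from the host side (a sketch, assuming ESXi shell access; the naa device identifier below is hypothetical) is to count the paths per VMAX device:

# count the FC paths for a given VMAX device (hypothetical naa ID)
esxcli storage core path list -d naa.60000970000196701234533030334535 | grep -c "Runtime Name"
# in this example the count should double from 2 to 4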


CISCO MDS 9148 – PortChannel Maximums

An issue I encountered recently relates to the number of FLOGIs achievable on a single Cisco MDS 9148 PortChannel running NX-OS Release 5.2(8e):

Cisco MDS NX-OS Release 5.2(x) maximum configuration limit: FLOGIs per PortChannel = 114

In this example we had a UCS environment with greater than 114 blades (128 blades to be exact). Surprisingly, given the stated limit of 114, FLOGI was completing successfully for 126 of the 128 UCS B-Series blades.

[Diagram: UCS to MDS PortChannel topology]

Example command outputs while running code version 5.2(8e) and exceeding the stated maximum limit:

MDS-9K-A# show logging logfile | include FLOGI
2015 Nov 9 00:57:05.182 MDS-9K-A %FLOGI-1-MSG_FLOGI_REJECT_FCID_ERROR: %$VSAN 10%$ [VSAN 10, Interface port-channel 10: mode[F]] (pwwn: 20:00:00:25:b5:05:XX:XX) FLOGI rejected – FCID allocation failed with error 0x401b0000.

MDS-9K-A# show flogi internal info | i key|Interface | i key p 1
FLOGI rejected – FCID allocation failed with error 0x401b0000

MDS-9K-A# show flogi database interface port-channel 10
Total number of flogi = 127

There were two options available to remedy this problem. The first was to split the PortChannel: create two PortChannels on each SAN fabric switch (4 PortChannels in total) and split the blades' HBAs between the two PortChannels (64 FLOGIs per PortChannel), giving us the ability to cater for our required 128 FLOGIs per fabric/switch, or 256 in total across both fabrics. In fact, this solution would theoretically allow for 228 FLOGIs per switch (114 per PortChannel) divided equally across both PortChannels per switch, or a theoretical maximum of 456 FLOGIs across both fabrics (Switch A & B). A sketch of this approach follows.
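For illustration only, the split might look like the following (the second PortChannel ID and member interfaces below are hypothetical; port mode and trunking settings would mirror the existing PortChannel):

conf t
! create the second PortChannel
interface port-channel 20
  switchport mode F
  channel mode active
! move half of the existing member ports out of port-channel 10
interface fc1/5
  shutdown
  no channel-group 10
  channel-group 20 force
  no shutdown
! repeat for each remaining port being moved, then rebalance the UCS uplinks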

The second option, and my preferred route, was to perform a code upgrade of the switch:

Cisco MDS NX-OS Release 6.2(x) maximum configuration limit: FLOGIs per PortChannel = 256
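A sketch of the upgrade steps (image names below are illustrative for an MDS 9148; verify against the 6.2(x) release notes and supported upgrade paths first):

copy ftp://FTP-Server-IP/m9100-s3ek9-kickstart-mz.6.2.9.bin bootflash:
copy ftp://FTP-Server-IP/m9100-s3ek9-mz.6.2.9.bin bootflash:
show incompatibility system bootflash:m9100-s3ek9-mz.6.2.9.bin
install all kickstart bootflash:m9100-s3ek9-kickstart-mz.6.2.9.bin system bootflash:m9100-s3ek9-mz.6.2.9.bin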

Output after upgrading to 6.2:
MDS-9K-A# show flogi database interface port-channel 10
Total number of flogi = 129
Success: all 128 HBAs are successfully logging into the MDS-A switch via a single PortChannel (the 129th FLOGI entry is the UCS Fabric Interconnect's own uplink login).

MDS-9K-A# show flogi database interface port-channel 11
Total number of flogi = 129
Success: all 128 HBAs are successfully logging into the MDS-B switch via a single PortChannel.

Also worth noting:

NX-OS 5.2.x
PortChannels and member ports in PortChannels = 16 PortChannels, with 16 member ports across all PortChannels. For example, you could have 16 PortChannels each with 1 member, 2 PortChannels with 8 members, or 1 PortChannel with 16 members.
[Reference: Cisco NX-OS 5.2 configuration limits]

NX-OS 6.2.x
PortChannels and member ports in PortChannels = 48 PortChannels, with a maximum of 16 member ports in each PortChannel.
[Reference: Cisco NX-OS 6.2 configuration limits]
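To check how many PortChannels and member ports are currently in use on a switch (these commands apply to both releases):

show port-channel usage
show port-channel summary
show port-channel database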

EMC XtremIO – VPLEX BackEnd Connectivity Considerations

This post will detail some general best practices for VPLEX back-end connectivity to an XtremIO storage array. As you will see from the diagrams below, each VPLEX director should have redundant physical connectivity to the XtremIO storage array across Fabric-A & B. Because XtremIO is an active/active array, each X-Brick Storage Controller FC port has access to all provisioned storage on the array. In essence, the objective is to balance each VPLEX director across the XtremIO Storage Controllers as evenly as possible, thus avoiding any bottlenecks between the VPLEX back-end and the XtremIO front-end. The configuration examples provided (though not limited to these) cover the following scenarios:

EXAMPLE 1: Single VPLEX Engine & Single XtremIO X-Brick
EXAMPLE 2: Single VPLEX Engine & Dual XtremIO X-Brick
EXAMPLE 3: Dual VPLEX Engine & Dual XtremIO X-Brick
Further examples to follow…

EXAMPLE 1: Single VPLEX Engine & XtremIO X-Brick
For this example the single-engine VPLEX system has 50% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E1-B1-FC00, and E1-B1-FC01. This allows for a 1:1 mapping of VPLEX->XtremIO ports, which equates to a total bandwidth of 32Gb/s between VPLEX->XtremIO. This design meets VPLEX HA requirements as each VPLEX director is zoned to both XtremIO SCs. The design also allows for future scalability of the XtremIO cluster, as the remaining VPLEX back-end ports (E1-A1-FC02, E1-A1-FC03, E1-B1-FC02, and E1-B1-FC03) can be used to upgrade to a dual X-Brick XtremIO configuration at a later stage (next example).

[Diagram: Single VPLEX Engine to single XtremIO X-Brick connectivity]

Cisco MDS-SERIES Zoning Configuration

The configuration steps below will detail:
◆ Creating aliases for the VPLEX back-end & XtremIO front-end ports
◆ Creating zones
◆ Creating zonesets
◆ Activating & committing the zoneset

[Screenshot: Single VPLEX / single X-Brick zoning configuration]
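As a minimal sketch of the zoning pattern (the device-alias names and pwwns below are hypothetical; the downloadable configuration covers the full port set):

conf t
device-alias database
  device-alias name VPLEX_E1_A1_FC00 pwwn 50:00:14:42:00:00:00:00
  device-alias name XIO_X1_SC1_FC1 pwwn 51:4f:0c:00:00:00:01:01
device-alias commit
! one zone per VPLEX BE port / XtremIO FE port pairing, as per the diagram
zone name Z_VPLEX_E1_A1_FC00__XIO_X1_SC1_FC1 vsan 10
  member device-alias VPLEX_E1_A1_FC00
  member device-alias XIO_X1_SC1_FC1
zoneset name vsan10_zs vsan 10
  member Z_VPLEX_E1_A1_FC00__XIO_X1_SC1_FC1
zoneset activate name vsan10_zs vsan 10
zone commit vsan 10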

Example1: Download Zoning Configuration

EXAMPLE 2: Single VPLEX Engine & Dual XtremIO X-Brick
For this example the single-engine VPLEX system has 100% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E1-A1-FC02, E1-A1-FC03 and E1-B1-FC00, E1-B1-FC01, E1-B1-FC02, E1-B1-FC03. This again allows for a 1:1 mapping of VPLEX->XtremIO ports, which equates to a total bandwidth of 64Gb/s between VPLEX->XtremIO. Each VPLEX director is zoned to all 4 XtremIO SCs for maximum HA and bandwidth:

[Diagram: Single VPLEX Engine to dual XtremIO X-Brick connectivity]

[Screenshot: Single VPLEX / dual X-Brick zoning configuration]

Example2: Download Zoning Configuration

EXAMPLE 3: Dual VPLEX Engine & Dual XtremIO X-Brick
For this example the dual-engine VPLEX system has 50% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E2-A1-FC00, E2-A1-FC01, and E1-B1-FC00, E1-B1-FC01, E2-B1-FC00, E2-B1-FC01. This again follows a 1:1 mapping of VPLEX->XtremIO ports, which equates to a total bandwidth of 64Gb/s between VPLEX->XtremIO. Using 50% of the available ports on each VPLEX director in a dual-dual configuration allows for future expansion of the XtremIO cluster to a quad configuration while still maintaining a 1:1 mapping from a dual VPLEX solution. Each VPLEX director is zoned to all 4 XtremIO SCs for maximum HA and bandwidth:

[Diagram: Dual VPLEX Engine to dual XtremIO X-Brick connectivity]

[Screenshot: Dual VPLEX / dual X-Brick zoning configuration]

Example3: Download Zoning Configuration

XtremIO Initiator Groups
Each IG can access all of the created storage volumes within the array once the storage is mapped to the Initiator Group. There is no requirement, as is the case with other arrays, to configure groups of array ports/initiators/volumes such as masking views or storage groups; with XtremIO you simply create the IG and map the required volumes to that IG (see earlier post "Creating Initiator Groups and Mapping LUNs").
Related XtremIO maximums (v4.x):

◆ Maximum volumes per IG = 2048 (in 3.0.x the limit is 1024)
◆ Maximum volumes presented to VPLEX = 4096
◆ Maximum volumes per cluster = 8192
◆ Maximum volume-to-IG mappings per cluster = 16,384

In most cases a single XtremIO Initiator Group will suffice for VPLEX->XtremIO connectivity, due to the fact that you may present up to a maximum of 2048 volumes to a single IG; this equates to 2048 XtremIO system mappings (one volume map counts as one mapping). In the event that greater than 2048 volumes are required, you may group the VPLEX back-end ports into 2 XtremIO Initiator Groups and map the volumes as follows: configure VPLEX back-end ports FC00 and FC01 across all directors into one IG, and ports FC02 and FC03 across all directors into a second IG. Essentially this type of configuration allows a VPLEX cluster to be seen as 2 independent hosts by the XtremIO cluster, allowing the user to theoretically provision the maximum allowed 4096 volumes from one XtremIO cluster to one VPLEX cluster! At a high level this is an example configuration:

Initiator Group1: assign ports FC00 and FC01 across all directors to one IG. Map VPLEX Volumes 1-2048 to IG1 = 2048 Mappings

Initiator Group2: assign ports FC02 and FC03 across all directors to a second IG. Map VPLEX Volumes 2049-4096 to IG2 = 2048 Mappings

At this stage we now have 4096 mappings and 4096 volumes presented to VPLEX, with 50% of volumes mapped to VPLEX BE ports FC00 and FC01 and the remaining 50% mapped to ports FC02 and FC03.

The above example allows scaling to the maximum limit of 4096 volumes, but if there is no requirement to scale beyond 2048 volumes then a single XtremIO IG may contain all 32 VPLEX BE ports.

This recommended best practice configuration would apply to any combination of XtremIO X-Bricks to VPLEX Engines.

Note (VPLEX aside): Mappings per cluster = 16,384. The only way we would ever reach the maximum of 16,384 mappings is if volumes were shared across multiple initiator groups. Example maximum mapping configuration:
2048 volumes assigned to IG1 & IG2 = 4096 mappings
2048 volumes assigned to IG3 & IG4 = 4096 mappings
2048 volumes assigned to IG5 & IG6 = 4096 mappings
2048 volumes assigned to IG7 & IG8 = 4096 mappings
TOTAL = 8192 volumes / 16,384 mappings

ViPR
While the examples above provide the associated Cisco zoning for their respective configurations, where ViPR is part of the solution you may choose either to complete this task manually as per the example scripts (which is seamless to ViPR) or to let ViPR handle the zoning configuration automatically. As an example, if you choose to allow ViPR to automatically configure the back-end zoning and you only require 50% of the available VPLEX BE ports at the time (depending on the VPLEX-XtremIO configuration and future scalability requirements), then you can manually tell ViPR to use only the FC00 & FC01 director ports; ViPR follows VPLEX-XtremIO best practices and chooses ports based on maximum redundancy.
Where the number of X-Bricks is less than the number of VPLEX engines, ViPR will automatically zone 2 paths per VPLEX director, allowing the flexibility to scale the number of X-Bricks as outlined in the examples above.
Another point to note: at present, in the case of a VPLEX-XtremIO solution, ViPR recognises only a single XtremIO IG, thus the 2048-volume limit applies when using ViPR.

Useful References:
EMC® VPLEX SAN Connectivity Implementation Planning and Best Practices:
http://www.emc.com/collateral/technical-documentation/h13546-vplex-san-connectivity-best-practices.pdf

CISCO MDS – Useful ‘Show’ Commands


CONFIG:
show startup-config
show running-config
show running-config diff
show run | include host|gateway|ntp|fcdomain|http|telnet|zoneset|vsan
show start | include host|gateway|ntp|fcdomain|http|telnet|zoneset|vsan
show install all status
show switchname
show wwn switch
show switch summary
show version
show cdp neighbors
show boot
show system internal flash
show snmp host
show ntp peers
show ssh server
show telnet server


Cisco MDS – Clear All Zoning Configuration

The steps below detail the process of clearing all zoning configuration on an MDS switch, including fcaliases, zonesets and zones. If you wish to remove individual zones from a zoneset then please see the post here: How To Remove Zones from an Active Zoneset

Note: Ensure this is a standalone switch; if it is connected to other switches in the fabric then you may potentially affect the entire fabric.
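A quick sanity check (a sketch, using the example VSAN below): list the domains in the fabric; a standalone switch should report only its own domain:

show fcdomain domain-list vsan 10
show topology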

1. First, determine the names of the active zonesets for each VSAN. In this example we will clear down all the zoning associated with the zoneset "vsan10_zs" on VSAN 10:

MDS-9148# show zoneset active | inc zoneset
zoneset name vsan10_zs vsan 10

2. Next you will need to deactivate the zoneset “vsan10_zs” on vsan 10:

MDS-9148# conf t
MDS-9148(config)# no zoneset activate name vsan10_zs vsan 10
Enhanced zone session has been created. Please ‘commit’ the changes when done.
MDS-9148(config)# zone commit vsan 10
Commit operation initiated. Check zone status

3. Remove the zoneset "vsan10_zs" from the configuration:

MDS-9148(config)# no zoneset name vsan10_zs vsan 10
MDS-9148(config)# zone commit vsan 10
MDS-9148(config)# show zoneset active
Zoneset not present

4. Clear all the zones from the database associated with vsan 10:

MDS-9148# clear zone database vsan 10
Enhanced zone session has been created. Please ‘commit’ the changes when done.
MDS-9148(config)# zone commit vsan 10
MDS-9148(config)# show zone
Zone not present
MDS-9148# show fcalias
Alias not present

5. Clear any device alias entries:
MDS-9148(config)# clear device-alias database
MDS-9148(config)# device-alias commit
MDS-9148(config)# show device-alias database

6. Exit config mode, save your running config to startup and reload the switch:

MDS-9148(config)# exit
MDS-9148# copy run startup-config
[########################################] 100%
Copy complete, now saving to disk (please wait)…
MDS-9148# reload
This command will reboot the system. (y/n)? [n] y

Then apply your new zoning configuration:
copy ftp://FTP-Server-IP/Zoning.cfg system:running-config

EXAMPLE

#### FABRIC A ####
show zoneset active | inc zoneset

conf t
no zoneset activate name vsan10_zs vsan 10
zone commit vsan 10

no zoneset name vsan10_zs vsan 10
zone commit vsan 10
show zoneset active

clear zone database vsan 10
zone commit vsan 10
show zone
show fcalias

clear device-alias database
device-alias commit
show device-alias database

exit
copy run startup-config

#### FABRIC B ####
show zoneset active | inc zoneset

conf t
no zoneset activate name vsan11_zs vsan 11
zone commit vsan 11

no zoneset name vsan11_zs vsan 11
zone commit vsan 11
show zoneset active

clear zone database vsan 11
zone commit vsan 11
show zone
show fcalias

clear device-alias database
device-alias commit
show device-alias database

exit
copy run startup-config

## Then apply your new zoning configuration: ##
copy ftp://10.10.10.1/ZoningA.cfg system:running-config
show zoneset active
copy run start
copy ftp://10.10.10.1/ZoningB.cfg system:running-config
show zoneset active
copy run start

If prompted for ‘vrf’ the default entry is ‘management’:
Enter vrf (If no input, current vrf ‘default’ is considered): management

Cisco MDS – How To Remove Zones from an Active Zoneset

1. First we need to know the specific names of the zones that we intend to delete. To gather the full list of zone members within a zoneset, run show zoneset vsan xx. The output will return all of the member names for the zoneset; the output can be reduced if you know the naming conventions associated with the hosts. For example, if the zone names begin with V21212Oracle-1, then issuing the command show zoneset brief | include V21212Oracle-1 will return (in this case) all the zones associated with Oracle-1:
[Screenshot: show zoneset brief | include V21212Oracle-1 output]

2. To view the active zones for Oracle-1 within the zoneset: show zoneset active | include V21212Oracle-1
[Screenshot: show zoneset active | include V21212Oracle-1 output]

3. Example of removing half the zones (paths) associated with host Oracle-1 from the active zoneset vsan10_zs:
config t
zoneset name vsan10_zs vsan 10
no member V21212Oracle-1_hba1-VMAX40K_9e0
no member V21212Oracle-1_hba1-VMAX40K_11e0
no member V21212Oracle-1_hba2-VMAX40K_7e0
no member V21212Oracle-1_hba2-VMAX40K_5e0

4. Re-activate the zoneset vsan10_zs after removing the specified zoneset members:
zoneset activate name vsan10_zs vsan 10
zone commit vsan 10

5. Finally, remove the zones from the configuration:
no zone name V21212Oracle-1_hba1-VMAX40K_9e0 vsan 10
no zone name V21212Oracle-1_hba1-VMAX40K_11e0 vsan 10
no zone name V21212Oracle-1_hba2-VMAX40K_7e0 vsan 10
no zone name V21212Oracle-1_hba2-VMAX40K_5e0 vsan 10
zone commit vsan 10
end
copy run start

Confirm the configuration contains the correct active zoning:
show zoneset brief | include V21212Oracle-1
show zoneset active | include V21212Oracle-1

[Screenshot: verification of remaining Oracle-1 zoning]

CISCO MDS – Verifying VNX & VMAX Connectivity

Sometimes you may encounter an issue where a VNX/VMAX front-end port gets cabled to the wrong MDS switch port, and in fact the port description applied to the MDS port is incorrect. In this case it is invaluable to have a command to verify which VNX/VMAX port is connected to the MDS. There are many situations where this command is useful; for example, you may not have the ability to do a physical check as the DC is remote, and you need to ensure the interface description you are assigning to the MDS FC port is correct and the resulting zoning configurations are accurate. Confirming these connections remotely through a command on the MDS is very beneficial in these situations; otherwise you may end up zoning to ports you did not design for.

VMAX Example

So let's take an example: the design and expectation here is to have VMAX port 9G:1 connected to MDS FC interface 2/37, and the interface was given the relevant description:

[Screenshot: MDS interface description for fc2/37]

From this result we can see that the port was labeled as per design as VMAX 9G:1. Now we need to confirm this is the actual port connected to FC2/37.

To analyse the connectivity of a specific interface we first need to retrieve the FCID for this port:
show interface fc2/37

[Screenshot: show interface fc2/37 output]

Now that we know the FCID is 0x010440, we can run our magic Cisco command to verify which VMAX FA port is actually connected to MDS port FC2/37:

show fcns database fcid 0x010440 detail vsan 10

Note: FCNS = Fibre Channel Name Server.
[Screenshot: show fcns database fcid 0x010440 detail vsan 10 output]

From the output we can confirm that there is a problem; the expected VMAX port was 9G:1 but in fact 7G:1 is the VMAX port patched to FC2/37 (SYMMETRIX::000195701570::SAF- 7gB::FC::5876_229). Thus we either update the description of the interface or have the correct VMAX port patched.

To modify the description:
conf t
interface fc2/37
switchport description VMAX20K-7g1
no shutdown

VNX Example

Interface FC1/25 as per design is connected to Service Processor ‘A2’ front-end port 2:
[Screenshot: MDS interface description for fc1/25]

Running show interface fc1/25 in order to confirm port description and retrieve the FCID:
[Screenshot: show interface fc1/25 output]

Now that we know the FCID is 0x010500, we can query the FCNS database for details of what is connected at the other end of FC1/25:
show fcns database fcid 0x010500 detail vsan 10
[Screenshot: show fcns database fcid 0x010500 detail vsan 10 output]
From the output we can confirm the correct port is connected from the VNX.

Another method of confirming the correct port is connected is to gather the WWPN from the VNX/VMAX port and then run the show flogi database interface fc 1/25 command on the MDS:
[Screenshots: VNX port WWPN and show flogi database interface fc 1/25 output]

Reverse Lookup
From the VNX we can run a “naviseccli -h SP_IP port -list”:
[Screenshot: naviseccli port -list output]

From the output we can see that SPA_6(Logical Port) is connected to the MDS interface WWN 20:19:54:7f:ee:e2:9e:f8.
Given this information we can look up the interface port number by issuing: show fcs database | include 20:19:54:7f:ee:e2:9e:f8
[Screenshot: show fcs database output]
Thus we can conclude from this output that the VNX physical port SPA:2_2 is connected to MDS port FC1/25.

Note: If we want to look up the details of all the switch ports on the MDS, this is the command:
show fcns database detail