Dell EMC ViPR 3.6 – VMAX Export Path Adjustment

A very useful new feature in ViPR 3.6 is the ability to increase or decrease the number of Fibre Channel paths on an existing VPLEX or VMAX export (host/cluster). The following example showcases increasing the path count for a VMAX-backed ViPR Export Group.

Note: If you want to adjust the paths for an export where there are pre-existing masking views for a host or cluster in the export group, first ingest any volumes exported to the host. Example ViPR Ingest Procedure.

As can be seen from the following screen captures, the export group consists of two ESXi hosts, each with two initiators and each presently configured for one FC path per initiator. As part of this example we will double the total FC paths per host from two to four.

Export Group named ‘EHC’ details:

Screenshot: Export Group ‘EHC’ path details (one FC path per initiator)


EHC 4.1: ESXi Host Migration Process (vCenter Host Migration with ViPR Export Groups)

This procedure details the steps to execute in order to remove an ESXi host from a source ViPR Export Group and add it to an existing target Export Group.

Note: this procedure applies to ViPR 3.0 and below; ViPR 3.5 introduces a more automated procedure which I intend to cover in a future post.

Host Removal – remove ESXi Host from a ViPR Export Group

Please ensure the current ViPR configuration is in a known good state before proceeding and that the ViPR database is edited where required to bring it in sync with the live environment. Contact Dell EMC support for assistance with any ViPR database remediation required.

Note: Ensure the version of SMI-S complies with the EHC ESSM stated version.

The following steps detail the procedure for removing a host from a vSphere ESXi cluster in vCenter and utilizing the ViPR CLI to remove the same host from the cluster Export Group.

ViPR Controller – Ingest V(x)Block UCS Boot Volumes

The following provides guidance for a scenario where your ESXi host boots from SAN; for example, this is the standard configuration for Cisco UCS blades included in a Vblock/VxBlock. These boot volumes will most likely have been created outside of ViPR Controller, so it is a requirement to ingest these volumes prior to performing any host migration procedures, for example moving a UCS ESXi blade to another cluster in vCenter.

Note: not performing the ingestion will result in the removal of the boot volume from the ESXi host masking view if you initiate the migration process using ViPR commands (more on this later).

ViPR Controller: Exporting VMAX3 LUN fails with Error 12000

This ‘Error 12000’ may be encountered while exporting a VMAX3 LUN from ViPR Controller as a shared datastore to a specific ESXi cluster. The issue arises in scenarios where, for example, the ESXi hosts already have independent Masking Views created but no dedicated ESXi Cluster Masking View (a Tenant Pod in EHC terms).

You may ask why each host has its own dedicated masking view: think Vblock/VxBlock with UCS, where each UCS ESXi blade server boots from a SAN-attached boot volume presented from the VMAX array (Vblock/VxBlock 700 series = VMAX). Further detail on how the specific Masking Views are configured on a Vblock/VxBlock can be found here:

vmax-masking-views-for-esxi-boot-and-shared-cluster-volumes

Key point: dedicated Masking Views are required for VMware ESXi boot volumes in addition to Cluster Masking Views for shared VMFS datastores.

ViPR Controller – Configuring AD Authentication

The default built-in administrative accounts may not be granular enough to meet your business needs. If this is the case, adding an authentication provider such as Active Directory, which we highlight as part of this configuration, allows you to assign users or groups to specific roles.

The example configuration provided here was part of an Enterprise Hybrid Cloud solution.

EMC XtremIO – VPLEX BackEnd Connectivity Considerations

This post details some general best practices for VPLEX back-end connectivity to an XtremIO storage array. As you will see from the diagrams below, each VPLEX director should have redundant physical connectivity to the XtremIO storage array across Fabric A and Fabric B. Because XtremIO is an active/active array, each X-Brick storage controller FC port has access to all provisioned storage on the array. In essence, the objective is to balance each VPLEX director across the XtremIO storage controllers as evenly as possible, thus avoiding any connectivity bottlenecks between the VPLEX back end and the XtremIO front end. The configuration examples provided (but not limited to) cover the following scenarios:

EXAMPLE 1: Single VPLEX Engine & Single XtremIO X-Brick
EXAMPLE 2: Single VPLEX Engine & Dual XtremIO X-Brick
EXAMPLE 3: Dual VPLEX Engine & Dual XtremIO X-Brick
Further examples to follow…

EXAMPLE 1: Single VPLEX Engine & Single XtremIO X-Brick
For this example the single-engine VPLEX system has 50% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E1-B1-FC00, and E1-B1-FC01. This allows for a 1:1 mapping of VPLEX-to-XtremIO ports, which equates to a total bandwidth of 32Gb/s (4 x 8Gb/s) between VPLEX and XtremIO. The design meets VPLEX HA requirements, as each VPLEX director is zoned to both XtremIO storage controllers. It also allows for future scalability of the XtremIO cluster, as the remaining VPLEX back-end ports (E1-A1-FC02, E1-A1-FC03, E1-B1-FC02, and E1-B1-FC03) can be used to upgrade to a dual X-Brick XtremIO configuration at a later stage (next example).

Diagram: Single VPLEX Engine to Single XtremIO X-Brick connectivity

Cisco MDS-SERIES Zoning Configuration

The configuration steps below detail the following (a sample command sequence for a single port pairing is shown after the download link):
◆ Aliases for VPLEX back-end and XtremIO front-end ports
◆ Creating zones
◆ Creating zonesets
◆ Activating and committing the zoneset


Example1: Download Zoning Configuration
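
By way of illustration, a minimal Fabric-A sequence for one port pairing from Example 1 might look as follows. The VSAN number, alias/zone/zoneset names and WWPNs shown here are placeholders assumed for this sketch; the downloadable configuration above contains the complete mapping for all ports.

! define aliases for one VPLEX back-end port and one XtremIO storage controller port (placeholder WWPNs)
device-alias database
device-alias name VPLEX_E1-A1-FC00 pwwn 50:00:14:42:xx:xx:xx:xx
device-alias name XtremIO_X1-SC1-FC1 pwwn 51:4f:0c:xx:xx:xx:xx:xx
device-alias commit

! zone the VPLEX back-end port to the XtremIO storage controller port
zone name VPLEX_E1-A1-FC00_XtremIO_X1-SC1-FC1 vsan 10
member device-alias VPLEX_E1-A1-FC00
member device-alias XtremIO_X1-SC1-FC1

! add the zone to the fabric zoneset, then activate and (with enhanced zoning) commit
zoneset name VPLEX_XtremIO_FabricA vsan 10
member VPLEX_E1-A1-FC00_XtremIO_X1-SC1-FC1

zoneset activate name VPLEX_XtremIO_FabricA vsan 10
zone commit vsan 10

Repeat the same pattern for the remaining VPLEX back-end and XtremIO ports on each fabric so that every director is balanced across both storage controllers, as per the diagram above.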

EXAMPLE 2: Single VPLEX Engine & Dual XtremIO X-Brick
For this example the single-engine VPLEX system has 100% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E1-A1-FC02, E1-A1-FC03 and E1-B1-FC00, E1-B1-FC01, E1-B1-FC02, E1-B1-FC03. This again allows for a 1:1 mapping of VPLEX-to-XtremIO ports, which equates to a total bandwidth of 64Gb/s (8 x 8Gb/s) between VPLEX and XtremIO. Each VPLEX director is zoned to all four XtremIO storage controllers for maximum HA and bandwidth:

Diagram: Single VPLEX Engine to Dual XtremIO X-Brick connectivity


Example2: Download Zoning Configuration

EXAMPLE 3: Dual VPLEX Engine & Dual XtremIO X-Brick
For this example the dual-engine VPLEX system has 50% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E2-A1-FC00, E2-A1-FC01, and E1-B1-FC00, E1-B1-FC01, E2-B1-FC00, E2-B1-FC01. This again follows a 1:1 mapping of VPLEX-to-XtremIO ports, which equates to a total bandwidth of 64Gb/s between VPLEX and XtremIO. Using 50% of the available ports on each VPLEX director in a dual-dual configuration allows for future expansion of the XtremIO cluster to a quad configuration while still maintaining a 1:1 mapping from the dual VPLEX solution. Each VPLEX director is zoned to all four XtremIO storage controllers for maximum HA and bandwidth:

Diagram: Dual VPLEX Engine to Dual XtremIO X-Brick connectivity


Example3: Download Zoning Configuration

XtremIO Initiator Groups
Each IG can access all of the created storage volumes within the array once the storage is mapped to the Initiator Group. Unlike other arrays, there is no requirement to configure groupings of array ports/initiators/volumes such as masking views or storage groups; with XtremIO you simply create the IG and map the required volumes to that IG (see the earlier post “Creating Initiator Groups and Mapping LUNs”).
Related XtremIO maximums v4.x:

◆ Maximum volumes per IG = 2048 (3.0.x the limit is 1024)
◆ Maximum volumes presented to VPLEX = 4096
◆ Maximum volumes per cluster = 8192
◆ Maximum volume-to-Initiator-Group mappings per cluster = 16,384

In most cases a single XtremIO Initiator Group will suffice for VPLEX-to-XtremIO connectivity, because up to a maximum of 2048 volumes may be presented to a single IG, which equates to 2048 XtremIO system mappings (one volume map counts as one mapping). If more than 2048 volumes are required, you may group the VPLEX back-end ports into two XtremIO Initiator Groups and map the volumes as follows: configure VPLEX back-end ports FC00 and FC01 across all directors into one IG, and ports FC02 and FC03 across all directors into a second IG. This configuration essentially allows the VPLEX cluster to be seen as two independent hosts by the XtremIO cluster, theoretically allowing the maximum of 4096 volumes to be provisioned from one XtremIO cluster to one VPLEX cluster. At a high level this is an example configuration:

Initiator Group1: assign ports FC00 and FC01 across all directors to one IG. Map VPLEX Volumes 1-2048 to IG1 = 2048 Mappings

Initiator Group2: assign ports FC02 and FC03 across all directors to a second IG. Map VPLEX Volumes 2049-4096 to IG2 = 2048 Mappings

At this stage we have 4096 mappings and 4096 volumes presented to VPLEX, with 50% of the volumes mapped to VPLEX BE ports FC00 and FC01 and the remaining 50% mapped to ports FC02 and FC03.

The above example allows scaling to the maximum limit of 4096 volumes, but if there is no requirement to scale beyond 2048 volumes then a single XtremIO IG may contain all 32 VPLEX BE ports.

This recommended best practice configuration would apply to any combination of XtremIO X-Bricks to VPLEX Engines.

Note (VPLEX aside): mappings per cluster = 16,384. The only way to reach the maximum of 16,384 mappings is if volumes are shared across multiple initiator groups. Example maximum mapping configuration:
2048 volumes assigned to IG1 & IG2 = 4096 mappings
2048 volumes assigned to IG3 & IG4 = 4096 mappings
2048 volumes assigned to IG5 & IG6 = 4096 mappings
2048 volumes assigned to IG7 & IG8 = 4096 mappings
TOTAL = 8192 volumes / 16,384 mappings

ViPR
While the examples above provide the associated Cisco zoning for their respective configurations, where ViPR is part of the solution you may choose either to complete this task manually as per the example scripts (which is seamless to ViPR) or to let ViPR handle the zoning configuration automatically. For example, if you choose to allow ViPR to configure the back-end zoning automatically and you only require 50% of the available VPLEX BE ports at this time (depending on the VPLEX-XtremIO configuration and future scalability requirements), you can manually tell ViPR to use only the FC00 and FC01 director ports; ViPR follows VPLEX-XtremIO best practices and chooses ports based on maximum redundancy.
Where the number of X-Bricks is less than the number of VPLEX engines, ViPR will automatically zone two paths per VPLEX director, allowing the flexibility to scale the number of X-Bricks as outlined in the examples.
Another point to note: at present, for a VPLEX-XtremIO solution, ViPR recognises only a single XtremIO IG, so the 2048-volume limit applies when using ViPR.

Useful References:
EMC® VPLEX SAN Connectivity Implementation Planning and Best Practices:
http://www.emc.com/collateral/technical-documentation/h13546-vplex-san-connectivity-best-practices.pdf

EMC ViPR – Cisco IVR Cross-Connect Zoning (VPLEX)

Known ViPR&VPLEX Storage Provisioning Issue:
The following error may be encountered while provisioning a shared VPLEX distributed volume to an ESXi cluster using ViPR v2.x–2.3:

Screenshot: ViPR provisioning failure (Cisco MDS IVR database error)

This issue occurs during a ViPR storage provisioning task with VPLEX because ViPR incorrectly attempts to apply two simultaneous updates to the Cisco MDS IVR database: the MDS database is (correctly) locked by the first task, and the second task times out, resulting in a failed ViPR provisioning process. The tasks should instead be executed sequentially, allowing each task to complete and commit its changes to the IVR database, thereby releasing the lock once the commit is successful. Only once the database lock is removed may the subsequent task execute against the database.

Workaround:
Executing an exclusive storage provisioning order from the ViPR catalog for a single ESXi host works perfectly, including automatically creating the required cross-connect zoning, because the single workflow performs its MDS IVR database updates sequentially. During the single-host exclusive storage provisioning task, ViPR creates the necessary initiators, storage views and IVR zones (both local and cross-connect) for that host. However, a shared storage provisioning task to an ESXi cluster fails in a single catalog order, and it will also fail if two exclusive storage provisioning orders are executed at the same time. In summary, the workaround is to execute an exclusive storage provisioning order for each host in the cluster individually, one at a time. Once this is complete, each host has a volume presented, and VPLEX has the correct initiators and storage views created by ViPR; you may then create a new distributed LUN for the whole ESXi cluster. ViPR simply adds the new distributed volumes to the existing storage views in VPLEX (no zoning takes place when you run the distributed-device creation, thus no locking). Once you have a working distributed volume for all of the hosts, you may then remove the exclusive volumes and everything should function accordingly. Ensure that all of the required zoning (including IVR zones) is configured correctly on all switches and that the ESXi hosts can see all associated paths.
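
Before re-running a failed order, it is also worth confirming that no stale IVR/CFS lock or pending session remains on the MDS switches. The following is a minimal check using standard NX-OS IVR/CFS commands (verify the exact syntax against your NX-OS release):

show cfs lock
Lists any application (such as ivr) currently holding a CFS fabric lock, along with the switch and user holding it.

show ivr pending-diff
Shows any uncommitted IVR changes left behind by the failed task.

ivr commit
Commits any valid pending IVR changes and releases the fabric lock.

ivr abort
Discards the pending IVR session and releases the lock (use with care).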

NOTE: ViPR engineering plans to enhance the zoning workflow with an additional step to obtain/monitor any IVR database locks before proceeding with the IVR zoning operations. This is targeted for the next ViPR release; I will provide updates to this post in due course.

Solution Example:
The diagram below depicts the connectivity requirements for implementing a ViPR storage provisioning solution with a VPLEX Metro configuration using cross-connect zoning:

Diagram: ViPR with VPLEX Metro and Cisco IVR cross-connect zoning (dual-site topology)

From the above diagram you can see that an ISL is in place for site-to-site connectivity. In this example configuration the ISL carries VPLEX FC WAN replication traffic (VPLEX FC WAN COM) over VSAN30 (Fabric A) and VSAN31 (Fabric B). VSAN30 is stretched between the Fabric-A switches on both sites, and VSAN31 is stretched between the Fabric-B switches for Site 1 and Site 2. VSAN30 and VSAN31 are used as transit VSANs for this example IVR configuration.

In order for ViPR v2.x to successfully execute the task of automatically creating the required cross-connect zoning, the following configuration needs to be in place (as per the example diagram above):

Site 1:
Fabric-A, VSAN10: associated interfaces|PC (even ESX HBAs of Site 1, VPLEX FE & BE, and PC30) added as members to VSAN10.
Fabric-B, VSAN11: associated interfaces|PC (odd ESX HBAs of Site 1, VPLEX FE & BE, and PC31) added as members to VSAN11.
Site 2:
Fabric-A, VSAN20: associated interfaces|PC (even ESX HBAs of Site 2, VPLEX FE & BE, and PC30) added as members to VSAN20.
Fabric-B, VSAN21: associated interfaces|PC (odd ESX HBAs of Site 2, VPLEX FE & BE, and PC31) added as members to VSAN21.

Site1 – Site2:
Fabric-A: VSAN30 used as a transit vsan over Port-channel 30.
Fabric-B: VSAN31 used as a transit vsan over Port-channel 31.

A prerequisite for ViPR to successfully create the cross-connect zoning automatically as part of the provisioning workflow is to manually create an IVR zone on Fabric A connecting VSAN 10 and VSAN 20, and an IVR zone on Fabric B connecting VSAN 11 and VSAN 21 (example IVR zones are provided below).

In the case of ViPR v2.2 an additional prerequisite task is required: stretching the VSANs between sites. As per this example, VSAN20 is added to switch-A on Site 1 and, vice versa, VSAN10 is added to switch-A on Site 2; the same is repeated for the Fabric-B switches. No local interfaces are assigned to these dummy VSANs; essentially, VSAN20 is created without any members on switch-A of Site 1, and so on. This is done for all respective VSANs, as can be seen in the example configuration provided below. As part of the VSAN stretch, ensure the allowed VSANs are added to the respective port-channels:

Port-Channel 30 Allowed VSAN 10,20,30
Port-Channel 31 Allowed VSAN 11,21,31

Once the VSANs are stretched across the sites as per the ViPR v2.2 prerequisite, ViPR will automatically create the required IVR zones as part of the provisioning workflow.
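
As a minimal sketch of this ViPR v2.2 prerequisite on the Site 1 Fabric-A switch (the VSAN numbers and port-channel follow the example above; adjust to your own environment):

! create the remote site's VSAN locally with no member interfaces (the "dummy" stretched VSAN)
vsan database
vsan 20 name "VSAN20"

! allow the stretched VSAN across the cross-site port-channel
interface port-channel 30
switchport trunk allowed vsan add 20

Repeat the equivalent on the Site 2 Fabric-A switch (adding VSAN 10) and on the Fabric-B switches (VSANs 11 and 21 on port-channel 31).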

Note: the vArray should be set to Automatic Zoning for all of this to occur.

Example MDS Configuration
The following are example configuration steps to be completed on both sites’ MDS switches in order to enable Cisco Inter-VSAN Routing (IVR is the standard for cross-connect zoning with VPLEX Metro) and to enable automatic cross-connect zoning with ViPR:

FABRIC ‘A’ Switches

feature ivr
ivr nat
ivr distribute
ivr commit

system default zone distribute full
system default zone mode enhanced
ivr vsan-topology auto
zone mode enhanced vsan 10
zone mode enhanced vsan 20
zone mode enhanced vsan 30

vsan database
vsan 10 name “VSAN10”
vsan 20 name “VSAN20”
vsan 30 name “vplex1_wan_repl_vsan30”

interface port-channel 30
channel mode active
switchport mode E
switchport trunk allowed vsan 10
switchport trunk allowed vsan add 20
switchport trunk allowed vsan add 30
switchport description CROSS-SITE-LINK
switchport speed 8000
switchport rate-mode dedicated

Configuring FABRIC A switches fcdomain priorities:

Site1:
fcdomain priority 2 vsan 10
fcdomain domain 10 static vsan 10
fcdomain priority 100 vsan 20
fcdomain domain 22 static vsan 20
fcdomain priority 2 vsan 30
fcdomain domain 30 static vsan 30

Site2:
fcdomain priority 100 vsan 10
fcdomain domain 12 static vsan 10
fcdomain priority 2 vsan 20
fcdomain domain 20 static vsan 20
fcdomain priority 100 vsan 30
fcdomain domain 32 static vsan 30

Example: configuring Inter-VSAN Routing (IVR) zones connecting an ESXi host HBA0 over VSANs 10 and 20 from Site 1 to Site 2 and vice versa, utilising the transit VSAN30:

device-alias database
device-alias name VPLEXSITE1-E1_A0_FC02 pwwn 50:00:14:42:A0:xx:xx:02
device-alias name VPLEXSITE1-E1_B0_FC02 pwwn 50:00:14:42:B0:xx:xx:02
device-alias name VPLEXSITE2-E1_A0_FC02 pwwn 50:00:14:42:A0:xx:xx:02
device-alias name VPLEXSITE2-E1_B0_FC02 pwwn 50:00:14:42:B0:xx:xx:02
device-alias name ESXi1SITE1-VHBA0 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias name ESXi1SITE2-VHBA0 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias commit
device-alias distribute

ivr zone name ESXi1SITE1-VHBA0_VPLEXSITE2-E1_A0_FC02
member device-alias ESXi1SITE1-VHBA0 vsan 10
member device-alias VPLEXSITE2-E1_A0_FC02 vsan 20
ivr zone name ESXi1SITE1-VHBA0_VPLEXSITE2-E1_B0_FC02
member device-alias ESXi1SITE1-VHBA0 vsan 10
member device-alias VPLEXSITE2-E1_B0_FC02 vsan 20

ivr zone name ESXi1SITE2-VHBA0_VPLEXSITE1-E1_A0_FC02
member device-alias ESXi1SITE2-VHBA0 vsan 20
member device-alias VPLEXSITE1-E1_A0_FC02 vsan 10
ivr zone name ESXi1SITE2-VHBA0_VPLEXSITE1-E1_B0_FC02
member device-alias ESXi1SITE2-VHBA0 vsan 20
member device-alias VPLEXSITE1-E1_B0_FC02 vsan 10

ivr zoneset name IVR_vplex_hosts_XC_A
member ESXi1SITE1-VHBA0_VPLEXSITE2-E1_A0_FC02
member ESXi1SITE1-VHBA0_VPLEXSITE2-E1_B0_FC02

member ESXi1SITE2-VHBA0_VPLEXSITE1-E1_A0_FC02
member ESXi1SITE2-VHBA0_VPLEXSITE1-E1_B0_FC02

ivr zoneset activate name IVR_vplex_hosts_XC_A
ivr commit

FABRIC ‘B’ Switches

feature ivr
ivr nat
ivr distribute
ivr commit

system default zone distribute full
system default zone mode enhanced
ivr vsan-topology auto
zone mode enhanced vsan 11
zone mode enhanced vsan 21
zone mode enhanced vsan 31

vsan database
vsan 11 name “VSAN11”
vsan 21 name “VSAN21”
vsan 31 name “vplex1_wan_repl_vsan31”

interface port-channel 31
channel mode active
switchport mode E
switchport trunk allowed vsan 11
switchport trunk allowed vsan add 21
switchport trunk allowed vsan add 31
switchport description CROSS-SITE-LINK
switchport speed 8000
switchport rate-mode dedicated

Configuring FABRIC B switches fcdomain priorities:

Site1:
fcdomain priority 2 vsan 11
fcdomain domain 11 static vsan 11
fcdomain priority 100 vsan 21
fcdomain domain 23 static vsan 21
fcdomain priority 2 vsan 31
fcdomain domain 31 static vsan 31

Site2:
fcdomain priority 100 vsan 11
fcdomain domain 13 static vsan 11
fcdomain priority 2 vsan 21
fcdomain domain 21 static vsan 21
fcdomain priority 100 vsan 31
fcdomain domain 33 static vsan 31

Example: configuring Inter-VSAN Routing (IVR) zones connecting an ESXi host HBA1 over VSANs 11 and 21 from Site 1 to Site 2 and vice versa, utilising the transit VSAN31:

device-alias database
device-alias name VPLEXSITE1-E1_A0_FC03 pwwn 50:00:14:42:A0:xx:xx:03
device-alias name VPLEXSITE1-E1_B0_FC03 pwwn 50:00:14:42:B0:xx:xx:03
device-alias name VPLEXSITE2-E1_A0_FC03 pwwn 50:00:14:42:A0:xx:xx:03
device-alias name VPLEXSITE2-E1_B0_FC03 pwwn 50:00:14:42:B0:xx:xx:03
device-alias name ESXi1SITE1-VHBA1 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias name ESXi1SITE2-VHBA1 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias commit
device-alias distribute

ivr zone name ESXi1SITE1-VHBA1_VPLEXSITE2-E1_A0_FC03
member device-alias ESXi1SITE1-VHBA1 vsan 11
member device-alias VPLEXSITE2-E1_A0_FC03 vsan 21
ivr zone name ESXi1SITE1-VHBA1_VPLEXSITE2-E1_B0_FC03
member device-alias ESXi1SITE1-VHBA1 vsan 11
member device-alias VPLEXSITE2-E1_B0_FC03 vsan 21

ivr zone name ESXi1SITE2-VHBA1_VPLEXSITE1-E1_A0_FC03
member device-alias ESXi1SITE2-VHBA1 vsan 21
member device-alias VPLEXSITE1-E1_A0_FC03 vsan 11
ivr zone name ESXi1SITE2-VHBA1_VPLEXSITE1-E1_B0_FC03
member device-alias ESXi1SITE2-VHBA1 vsan 21
member device-alias VPLEXSITE1-E1_B0_FC03 vsan 11

ivr zoneset name IVR_vplex_hosts_XC_B
member ESXi1SITE1-VHBA1_VPLEXSITE2-E1_A0_FC03
member ESXi1SITE1-VHBA1_VPLEXSITE2-E1_B0_FC03

member ESXi1SITE2-VHBA1_VPLEXSITE1-E1_A0_FC03
member ESXi1SITE2-VHBA1_VPLEXSITE1-E1_B0_FC03

ivr zoneset activate name IVR_vplex_hosts_XC_B
ivr commit

Verification commands to check status of configuration:
show fcdomain domain-list
Verifies unique domain ID assignment. If a domain overlap exists, edit and verify the allowed-domains list or manually configure static, non-overlapping domains for each participating switch and VSAN.

show interface brief
Verifies if the ports are operational, VSAN membership, and other configuration settings covered previously.

show fcns database
Verifies the name server registration for all devices participating in the IVR.

show zoneset active
Displays zones in the active zone set; this should include the configured IVR zones. To filter for IVR zones: show zone active vsan X | grep -i ivr

show ivr fcdomain
Displays the IVR persistent fcdomain database.

show ivr internal
Shows the IVR internal troubleshooting information.

show ivr pending-diff
Shows the differences between the pending and configured IVR databases.

show ivr service-group
Shows the IVR service group configuration.

show ivr tech-support
Shows information that is used by your customer support representative to troubleshoot IVR issues.

show ivr virtual-domains
Shows IVR virtual domains for all local VSANs.

show ivr virtual-fcdomain-add-status
Shows IVR virtual fcdomain status.

show ivr vsan-topology
Verifies the configured IVR topology.

show ivr zoneset
Verifies the IVR zone set configuration.

show ivr zone
Verifies the IVR zone configuration.

clear ivr zone database
Clears all configured IVR zone information.
Note: Clearing a zone set erases only the configured zone database, not the active zone database.

Useful CISCO Docs:
Cisco IVR Troubleshooting
IVR Zones and Zonesets

Inter-VSAN Routing (IVR) definitions: An IVR zone is a set of end devices that are allowed to communicate across VSANs within their interconnected SAN fabric. An IVR path is a set of switches and Inter-Switch Links (ISLs) through which a frame from an end device in one VSAN can reach another end device in some other VSAN; multiple paths can exist between two such end devices. A transit VSAN is a VSAN that exists along an IVR path from the source edge VSAN of that path to the destination edge VSAN; in the example solution diagram above, VSAN 30 and VSAN 31 are transit VSANs. Distributing the IVR configuration using CFS: the IVR feature uses the Cisco Fabric Services (CFS) infrastructure to enable efficient configuration management and to provide a single point of configuration for the entire fabric in the VSAN.

Thanks to @HeagaSteve, Joni, Hans, @dclauvel & Sarav for providing valuable input.