EMC XtremIO: VMware ESXi Multi-Array Settings

The following table lists the recommended VMware vSphere ESXi host settings to apply when an ESXi host is connected to a single array, and the multi-array settings to apply when at least one of the arrays connected to the host is XtremIO.

The following information is referenced from EMC KB303782:
Recommended settings for VNX, VMAX, VPLEX, and XtremIO Colocation within VMware vSphere

[Table: XIO-MultiArray1 – recommended ESXi settings for single-array and multi-array configurations]

Notes: 

  1. Unless otherwise noted, the term VMAX refers to VMAX, VMAX3, and VMAX All Flash arrays.
  2. The FC Adapter policy IO Throttle Count can be set to the value specific to the individual storage array type if the array connections are segregated. If the storage arrays are connected using the same vHBAs, use the multi-array setting in the table.
  3. The value for Disk.SchedNumReqOutstanding can be set on individual LUNs, so the value used should be specific to the underlying individual storage array type (see the example commands below).
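
For illustration, on vSphere 5.5/6.0 the per-device value can be applied from the ESXi shell as follows (the naa ID below is a placeholder and 256 is only an example value; substitute the setting from the table above for your array combination):

# Pair device IDs with their array vendor to identify the XtremIO-backed LUNs
esxcli storage core device list | grep -E "^naa|Vendor:"
# Apply the outstanding I/O value to an individual device
esxcli storage core device set -d naa.514f0c5xxxxxxxxxx -O 256
# Verify the new value
esxcli storage core device list -d naa.514f0c5xxxxxxxxxx | grep -i outstanding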

 

Related Post:

EMC XtremIO – Setting Disk.SchedNumReqOutstanding On vSphere 5.5 & 6.0 (PowerCLI)

EMC XtremIO – Smart Zoning Example

The example provided is based on the following design:

  • Dual X-Brick Cluster
  • Cisco MDS Switches – Dual Fabric
  • VMware ESXi 4x Host Environment
  • 4 Smart Zoned Paths per ESXi Host (2 paths per Fabric)

ESXi in this scenario may represent either a standalone host or a 4-host cluster configuration (the scripts provided are based on four hosts). Balancing the ESXi hosts across the XtremIO Storage Controllers is key to the design, as it distributes the workload across all the available Storage Controller target ports. The following example depicts a 4-host configuration; if a fifth host is required, it is advised to follow a round-robin methodology (ESXi05 reuses the ESXi01 zoning logic, and so on).

The Smart Zoning feature is available on MDS 9000 series switches running NX-OS 5.2(6) or later; a sample configuration is sketched after the list below.

Some of the key benefits of using Smart Zoning:

  • Reduced configuration, simplifying the zoning process.
  • Simplified addition of new ESXi hosts – add new ESXi member Host to a Zone and reactivate.
  • Eliminates single-initiator to single-target zones.
  • Reduced Zoneset size – multiple initiators and multiple targets zoned together.
  • Reduced number of Access Control Entries (ACEs).
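
As a hedged illustration (the device-alias names are placeholders and only two hosts and two targets are shown; the full balanced 4-host layout follows the design above), a Fabric-A smart zone on the MDS could look something like this:

! Enable smart zoning on the VSAN (requires NX-OS 5.2(6) or later)
zone smart-zoning enable vsan 10
! A single zone can contain multiple initiators and multiple targets;
! each host HBA below receives 2 paths in this fabric
zone name ESXi0102_XTREMIO_FABA vsan 10
  member device-alias ESXi01_HBA0 init
  member device-alias ESXi02_HBA0 init
  member device-alias X1-SC1-FC1 target
  member device-alias X2-SC1-FC1 target
! Add the zone to the zoneset, then activate
zoneset name FABRIC_A_ZS vsan 10
  member ESXi0102_XTREMIO_FABA
zoneset activate name FABRIC_A_ZS vsan 10

Adding a new host is then simply a case of adding its HBA as another init member of the appropriate zone and reactivating the zoneset.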


EMC XtremIO – VPLEX BackEnd Connectivity Considerations

This post details some general best practices for VPLEX back-end connectivity to an XtremIO storage array. As the diagrams below show, each VPLEX director should have redundant physical connectivity to the XtremIO storage array across Fabric-A & B. Because XtremIO is an active/active array, each X-Brick Storage Controller FC port has access to all provisioned storage on the array. In essence, the objective is to balance each VPLEX director across the XtremIO Storage Controllers as evenly as possible, thus avoiding any connectivity bottlenecks between the VPLEX back-end and the XtremIO front-end. The configuration examples provided cover the following scenarios (other combinations follow the same principles):

EXAMPLE 1: Single VPLEX Engine & Single XtremIO X-Brick
EXAMPLE 2: Single VPLEX Engine & Dual XtremIO X-Brick
EXAMPLE 3: Dual VPLEX Engine & Dual XtremIO X-Brick
Further examples to follow…

EXAMPLE 1: Single VPLEX Engine & Single XtremIO X-Brick
For this example, the single-engine VPLEX system has 50% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E1-B1-FC00, and E1-B1-FC01. This allows a 1:1 mapping of VPLEX to XtremIO ports, which equates to a total bandwidth of 32Gb/s between VPLEX and XtremIO. The design meets VPLEX HA requirements, as each VPLEX director is zoned to both XtremIO SCs. It also allows for future scalability of the XtremIO cluster, as the remaining VPLEX back-end ports (E1-A1-FC02, E1-A1-FC03, E1-B1-FC02, and E1-B1-FC03) can be used to upgrade to a dual X-Brick XtremIO configuration at a later stage (next example).

VPLEX-XtremIO-Single-1 - New Page

Cisco MDS-SERIES Zoning Configuration

The configuration steps below detail:
◆ Creating aliases for the VPLEX back-end & XtremIO front-end ports
◆ Creating zones
◆ Creating zonesets
◆ Activating & committing the zoneset
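
As a rough, hedged sketch of those steps for Fabric-A (the WWPNs and alias names are placeholders; the downloadable configuration below contains the full port layout), one VPLEX back-end to XtremIO front-end pairing would be built along these lines:

! Fabric-A device aliases (placeholder WWPNs)
device-alias database
  device-alias name VPLEX_E1-A1-FC00 pwwn 50:00:14:42:xx:xx:xx:00
  device-alias name XIO_X1-SC1-FC1 pwwn 51:4f:0c:5x:xx:xx:xx:01
device-alias commit
! Zone a VPLEX back-end port to an XtremIO front-end port
zone name VPLEX_E1-A1-FC00_XIO_X1-SC1-FC1 vsan 10
  member device-alias VPLEX_E1-A1-FC00
  member device-alias XIO_X1-SC1-FC1
! Add the zone to the zoneset, then activate and commit
zoneset name FABRIC_A_ZS vsan 10
  member VPLEX_E1-A1-FC00_XIO_X1-SC1-FC1
zoneset activate name FABRIC_A_ZS vsan 10
zone commit vsan 10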

VPLEX&XTREMIO-SINGLE

Example1: Download Zoning Configuration

EXAMPLE 2: Single VPLEX Engine & Dual XtremIO X-Brick
For this example, the single-engine VPLEX system has 100% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E1-A1-FC02, E1-A1-FC03 and E1-B1-FC00, E1-B1-FC01, E1-B1-FC02, E1-B1-FC03. This again allows a 1:1 mapping of VPLEX to XtremIO ports, which equates to a total bandwidth of 64Gb/s between VPLEX and XtremIO. Each VPLEX director is zoned to all 4 XtremIO SCs for maximum HA and bandwidth:

VPLEX-Single-XtremIO-Dual-1 - New Page

VPLEX-SINGLE&XTREMIO-DUAL

Example2: Download Zoning Configuration

EXAMPLE 3: Dual VPLEX Engine & Dual XtremIO X-Brick
For this example, the dual-engine VPLEX system has 50% of the available back-end ports connected: E1-A1-FC00, E1-A1-FC01, E2-A1-FC00, E2-A1-FC01, and E1-B1-FC00, E1-B1-FC01, E2-B1-FC00, E2-B1-FC01. This again follows a 1:1 mapping of VPLEX to XtremIO ports, which equates to a total bandwidth of 64Gb/s between VPLEX and XtremIO. Using 50% of the available ports on each VPLEX director in a dual-dual configuration allows for future expansion of the XtremIO cluster to a quad configuration while still maintaining a 1:1 mapping from the dual-engine VPLEX solution. Each VPLEX director is zoned to all 4 XtremIO SCs for maximum HA and bandwidth:

VPLEX-Dual-XtremIO-Dual-1 - New Page

VPLEX&XTREMIO-DUAL

Example3: Download Zoning Configuration

XtremIO Initiator Groups
Each IG can access all of the created storage volumes within the array once the storage is mapped to that Initiator Group. Unlike other arrays, there is no requirement to configure groupings of array ports/initiators/volumes such as masking views or storage groups; with XtremIO you simply create the IG and map the required volumes to it (see the earlier post “Creating Initiator Groups and Mapping LUNs”).
Related XtremIO maximums v4.x:

◆ Maximum volumes per IG = 2048 (in 3.0.x the limit is 1024)
◆ Maximum volumes presented to VPLEX = 4096
◆ Maximum volumes per cluster = 8192
◆ Maximum volume-to-Initiator-Group mappings per cluster = 16,384

In most cases a single XtremIO Initiator Group will suffice for VPLEX-to-XtremIO connectivity, because up to a maximum of 2048 volumes may be presented to a single IG; this equates to 2048 XtremIO system mappings (one volume map counts as one mapping). If more than 2048 volumes are required, the VPLEX back-end ports can be split into two XtremIO Initiator Groups and the volumes mapped as follows: assign VPLEX back-end ports FC00 and FC01 across all directors to one IG, and ports FC02 and FC03 across all directors to a second IG. This configuration essentially allows the VPLEX cluster to be seen as two independent hosts by the XtremIO cluster, letting the user provision the maximum allowed 4096 volumes from one XtremIO cluster to one VPLEX cluster. At a high level, an example configuration is:

Initiator Group1: assign ports FC00 and FC01 across all directors to one IG. Map VPLEX Volumes 1-2048 to IG1 = 2048 Mappings

Initiator Group2: assign ports FC02 and FC03 across all directors to a second IG. Map VPLEX Volumes 2049-4096 to IG2 = 2048 Mappings

At this stage we have 4096 mappings and 4096 volumes presented to VPLEX, with 50% of the volumes mapped to half of the VPLEX BE ports (FC00 and FC01) and the remaining 50% mapped to the other half (FC02 and FC03).

The above example allows scaling to the maximum limit of 4096 volumes; if there is no requirement to scale beyond 2048 volumes, a single XtremIO IG may contain all 32 VPLEX BE ports.
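
As a rough XMCLI sketch of the two-IG layout (the object names are illustrative and the exact command and parameter names should be verified against the XMCLI help for your code level):

xmcli (tech)> add-initiator-group ig-name="VPLEX_IG1"
xmcli (tech)> add-initiator ig-id="VPLEX_IG1" initiator-name="E1-A1-FC00" port-address="50:00:14:42:xx:xx:xx:00"
(repeat add-initiator for the remaining FC00/FC01 director ports, then create "VPLEX_IG2" for the FC02/FC03 ports)
xmcli (tech)> map-lun vol-id="VPLEX_Vol_0001" ig-id="VPLEX_IG1"
(map volumes 1-2048 to IG1 and volumes 2049-4096 to IG2)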

This recommended best practice configuration would apply to any combination of XtremIO X-Bricks to VPLEX Engines.

Note (VPLEX aside): Mappings per cluster = 16,384. The only way we would ever reach the maximum of 16,384 mappings is if volumes were shared across multiple Initiator Groups. Example maximum mapping configuration:
2048 volumes assigned to IG1 & IG2 = 4096 mappings
2048 volumes assigned to IG3 & IG4 = 4096 mappings
2048 volumes assigned to IG5 & IG6 = 4096 mappings
2048 volumes assigned to IG7 & IG8 = 4096 mappings
TOTAL = 8192 volumes / 16,384 mappings

ViPR
While the examples above provide the associated Cisco zoning for their respective configurations, where ViPR is part of the solution you may choose either to complete this task manually as per the example scripts (which is seamless to ViPR) or to let ViPR handle the zoning configuration automatically. For example, if you allow ViPR to configure the back-end zoning automatically and only require 50% of the available VPLEX BE ports at the time (depending on the VPLEX-XtremIO configuration and future scalability requirements), you can tell ViPR to use only the FC00 & FC01 director ports; ViPR follows VPLEX-XtremIO best practices and chooses ports based on maximum redundancy.
Where the number of X-Bricks is less than the number of VPLEX engines, ViPR will automatically zone two paths per VPLEX director, allowing the flexibility to scale the number of X-Bricks as outlined in the examples above.
Another point to note: at present, in a VPLEX-XtremIO solution ViPR recognises only a single XtremIO IG, so a 2048-volume limit applies when using ViPR.

Useful References:
EMC® VPLEX SAN Connectivity Implementation Planning and Best Practices:
http://www.emc.com/collateral/technical-documentation/h13546-vplex-san-connectivity-best-practices.pdf

EMC XtremIO – 4.0 Maximums

The following details the maximum values common to all four X-Brick models and the maximums applicable to each specific X-Brick model. As of XtremIO Version 4.0 there are four X-Brick models to choose from:

1. 5TB Starter X-Brick
2. 10TB X-Brick
3. 20TB X-Brick
4. 40TB X-Brick

An XtremIO storage system has a Scale-Out architecture and can comprise either a single X-Brick or a cluster of multiple X-Bricks (2, 4, 6 or 8 X-Brick clusters):
XtremIO 4 MAX

Universal Maximums
Maximum values common to all 4 X-Brick models:
• Storage Controllers per X-Brick = 2
• A single XtremIO Management Station (XMS) can manage up to 8 clusters. XMS multi-cluster support is a new feature of 4.0 and allows clusters running 4.x code and above to be managed by a single XMS. Once an XtremIO 3.0 cluster is upgraded to 4.x, it can be added to a 4.x XMS managing multiple clusters.
• Initiators per cluster (FC or iSCSI) = 1024
If you consider a host has 2 initiators this would imply a max of 512 hosts.
• Initiators per Initiator Group = 64
If you consider a host has 2 initiators this would imply a max of 32 hosts per Initiator Group.
• Initiator Groups per cluster = 1024
• Volumes per cluster = 8192
• Number of Initiator Group mappings per Volume = 64
• Number of Volume mappings per Initiator Group = 2048

• Mappings per cluster (10 Volumes mapped to 10 Initiator Groups results in 100 mappings) = 16,384
Example: Maximum Mappings per cluster:
The only way we would ever reach the maximum of 16,384 mappings is if volumes were shared across multiple Initiator Groups. Example maximum mapping configuration:
2048 volumes assigned to IG1 & IG2 = 4096 mappings
2048 volumes assigned to IG3 & IG4 = 4096 mappings
2048 volumes assigned to IG5 & IG6 = 4096 mappings
2048 volumes assigned to IG7 & IG8 = 4096 mappings
TOTAL = 8192 volumes / 16,384 mappings

• Snapshots per production Volume = 512
• Consistency Groups = 512
• Volumes per Consistency Group = 256
• Consistency Groups per Volume = 4
• iSCSI portals per X-Brick = 16
• Physical iSCSI 10Gb/s Ethernet ports per X-Brick = 4
• iSCSI routes per cluster = 32
• Physical FC 8Gb/s ports per X-Brick = 4
• Largest block size supported = 4MB
• Maximum Volume size = 281.4TB / 256TiB (Starter X-Brick = 132TB / 120TiB)
• Maximum volumes presented to VPLEX = 4096

40TB X-Brick
25*1600 GB eMLC SSDs per X-Brick
• Number of X-Bricks per cluster = 8
• Raw Capacity = 40TB / 36.4TiB
• Usable physical capacity per X-Brick = 33.6TB / 30.55TiB (with no data reduction)
• Maximum logical capacity per X-Brick = 201.6TB / 183.3TiB (with the use of data reduction techniques such as Thin Provisioning, inline compression and inline deduplication. As you can see these figures are based on a 6:1 ratio (33.6TB * 6 = 201.6TB) and will vary depending on types of data sets residing on XtremIO.)

20TB X-Brick
25*800 GB eMLC SSDs per X-Brick
• Number of X-Bricks per cluster = 8
• Raw Capacity = 20TB / 18.2TiB
• Usable physical capacity per X-Brick = 16.7TB / 15.2TiB (with no data reduction)
• Maximum logical capacity per X-Brick = 100.2TB / 91.2TiB (with the use of data reduction techniques based on a 6:1 ratio (16.7TB * 6 = 100.2TB))

10TB X-Brick
25*400 GB eMLC SSDs per X-Brick
• Number of X-Bricks per cluster = 4
• Raw Capacity = 10TB / 9.1TiB
• Usable physical capacity per X-Brick = 8.33TB / 7.6TiB (with no data reduction)
• Maximum logical capacity per X-Brick = 50TB / 45.5TiB (with the use of data reduction techniques based on a 6:1 ratio (8.33TB * 6 = 50TB))

5TB Starter X-Brick
13*400 GB eMLC SSDs
A Starter X-Brick has 13 eMLC SSDs (a standard X-Brick has 25); it can be expanded to a standard 10TB X-Brick by adding 12 SSDs. Once expanded to a 10TB X-Brick, it may be scaled out like a standard 10TB X-Brick into two- and four-X-Brick clusters.
• Number of X-Bricks per cluster = 1
• Raw Capacity = 5.2TB / 4.7TiB
• Usable physical capacity per X-Brick = 3.6TB / 3.3TiB (with no data reduction)
• Maximum logical capacity per X-Brick = 21.5TB / 19.5TiB (with the use of data reduction techniques based on a 6:1 ratio (3.6TB * 6 ≈ 21.5TB))

Note: 1 kilobyte = 1000 bytes whereas 1 kibibyte = 1024 bytes (1TB = 1000^4 bytes and 1TiB = 1024^4 bytes).
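
For example, the 256TiB maximum volume size equates to 256 x 1024^4 bytes ≈ 281.5 x 1000^4 bytes, which is where the ~281.4TB decimal figure quoted above comes from.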

PERFORMANCE
Referenced from the following data sheet: EMC XTREMIO 4.0 SYSTEM SPECIFICATIONS
XtremIO 4 MAX2

EMC XtremIO – VMware VMFS LUN Expansion

In this post I will detail how to increase an XtremIO volume and extend the associated VMware VMFS volume.

As can be seen from the XtremIO management interface, the existing size of this XtremIO volume is 1TB:
XtremIO_EXP1

By right clicking the volume you are presented with an option to “Modify Volume”:
XtremIO_EXP4

The existing XtremIO and VMFS volume size is 1TB; this example will demonstrate increasing the volume size to 2TB:
XtremIO_EXP5

Note: you may also use the CLI option to increase an XtremIO volume:
modify-volume vol-id="Data_VOLUME1" vol-size="2000G"

Log in to vCenter and choose a host where the XtremIO volume is presented. Click on the ‘Configuration’ tab and, under ‘Hardware’, navigate to ‘Storage’. From here choose the option to ‘Rescan All’; this will scan the host bus for newly assigned storage:
XtremIO_EXP3
XtremIO_EXP6
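
If you prefer the command line, the same rescan can be performed from the ESXi shell (illustrative commands):

# Rescan all HBAs for new or resized devices
esxcli storage core adapter rescan --all
# Refresh VMFS volume information
vmkfstools -V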

Once the rescan completes, open the datastore properties; the additional 1TB of space will be visible under ‘Extents’. Choose the option to ‘Increase’:
XtremIO_EXP7

Select the XtremIO volume and hit next:
XtremIO_EXP8

The next window will provide the breakdown of the existing volume and the available free space by which we can increase the volume:
XtremIO_EXP9

Choose ‘Maximum available size’, which is 1TB in this example:
XtremIO_EXP10

Click ‘Finish’:
XtremIO_EXP11

The ‘Data_Volume1’ VMFS datastore has been successfully increased by 1TB of XtremIO storage:
XtremIO_EXP12

EMC XtremIO – Redeploying XMS (XtremIO Management Server)

If you ever lose your XtremIO Management Server (XMS), do not worry, as you can quite easily recover the XMS to its previous state. Even though the XMS is lost, any I/O to the cluster is unaffected, since the XMS is not in the data path; an XMS failure only affects configuration and monitoring of the XtremIO array, and production host I/O continues as normal. There may be different failure scenarios, since the XMS can be either physical or virtual; for example, in a virtual environment someone may accidentally delete the XMS VM entirely (yes, this can happen!).

These are the steps involved in redeploying your XtremIO XMS; it really is a straightforward process, since the database is stored on the Storage Controllers and, during recovery, the XMS syncs up with the controllers to gather all the existing configuration required to return your management console to working order.

The first step is to re-install the XMS image: for a physical XMS you may install the image via a USB flash drive, while for a virtual XMS you simply deploy the provided VMware OVA image. The next step is to upload the XMS software to the images directory of the XMS and log in using install mode:
XMS_Redeploy6

Once logged in to the XMS console with xinstall, perform the following sequence of menu selections:
1. Configuration
5. Perform XMS installation only
11. Run XMS Recovery

XMS_Redeploy5

Options to choose when running the “XMS Recovery”:

XMS_Redeploy3

Completion:
XMS_Redeploy4

EMC XtremIO – Changing Cluster, XMS & Storage Controller Name(s) & IP(s)

Change the XtremIO Cluster Name:
xmcli (tech)> show-clusters-info
Cluster-Name
XTREMIO_1234

xmcli (tech)> rename cluster-id="XTREMIO_1234" new-name="XTREMIO_4321"
Object XTREMIO_1234 [1] renamed to XTREMIO_4321

xmcli (tech)> show-clusters-info
Cluster-Name
XTREMIO_4321

Change XMS IP Address:
xmcli (tech)> modify-ip-addresses xms-ip-sn="192.168.0.10/24"

Change Storage Controller Name:
xmcli (tech)> show-storage-controllers
SC Name:
X1-SC1
X1-SC2
Example of renaming both storage controllers for X-Brick-1:
xmcli (tech)> rename sc-id="X1-SC1" new-name="XBrick1-SC1"
xmcli (tech)> rename sc-id="X1-SC2" new-name="XBrick1-SC2"

Change Storage Controller IP Address:
Using the ‘modify-ip-addresses’ command:
xmcli (tech)> modify-ip-addresses sc-ip-list=[sc-id=1 sc-ip-sn="192.168.0.1/24",sc-id=2 sc-ip-sn="192.168.0.2/24"]
Change the IPMI IP Address:
xmcli (tech)> modify-ip-addresses sc-ip-list=[sc-id=1 ipmi-ip-sn="192.168.1.1/24",sc-id=2 ipmi-ip-sn="192.168.1.2/24"]