EMC VNX2 – Drive Layout (Guidelines & Considerations)

Applies only to VNX2 Systems.

CHOICES made in relation to the physical placement of drives within a VNX can have an impact on how the VNX performs. The intention here is to shed some light on how best to optimise a VNX by placing drives in their best physical locations within the array. The guidelines here deal with optimising the back-end system resources. While these considerations and examples may help with choices around the physical location of drives, you should always work with an EMC certified resource when completing such an exercise.


Maximum Available Drive Slots
You cannot exceed the maximum slot count; doing so will result in drives becoming unavailable. Drive form factor and DAE type may be a consideration here to ensure you do not exceed the stated maximum. The maximum slot count therefore dictates the maximum number of drives, and the overall capacity, a system can support.

BALANCE is the key when designing the VNX drive layout:

Where possible, the best practice is to EVENLY BALANCE each drive type across all available back-end system BUSES. This results in the best utilisation of system resources and helps to avoid potential system bottlenecks. VNX2 has no restrictions around using or spanning drives across Bus 0 Enclosure 0.
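The even-balance guideline above amounts to simple round-robin arithmetic. A minimal sketch (an illustrative helper, not an EMC tool) of how many drives of one type each bus should receive:

```python
def drives_per_bus(drive_count, bus_count):
    """Spread identical drives as evenly as possible across back-end buses.

    Returns a list with one entry per bus. When the count does not divide
    evenly, the lower-numbered buses each take one extra drive, so no bus
    ever differs from another by more than one drive of that type.
    """
    base, extra = divmod(drive_count, bus_count)
    return [base + 1 if bus < extra else base for bus in range(bus_count)]
```

For the 2-bus example later in this post, `drives_per_bus(20, 2)` gives 10 drives per bus, matching the FAST VP layout shown there.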

These are rule-of-thumb figures which can be used as a guideline for each type of drive used in a VNX2 system. Throughput (IOPS) figures are based on small-block random I/O workloads, while bandwidth (MB/s) figures are based on large-block sequential I/O workloads.

Recommended Order of Drive Population:

1. FAST Cache
2. FAST VP (Flash)
3. SAS 15K
4. SAS 10K

Physical placement should always begin at Bus 0 Enclosure 0 (0_0), and the first drives to be placed are always the fastest drives, as per the above order. Start at the first available slot on each BUS and evenly balance the available Flash drives across the first slots of the first enclosure of each bus, beginning with the FAST Cache drives. This ensures that the Flash drives experience the lowest latency possible on the system and the greatest RoI is achieved.

FAST Cache drives are configured as RAID-1 mirrors and, again, it is good practice to balance the drives across all available back-end buses. The number of FAST Cache drives per back-end bus differs for each system, but ideally aim for no more than 8 drives per bus (including the SPARE). FAST Cache drives are extremely I/O intensive, and placing more than the recommended maximum per bus may cause I/O saturation on the bus.

Note: Do not mix different drive capacity sizes for FAST Cache, either use all 100GB or all 200GB drive types.
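The two FAST Cache constraints above (at most 8 drives per bus including the spare, and no mixing of 100GB and 200GB drives) can be expressed as a small sanity check. This is an illustrative sketch with a hypothetical data model, not EMC validation logic:

```python
def fast_cache_layout_ok(per_bus_counts, capacities_gb, max_per_bus=8):
    """Check a proposed FAST Cache layout against the guidelines.

    per_bus_counts -- FAST Cache drives on each bus, spare included
    capacities_gb  -- capacity of every FAST Cache drive in the layout
    """
    # All FAST Cache drives must share one capacity (all 100GB or all 200GB).
    if len(set(capacities_gb)) > 1:
        return False
    # No bus should carry more than the recommended maximum.
    return all(count <= max_per_bus for count in per_bus_counts)
```

For the 2-bus example later in this post (9 x 200GB drives: 4 on one bus, 4 plus the spare on the other), `fast_cache_layout_ok([4, 5], [200] * 9)` passes.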

Also for VNX2 systems there are two types of SSD available:
• ‘FAST Cache SSDs’ are single-level cell (SLC) Flash drives that are targeted for use with FAST Cache. These drives are available in 100GB and 200GB capacities and can be used both as FAST Cache and as TIER-1 drives in a storage pool.
• ‘FAST VP SSDs’ are enterprise multi-level cell (eMLC) drives that are targeted for use as TIER-1 drives in a storage pool (not supported as ‘FAST Cache’ drives). They are available in three capacities: 100GB, 200GB and 400GB.

More detailed post on FAST Cache: ‘EMC VNX – FAST Cache’

Drive form factor (2.5″ | 3.5″) is an important consideration. For example, you may have a 6-BUS system with 6 DAEs (one DAE per BUS) consisting of 2 x 2.5″ Derringer DAEs and 4 x 3.5″ Viper DAEs.

Best practice is to ensure 1 spare is available per 30 drives of each drive type. When there are drives of the same type in a VNX but with different speeds, form factors or capacities, these should ideally be placed on different buses.

Note: If the Vault drives (0_0_0 – 0_0_3) are 300GB in size then no spare is required for them, but if drives larger than 300GB are used and user LUNs are present on the Vault drives then a spare is required.
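The 1-spare-per-30 rule above is a simple ceiling division, applied per drive type. A minimal sketch:

```python
import math

def spares_needed(drive_count, ratio=30):
    """One hot spare per 30 drives of each type, rounded up."""
    return math.ceil(drive_count / ratio)
```

So 30 drives of a type need 1 spare, while 31 drives already need 2 — which is why layouts are often sized to land just under a multiple of 30 per type.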

While all unconfigured drives in a VNX2 array are available for use as hot spares, a specific set of rules is used to determine the most suitable drive to use as a replacement for a failed drive:

1. Drive Type: All suitable drive types are gathered.
2. Bus: Of the suitable drives, those contained within the same bus as the failing drive are preferred.
3. Size: Following on from the bus query, MCx will then select a drive of the same size, or if none is available, a larger drive.
4. Enclosure: MCx will then analyse the results of the previous steps to check whether the enclosure that contains the failing drive has a suitable replacement within the DAE itself.
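The four-step selection order above can be sketched as a filter plus a ranked sort. This is a toy model with a hypothetical drive dictionary (`type`, `bus`, `enclosure`, `size_gb`), not MCx code:

```python
def pick_spare(failed, candidates):
    """Pick a replacement drive following the order described above:
    matching type, same size or larger, preferring the same bus and,
    within the same bus, the same enclosure, then the closest size."""
    # Step 1 + 3 filter: right type, and same size or larger.
    pool = [d for d in candidates
            if d['type'] == failed['type']
            and d['size_gb'] >= failed['size_gb']]
    if not pool:
        return None
    # Rank: same bus first, then same enclosure on that bus,
    # then an exact size match, then the smallest adequate size.
    pool.sort(key=lambda d: (
        d['bus'] != failed['bus'],
        d['bus'] != failed['bus'] or d['enclosure'] != failed['enclosure'],
        d['size_gb'] != failed['size_gb'],
        d['size_gb'],
    ))
    return pool[0]
```

A candidate of the right type in the same enclosure wins over one on another bus; if nothing of the right type and adequate size exists, no spare is invoked.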

See previous post for more info: ‘EMC VNX – MCx Hot Sparing’

Drive Layout EXAMPLE 1:

VNX 5600 (2 BUS)

FAST Cache:
1 x Spare, 8 x FAST Cache drives available.
8 / 2 BUSES = 4 FAST Cache drives per BUS
1 x 2.5″ SPARE placed on 0_0_24

FAST VP:
1 x Spare, 20 x FAST VP drives available.
20 / 2 BUSES = 10 per BUS
10 x 3.5″ placed on BUS 0 Encl 1
10 x 2.5″ placed on BUS 1 Encl 0
1 x 2.5″ SPARE placed on 1_0_24

Drive Layout EXAMPLE 2:

VNX 5800 (6 BUS)


Drive Layout EXAMPLE 3:

VNX 8000 (16 BUS)



Useful Reference:
EMC VNX2 Unified Best Practices for Performance

EMC VNXe 3200 – MCx Drive Mobility

Related post: ‘EMC VNX – MCx Hot Sparing Considerations’

MCx code has brought many new features, including the revolutionary ‘Multicore RAID’ which includes the ‘Drive Mobility’ feature. Drive Mobility (also referred to as Portable Drives) allows for the physical relocation of drives within the same VNXe: a drive can be moved to another slot within the same DAE, or to another DAE either on the same BUS or on another BUS. This option allows us to modify the storage layout of a VNXe, which may be very useful if, for example, additional DAEs are purchased, or a re-balance of certain drives across DAEs or to a different BUS is required for performance reasons. Another reason, as outlined in the related post highlighted above, is when a drive failure occurs and you wish to move the spared drive back to the failed drive's slot location once the rebuild has completed.

The drive relocation can be executed online without any impact, provided the drive is relocated within the 5-minute window allowed before the VNXe flags the missing drive as failed and invokes a spare drive. No other drive within the same RAID N+1 configuration may be moved at the same time; exceeding the RAID N+1 tolerance, for example moving more than 1 drive at a time in a RAID-5 configuration, may result in a Data Unavailable (DU) situation and/or data corruption. Once the drive is physically removed from its slot a 5-minute timer kicks in; if the drive is not successfully relocated to another slot within the system by the end of this window then a spare drive is invoked to permanently replace the pulled drive. During the physical relocation of a single drive within a pool, the health status of the pool will display ‘degraded’ until the drive has been successfully relocated to another slot, at which point the pool returns to a healthy state with no permanent sparing or data loss, because only a single drive in a RAID N+1 configuration was moved within the 5-minute relocation window. At this stage a second drive from within the same pool can be moved, continuing this process until you achieve the desired drive layout.

You may wonder how this Drive Mobility is possible. With MCx, when a RAID group is created the drives within it are recorded by their serial numbers rather than by their physical Bus_Enclosure_Disk location, which was the FLARE approach. This new MCx approach of using the drive's serial number is known as VD (Virtual Drive) and allows a drive to be moved to any slot within the VNXe, as the drive is not mapped to a specific physical location but is instead tracked by its serial number.
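The VD idea above boils down to keying RAID membership by serial number while the slot map is tracked separately. A toy model (hypothetical classes for illustration, not MCx internals) shows why a physical move leaves the RAID group untouched:

```python
class RaidGroup:
    """RAID membership recorded by drive serial number (the MCx VD idea),
    not by physical Bus_Enclosure_Disk slot (the old FLARE approach)."""
    def __init__(self, serials):
        self.members = set(serials)

class Array:
    """Tracks, separately from RAID membership, which slot each serial
    currently occupies."""
    def __init__(self):
        self.slot_of = {}            # serial -> (bus, enclosure, disk)

    def insert(self, serial, slot):
        self.slot_of[serial] = slot

    def relocate(self, serial, new_slot):
        # Only the physical location changes; because the RAID group
        # references serial numbers, its membership is unaffected.
        self.slot_of[serial] = new_slot
```

Relocating a member drive updates only the slot map, which is exactly why a drive can be pulled and re-seated anywhere in the system within the 5-minute window without disturbing the RAID group.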

Note: System drives DPE 0_0_0 – 0_0_3 are excluded from any Drive Mobility.

Example Drive Relocation
For this example the drive located in SLOT-5 of the DPE will be physically removed and placed in SLOT-4 on the same DPE.


Examine the health status of the Drive in SLOT-5 prior to the relocation:
uemcli -d VNXe_IP -u Local/admin -p Password /env/disk -id dpe_disk_5 -detail


After the relocation of the drive (to SLOT-4 in this example) UNISPHERE will temporarily display a warning:

At this stage it is good practice to perform some checks on the Drive(SLOT-4), Pool and System:
uemcli -d VNXe_IP -u Local/admin -p Password /stor/config/pool show -detail
uemcli -d VNXe_IP -u Local/admin -p Password /sys/general healthcheck
uemcli -d VNXe_IP -u Local/admin -p Password /env/disk -id dpe_disk_4 -detail


Returning to UNISPHERE after performing the checks, you will notice all warnings have disappeared:

At this stage it is safe to proceed with the next move!