EMC VNX – POOL PERFORMANCE PLANNING

In this post I will give a guideline on how to calculate the required drive count for a VNX pool based on throughput performance (IOPS). This is only a Rough-Order-of-Magnitude (ROM) estimate which will give an approximation of the required drive count. Please consult EMC/VCE/Partner technical sales, as they will have access to the latest sizing tools to calculate the exact drive count for your workload.

Firstly we need to consider the impact the different RAID types will have on the drive calculation (choosing your RAID type is a discussion for another day). The main difference between the RAID types is random write performance; read IOPS performance is much the same across all of them. RAID 1/0 requires two disk operations for each host write, RAID 5 requires four operations per host write and RAID 6 requires six operations per host write. To summarise, here is how the back-end RAID penalties differ:

Mirrored RAID 1/0: 1 Host Write = 2 Writes
Parity RAID 5: 1 Host Write = 2 Reads + 2 Writes
Parity RAID 6: 1 Host Write = 3 Reads + 3 Writes

So, taking into account the impact that host writes have on the back-end for each RAID type, here are the calculations that can be used to determine the required drive IOPS (back-end):

Mirrored RAID 1/0: Drive IOPS = Read IOPS + 2*Write IOPS
Parity RAID 5: Drive IOPS = Read IOPS + 4*Write IOPS
Parity RAID 6: Drive IOPS = Read IOPS + 6*Write IOPS

Example RAID_5 IOPS Calculation
HOST WORKLOAD: 20,000 Random IOPS, 80 % Reads, 20% Writes
Using RAID 5 as an example, with an I/O ratio of 80/20 and a total host IOPS requirement of 20,000, we can calculate the drive IOPS:

Drive IOPS = (0.8 * 20,000 + 4 * (0.2 * 20,000))
Drive IOPS = 32,000
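The penalty formulas above can be sketched as a small helper (a rough illustration, not an EMC tool; the penalty values come straight from the formulas above):

```python
# Back-end operations per host write, per RAID type (from the formulas above):
# RAID 1/0 -> 2, RAID 5 -> 4 (2 reads + 2 writes), RAID 6 -> 6 (3 reads + 3 writes)
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def drive_iops(host_iops, read_fraction, raid_type):
    """Return the back-end drive IOPS for a given host workload.

    read_fraction is the fraction of host I/O that is reads (0.8 for an 80/20 mix).
    """
    reads = host_iops * read_fraction
    writes = host_iops - reads
    return reads + WRITE_PENALTY[raid_type] * writes

# The RAID 5 example from the post: 20,000 host IOPS at 80/20 read/write
print(drive_iops(20_000, 0.8, "raid5"))  # 32000.0
```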

Determine your disk count based on IOPS:

Using the provided table we can do a quick calculation to estimate the required drive count for the pool. These examples and the estimated IOPS figures are based on small-block random I/O (16KB or less), similar to the I/O of database systems. The drive IOPS figures listed here are a starting point for 16KB or less; for any I/O larger than 16KB there will be a reduction in the figures listed below.

[Table: estimated small-block random IOPS per drive type; the examples below assume 180 IOPS per SAS 15K drive and 3,500 IOPS per Flash drive]

Take it that we want to use all “SAS 15K” drives in a homogeneous pool as a first example:
Given the requirement for 32,000 IOPS we can do a quick calculation to get the required drive count to service this workload:

32,000 IOPS / 180 IOPS PER DISK = 177.8 Disks

As this is a RAID 5 based pool, the private RAID groups will use a 4+1 layout; rounding the drive count up to 180 gives us 36 x RAID 5 4+1 private RAID groups.
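The rounding step can be sketched as follows (a hypothetical helper, assuming RAID 5 4+1 private RAID groups of 5 drives, as in the example):

```python
import math

def pool_drive_count(drive_iops, iops_per_drive, group_size=5):
    """Round the raw drive count up to a whole number of private RAID groups."""
    raw = drive_iops / iops_per_drive       # e.g. 32,000 / 180 = 177.8 drives
    groups = math.ceil(raw / group_size)    # whole 4+1 groups needed
    return groups * group_size, groups

drives, groups = pool_drive_count(32_000, 180)
print(drives, groups)  # 180 36
```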

Now, as a second example, assume that a two-tiered FAST VP heterogeneous pool using Flash and SAS drives is the preferred option. Adding 5 Flash drives to the FAST VP pool configuration vastly reduces the required count of SAS drives, as follows:

5 x 3,500 IOPS (Flash drives) = 17,500 IOPS serviced by the 5 Flash drives

This leaves 18,500 IOPS to be serviced by “SAS 15K” drives, giving a total drive count requirement of:

18,500 / 180 = 102.8
Again, using a RAID 5 4+1 configuration and rounding up to whole RAID groups, we get a total SAS drive count of 105 (21 x 4+1 private RAID groups).

Thus the two-tier pool requires 110 drives in total (105 SAS + 5 Flash), a reduction of 70 drives (180 - 110) compared to the previous example of a homogeneous pool of all SAS 15K drives. The power of Flash!


NOTE: The first four drives in a VNX are the system (Vault) drives and cannot be used in a pool.

EMC VMAX – FAST VP Configuration Via SYMCLI

In this post I will detail how to configure EMC FAST VP (Virtual Provisioning) on a VMAX storage array via SYMCLI. You can also use the “EMC Unisphere for VMAX” user interface, which I will cover in a later post.

Please see the excellent post by Sean Cummins on ‘VMAX FASTVP Best Practice Essentials’; it will give you great insight into how the pools should be designed and configured in order to implement an optimized FAST VP solution.

Note: FAST VP only performs promotion/demotion activity between tiers defined on differing drive technologies. RAID protection and drive rotational speed are not considered. As a result, a FAST VP policy should not be created where two or more tiers use the same drive type. For example, a FAST VP policy should not contain two or more FC tiers.

Before you start configuring FAST VP all basic storage provisioning should be completed. For example you will need to have created all required Storage Pools, LUNs, and Storage Groups.

Steps Outlined:

1. Creating the Storage Tiers
2. Creating FAST VP Policy
3. Associate Storage Groups with VP Policy
4. Enable FAST VP
5. View FAST VP Details
6. Configuring Time Window Settings

Note: Each ‘Tier Name’ and ‘Policy Name’ may consist of up to 32 alphanumeric characters, underscores (_) and hyphens (-), up to a maximum of 256 tiers and 256 FAST policies.
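The naming constraint can be expressed as a quick check (a hypothetical Python validator for illustration, not part of SYMCLI):

```python
import re

# Tier/policy names: 1 to 32 characters from letters, digits, underscore, hyphen
NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def valid_fast_name(name):
    """Return True if the name satisfies the FAST VP naming rule above."""
    return bool(NAME_RE.match(name))

print(valid_fast_name("Tier_SATA_raid6"))  # True
print(valid_fast_name("a" * 33))           # False (too long)
```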

1. Creating the Storage Tiers

The first step is to create the tiers using the ‘symtier’ command. For this example we will create three tiers: SATA, FC and EFD:

• SATA TIER:
symtier -sid xxx create -name Tier_SATA_raid6 -tgt_raid6 -tgt_prot 6+2 -technology SATA -vp -pool PoolName-SATA-R66
• FC TIER:
symtier -sid xxx create -name Tier_FC_raid5 -tgt_raid5 -tgt_prot 3+1 -technology FC -vp -pool PoolName-FC-R53
• EFD TIER:
symtier -sid xxx create -name Tier_EFD_raid5 -tgt_raid5 -tgt_prot 3+1 -technology EFD -vp -pool PoolName-EFD-R53

List the created tiers:
symtier -sid xxx list
List the current allocation of storage tiers:
symtier -sid xxx list -vp

Note: Syntax per RAID Configuration
RAID 0 = -tgt_unprotected
RAID 1 = -tgt_raid1
RAID 5 = -tgt_raid5 -tgt_prot 3+1 | 7+1
RAID 6 = -tgt_raid6 -tgt_prot 6+2 | 14+2

2. Creating FAST VP Policy
Next create the FAST VP policy. As part of creating the Policy we will add the tiers and the required Storage Groups to the FAST VP policy.

Create the FAST VP Policy:
symfast -sid xxx -fp create -name VP_POLICY

The next task is to add the tiers created above to the policy and set the maximum percentage of each tier allowed. For this example, 100% of the Storage Group’s capacity can reside on SATA, 50% on FC and 10% on EFD.

symfast -sid xxx -fp add -tier_name Tier_SATA_raid6 -max_sg_percent 100 -fp_name VP_POLICY
symfast -sid xxx -fp add -tier_name Tier_FC_raid5 -max_sg_percent 50 -fp_name VP_POLICY
symfast -sid xxx -fp add -tier_name Tier_EFD_raid5 -max_sg_percent 10 -fp_name VP_POLICY

List FAST VP Policy:
symfast -sid xxx list -fp -vp
For more detail use the -v switch: symfast -sid xxx list -fp -vp -v

3. Associate Storage Groups with VP Policy

Firstly, list the Storage Groups in order to get the SG names:
symaccess -sid xxx list -type storage

Secondly, associate the SGs with the FAST policy:
symfast -sid xxx associate -sg SG_Name1 -fp_name VP_POLICY -priority 2
symfast -sid xxx associate -sg SG_Name2 -fp_name VP_POLICY -priority 2

To list associations:
symfast -sid xxx list -association

To show a particular SG association details:
symfast -sid xxx show -association -sg SG_Name

To Disassociate a storage group from a FAST policy:
symfast -sid xxx disassociate -sg SG_Name1

4. Enable FAST VP
symfast -sid xxx enable -vp

Set Data Movement to AUTO
symfast -sid xxx set -control_parms -vp_data_move_mode AUTO

5. View FAST VP Details

Show Policy details:
symfast -sid xxx show -fp_name VP_POLICY

List FAST controller state:
symfast -sid xxx list -state

List the FAST controller settings:
symfast -sid xxx list -control_parms

List a Storage Group’s tier location percentage breakdown:
symcfg -sid xxx list -tdev -sg SG_Name -tier

6. Configuring Time Window Settings

The symtw command defines time windows to control FAST VP.

CONFIGURE MOVE TIME WINDOW SETTINGS
The move time window specifies the time at which FAST VP should move data. Create a move time window at the time of day when you expect to have the least amount of traffic on the storage array.

Example move time window setting: the move window is open daily from 22:00 to 03:00, configured as two windows (22:00–24:00 and 00:00–03:00).

Firstly, issue commands to remove any time windows that may already exist (-inclusive refers to open time windows, -exclusive to closed):
symtw -sid xxx -inclusive -type move_vp rmall -noprompt
symtw -sid xxx -exclusive -type move_vp rmall -noprompt

Adding the new time windows for this schedule:
symtw -sid xxx -inclusive -type move_vp add -days Mon,Tue,Wed,Thu,Fri,Sat,Sun -start_time 22:00 -end_time 24:00 -noprompt
symtw -sid xxx -inclusive -type move_vp add -days Mon,Tue,Wed,Thu,Fri,Sat,Sun -start_time 00:00 -end_time 03:00 -noprompt

List the time window settings configuration:
symtw -sid xxx -inclusive -type move_vp list
symtw -sid xxx -exclusive -type move_vp list


CONFIGURE PERFORMANCE TIME WINDOW SETTINGS
Create a performance time window setting that excludes periods of negligible or low system utilization (such as nights and weekends) from the period during which statistics are collected on the data access pattern.
Example performance time window setting: collect statistics daily between 07:00 and 19:00.

First remove any existing performance time windows:
symtw -sid xxx -inclusive -type perf rmall -noprompt
symtw -sid xxx -exclusive -type perf rmall -noprompt

Add the new performance time window:
symtw -sid xxx -inclusive -type perf add -days Mon,Tue,Wed,Thu,Fri,Sat,Sun -start_time 07:00 -end_time 19:00 -noprompt
List the performance time window information:
symtw -sid xxx -inclusive -type perf list
symtw -sid xxx -exclusive -type perf list
