EMC VMAX – Thin Pool Creation and Considerations
Considerations in planning to create Thin Pools:
• The underlying structure of a Thin Pool is made up of what are called ‘Data Devices’ (TDATs). These devices provide the actual physical storage used by Thin Devices (TDEVs). A sketch of how TDATs are typically created follows this list.
• A Thin Pool can only be configured with one disk type.
• Thin Pools may only consist of Data Devices with the same emulation and protection type. Thus TDATs with a protection type of RAID-1, RAID-5, or RAID-6 will be used to create a Pool, and TDEVs inherit the protection type of the TDATs used in the pool. The different emulation types are FBA, CKD, AS400 and Celerra. The most commonly used emulation type is FBA; this is what open systems hosts use (Unix/Linux, vSphere, Windows).
• Data devices that make up a pool may be of different sizes, but it is recommended that all data devices in a pool be the same size to ensure even data distribution.
• Data devices are not visible to hosts and cannot be used until they are assigned to a Pool.
• TDATs should be spread evenly across DAs and drives. The wide striping provided by Virtual Provisioning will spread thin devices evenly across the data devices; ensuring that the TDATs are spread evenly across the back end will result in the best performance.
• Pool balance is very important: each pool should be spread evenly over its disks. That is to say, every disk in the pool should have the same number of TDATs on it; if one disk has twice as many TDATs as another in the same pool, that disk will serve twice as many IOPS. Also, every pool should have 8 splits/hypers active per disk, or the minimum number needed to use the whole disk.
• Each disk should ideally be used for one pool only, and there should be just one thin pool per drive technology to keep things simple. Note: There may be circumstances where a thin pool with multiple applications consisting of mixed-type workloads utilizing the same underlying spindles does not favor a particular application and may result in inconsistent performance levels for that application; in this type of instance the application may require a dedicated thin pool.
• 512 Pools is the maximum that can be created in a VMAX.
• There is no limit to the number of thin devices that can be bound to a thin pool or data devices that can be added to a thin pool.
• The limit on the number of thin and data devices that can be configured is 64,000.
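As a hedged sketch of the TDAT creation step referenced above (the count, size, RAID layout and disk group below are illustrative assumptions, not values from this article), data devices are created with symconfigure using the datadev attribute:

symconfigure -sid xxx -cmd "create dev count=16, size=206130 MB, emulation=FBA, config=RAID-5, data_member_count=3, attribute=datadev, disk_group=2;" COMMIT

Here config=RAID-5 with data_member_count=3 requests a RAID-5 (3+1) layout, and attribute=datadev marks the devices as TDATs rather than host-visible devices; verify the exact syntax against your Solutions Enabler version.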
Given the following Thin Pool requirements, I will detail how to create the Thin Pools and assign the TDAT ranges.
First, list the Disk Groups:
symdisk list -dskgrp_summary
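To drill into an individual group and list its member drives and capacities (disk group 2 below is just an assumed example):

symdisk list -disk_group 2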
Ensure the TDATs are available and have not been used in another Pool by issuing the -nonpooled option. The output will also display the RAID configuration of the data devices:
symdev list -datadev -disk_group 2 -nonpooled
symdev list -datadev -disk_group 3 -nonpooled
symdev list -datadev -disk_group 4 -nonpooled
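If anything looks unexpected, a single data device can also be examined in detail (device 00F0 below is an assumed example):

symdev show 00F0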
Create the Thin Pools
The Thin Pools can be created first without adding data devices; TDATs are added at a later time. The Pool name can contain a sequence of 1 to 12 alphanumeric, ‘-‘ (hyphen) or ‘_’ (underscore) characters:
symconfigure -sid xxx -cmd "create pool Prod-HP-R53 type=thin;" COMMIT
symconfigure -sid xxx -cmd "create pool Prod-GP-R14 type=thin;" COMMIT
symconfigure -sid xxx -cmd "create pool Prod-AR-R66 type=thin;" COMMIT
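Note that any symconfigure change can be checked before it is run by swapping COMMIT for the preview or prepare actions, which validate the syntax (and, for prepare, the appropriateness of the change against the array) without applying anything, for example:

symconfigure -sid xxx -cmd "create pool Prod-HP-R53 type=thin;" prepare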
Populate the Thin Pools with the defined TDATs
After creating the Pools, TDATs are added to the Pools and enabled:
symconfigure -sid xxx -cmd "add dev 00F0:017F to pool Prod-HP-R53 type=thin, member_state=enable;" COMMIT
symconfigure -sid xxx -cmd "add dev 0180:05FF to pool Prod-GP-R14 type=thin, member_state=enable;" COMMIT
symconfigure -sid xxx -cmd "add dev 0600:0943 to pool Prod-AR-R66 type=thin, member_state=enable;" COMMIT
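On Enginuity levels that support automated pool rebalancing, allocated extents can also be restriped across newly added TDATs afterwards. The command below is written from memory and the exact phrasing is an assumption to verify for your Solutions Enabler/Enginuity level:

symconfigure -sid xxx -cmd "start balancing on pool Prod-HP-R53;" COMMIT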
Use the list command with the -thin option to display the Thin Pools just created:
symcfg list -pools -thin -gb
View details of the newly created Pools
In this example, the newly created Pools are displayed along with details about each pool and the data devices that have been added to it. The -detail option also displays any bound thin devices:
symcfg show -pool "Prod-HP-R53" -thin -detail -GB
symcfg show -pool "Prod-GP-R14" -thin -detail -GB
symcfg show -pool "Prod-AR-R66" -thin -detail -GB
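With the pools in place, the next logical step is creating and binding TDEVs. As a sketch (the count, size and device range below are illustrative assumptions), thin devices can be bound at creation time or bound separately if they already exist:

symconfigure -sid xxx -cmd "create dev count=10, size=100 GB, emulation=FBA, config=TDEV, binding to pool=Prod-HP-R53;" COMMIT
symconfigure -sid xxx -cmd "bind TDEV 0A00:0A09 to pool Prod-HP-R53;" COMMIT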
View of Thin Pools from Unisphere: [screenshot]
Are there any recommendations on overall pool size when considering protection levels in a VMAX?
i.e. max pool size with 14R2 SATA vs. max pool size with 2M FC?
Great points indeed. Only one question: is there any best practice of keeping the pool count low for better performance?
Thanks Suddhasil. To answer your question: there will often be a need to separate applications across multiple Thin Pools, but generally speaking the best use of resources will be achieved by having fewer Pools.
I thought that in order to expand or populate a thin pool, one needs to add data devices to the Disk Group, and then the thin pools are populated or expanded from the Disk Group; unless there is already enough space in the disk group, in which case one can expand the thin pools from the Disk Group directly. Right? If one is logged into Unisphere for VMAX and you right-click a thin pool and choose Expand, you get a dialog box with a pie chart of the disk group associated with the thin pool. But can you write the symcli syntax that is used to populate the thin pool from the disk group, and the symcli syntax that is used to populate the disk group with data devices (TDATs)?
Nice article Dave. Could you please also show how we can migrate the following:
Migrate devices (Thick and Thin) from DG_XX to DG_YY
Migrate TDEVs in Pool XX to Pool YY
Appreciate your knowledge. Regards
Hi Rohan,
I wonder the same thing you have asked here. Could you please confirm if you got an answer to your question? It would be great if you could help me with the same.
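For anyone landing here with the same question: moves of this kind (thick-to-thin, disk group to disk group, pool to pool) are normally handled by Virtual LUN (VLUN) migration with the symmigrate command, driven by a device file and a validate/establish/query/terminate session lifecycle. The option names below are written from memory as assumptions to verify against the Solutions Enabler documentation for your code level:

symmigrate -sid xxx -name mig1 -f devs.txt -tgt_pool -pool Pool_YY validate
symmigrate -sid xxx -name mig1 -f devs.txt -tgt_pool -pool Pool_YY establish
symmigrate -sid xxx -name mig1 -f devs.txt query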
You didn’t comment on the balance of the pools. Each pool should be spread over disks evenly. That is to say every disk in the pool should have the same number of TDATs on it. If one disk has twice as many TDATs as another in the same pool, that disk will serve twice as many IOPs. Also every pool should have 8 splits/hypers active per disk, or the minimum number to use the whole disk.
I think each disk should be used for one pool only, and there should just be one thin pool per technology to keep things simple.
Excellent contribution John, points added above. Looking forward to your EMC World sessions.
Hi John\Dave,
I have a question: how do we arrive at the number 8? Is it the minimum number of splits considering a 2TB disk?
Gary.
The recommended 8 split count is related to overall system performance. Essentially there is less overhead on the system with fewer, larger hypers. Also, if using Virtual Provisioning, then you can have large hypers (splits) contained in TDATs assigned to the Thin Pool, and the carved TDEV sizes that get presented to the host can be small.
No, since the largest device (usable capacity) is around 240GB, you will need more than 8 for a 2TB drive.
Thanks John. Yes, with the 2TB drive, unless you are using RAID 1 you will have greater than 8 hypers per drive.
Even with RAID1, you will need 9 I think to fully utilize a 2TB drive.
And I’d never recommend an odd number of splits/hypers with RAID1.
Excellent recommendation. I think if the drive has reserved Vault space then 8 is the min.
Thanks John & Dave.
Awesome Dave. Big thumbs up. The technology of the pool represents the type of drives, correct? Tech SATA is SATA and EFD is SSD?
So if you have 3 TB drives in a RAID6 (6+2) config and want to set the number of hypers per disk as low as possible, you’ll end up with 240 GB (max TDEV size) spread over 6 spindles = 40 GB per spindle. So a 3 TB disk then needs 3000 / 40 = 75 hypers per disk to be fully utilized, right?
Hi Rob
Yes that would be a good estimate for a 3TB Non-Vault R6 6+2 Config – Taking into consideration formatting etc. then it would probably work out at ~70 Hypers Per Drive!
See also: https://davidring.ie/2014/09/17/emc-vmax-disk-group-pool-expansion/
Thank you!
Hi Dave and John,
I did not understand the term “number of TDATs per disk”, as a TDAT consists of a number of hypers from a set of disks in a disk group as per the chosen RAID protection. Here is another example from Dave’s post. Could you please explain what the number of TDATs per disk is in this case?
Scenario:
A Disk Group that consisted of 64 X 600GB disk drives has been increased to 128 X 600GB Drives. EMC will be required to create and apply an upgrade BIN in order to expand the existing DG to 128 Disks, once this has been completed we can then proceed with the TDAT configuration.
After gathering the details with respect to the existing Hyper and TDAT sizes, then we can do a quick calculation in order to determine the count and size of the TDATs required (In this case the configuration is a simple replica of the existing configuration).
Calculating the Hyper Size
If we take the scenario above where a 64 Drive DG has been doubled to 128 Drives, with each of the first 64 drives in the DG having 8 Hypers per disk in a RAID 5 (3+1) configuration.
Listing the existing TDATs within Disk Group 1:
symdev list -disk_group 1 -datadev
From the output of this command we can see that the TDAT size is 206130 MB. Since each RAID-5 (3+1) TDAT has 3 data members, the hyper size works out as:
206130 / 3 = 68710 MB hyper size
[Image: Hypers]
Calculating the number of TDATs Required
[Image: TDAT layout across a RAID-5 (3+1) drive set]
From the image above you can gather that for each set of 4 Drives (Raid5 3+1) we require a total of 8 x TDATs.
8 hypers per disk * 64 disks = 512 hypers
512 hypers / 4 members per TDAT (R5 3+1) = 128 TDATs
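Tying that count back to syntax, a hedged sketch of the corresponding symconfigure change for the 64 new drives (same create dev form as sketched earlier in the article; values are taken from the worked example and should be verified before use):

symconfigure -sid xxx -cmd "create dev count=128, size=206130 MB, emulation=FBA, config=RAID-5, data_member_count=3, attribute=datadev, disk_group=1;" COMMIT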
Hi Shidhu
A disk will consist of hypers; where each hyper will be a member of a TDAT. As per the example each disk contains 1 out of 4 hypers that make up that whole TDAT, thus a TDAT spans 4 drives.
Hi again. This is exactly what I understand. However, it’s mentioned in your article that “every disk in the pool should have the same number of TDATs on it”. My question is: how does that show up in the example?
Each drive has logical TDAT logical volumes created on them. Each disk should have the same number. You should be able to see the number of TDATs with Unisphere or with the symdisk CLI command.
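As a minimal sketch (disk group 1 assumed), the per-spindle hyper counts are visible in the default symdisk list output, which makes an uneven TDAT spread easy to spot:

symdisk list -disk_group 1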
Very useful, Thanks a lot David
Can we check where our data lies on the disks, and how?
Thanks so much for these amazingly clear and helpful commands. I’m always amazed when people dedicate so much free time to improving others’ knowledge. You are a star!
Where can we run the symcli commands to unmask the LUN from the host? And how do we get a capacity report using Unisphere and symcli?
Please reply.