This post is the result of having to migrate Cisco UCS blade boot LUNs between RAID Groups. Both the source and destination were Classic LUNs – RAID Group LUNs (FLARE LUN/FLU). The migration occurred across two DAEs, with each DAE populated with one RAID Group. This is the ideal scenario for migration performance: the load is spread across different RAID Groups and across back-end ports. Note that higher migration rates place a correspondingly higher load on the Storage Processors. We used the ASAP migration rate, which was acceptable because the system was not in production and the migration would therefore not impact the performance of other workloads. Also note that a maximum of two ASAP migrations can run simultaneously per Storage Processor.

For migration of a LUN within a VNX array the following rates can be used in the calculation:

Priority   Rate
Low        1.4 MB/s
Medium     13 MB/s
High       44 MB/s
ASAP       85 MB/s

Calculation for the Migration of a LUN is as follows:
Time = (Source LUN Capacity * (1/Migration Rate)) + ((Destination LUN Capacity – Source LUN Capacity) * (1/Initialization Rate))

Calculation for the migration of a 20 GB boot LUN between RAID Groups at ASAP speed (85 MB/s):

Source LUN = 20000 MB
Destination LUN = 20000 MB
Migration Rate = 85 MB/s = 306000 MB/hour
Initialization Rate = 306000 MB/hour

(20000 * (1 hour / 306000)) + ((20000 – 20000) * (1 hour / 306000))
= 0.0654 hours
≈ 3.9 minutes per boot LUN
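The calculation above can be sketched in a few lines of Python – a rough estimator only, using the published per-priority rates from the table and treating the initialization rate as equal to the migration rate, exactly as the worked example does:

```python
# Published VNX LUN migration rates, in MB/s
RATES = {"low": 1.4, "medium": 13, "high": 44, "asap": 85}

def migration_time_hours(source_mb, dest_mb, priority):
    """Time = migrate the source data + initialize any extra destination capacity."""
    rate_mb_per_hour = RATES[priority] * 3600              # MB/s -> MB/hour
    migrate = source_mb / rate_mb_per_hour                 # copy the source LUN
    initialize = (dest_mb - source_mb) / rate_mb_per_hour  # zero out extra space
    return migrate + initialize

# 20000 MB boot LUN at ASAP, same-size destination:
hours = migration_time_hours(20000, 20000, "asap")
print(f"{hours:.4f} hours = {hours * 60:.1f} minutes")  # 0.0654 hours = 3.9 minutes
```

Real-world throughput will vary with SP utilization and backend layout, so treat the output as a planning figure, not a guarantee.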

The following command can be used if CLI is your preferred option:
naviseccli -h x.x.x.x migrate -start -source 15 -dest 16 -rate asap
where the source LUN ID is 15, the destination LUN ID is 16, and the priority is ASAP.


Check Percentage Complete:
naviseccli -h x.x.x.x migrate -list -source 15 -percentcomplete

Check Time Remaining:
naviseccli -h x.x.x.x migrate -list -source 15 -timeremaining
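As a quick sanity check on the `-timeremaining` output, the same rate table can turn a `-percentcomplete` figure into an estimate. This is a sketch only, assuming the migration continues at the nominal published rate for the chosen priority:

```python
# Rough cross-check of -timeremaining: given the LUN size in MB, the
# -percentcomplete figure, and the priority, estimate minutes remaining.
# (Nominal published rates; actual throughput varies with SP load.)
RATES_MB_S = {"low": 1.4, "medium": 13, "high": 44, "asap": 85}

def minutes_remaining(lun_mb, percent_complete, priority):
    remaining_mb = lun_mb * (1 - percent_complete / 100)
    return remaining_mb / RATES_MB_S[priority] / 60  # MB / (MB/s) -> s -> min

# A 20000 MB LUN that is 50% complete at ASAP has roughly 2 minutes left:
print(round(minutes_remaining(20000, 50, "asap"), 1))  # 2.0
```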

Note: The LUN Migration feature can also be used to migrate to a larger LUN, thereby increasing the capacity of the source LUN. The whole process completes online and non-disruptively (transparent to VMware ESXi hosts) and requires no post-migration configuration. Once the migration completes, the original source LUN is deleted from the array and the new LUN assumes the source LUN's WWN and LUN ID.

11 thoughts on “EMC VNX – LUN Migration”

    • Very good point, Victor. Yes, the rates are applicable to all VNX models as far as I am aware. The important thing to be aware of is that on smaller systems the higher rates will have more of an impact on utilization. Best to start at a lower rate and monitor via Analyzer. Thanks

  1. It’s a feature that I have always liked and advocated, but not all customers are aware of it. Plus it’s available at no extra cost. Very useful functionality!

  2. I wish I had stumbled onto this article last month – I have just migrated 10 TB (in about 150 LUNs) from traditional RAID Groups to tiered storage pools – it would have been nice to add these calculations to my migration spreadsheet.

  3. This is a great post. I always do migrations, mainly for pool re-balancing within the same VNX. After reading this article, out of curiosity I calculated and verified it. It was very precise: a 100 GB LUN migrated in around 40 mins at ‘High’ priority rate!! Great to learn 🙂
    Thanks a lot Dave. I wish you more power, and thanks for teaching us this stuff in a simple way.

  4. Hi David,

    Nice article. Appreciate the efforts.

    In my set up we have many LUNs having dead space (thin provisioned) and we are planning to reclaim the dead space through LUN migration.

    I understand that, if we encounter any performance issues during LUN migration, we can cancel the LUN migration.

    Once the LUN migration is cancelled and restarted, will we get the free blocks up to the point where it was running?
