Requirements:
- PowerPath, at least version 5.1 SP1 (I would suggest at least 5.3 if running Windows 2003/2008)
- PowerPath Migration Enabler key (host-copy)
- Supported storage arrays for both source and target
Details:
To explain this migration technique a little more: PPME performs a block-level migration. It copies the blocks on the LUN, using the host to process the migration. This can cause performance degradation depending on how active the host is and the throttle you specify during the synchronization. Since the migration is at the block level, whatever filesystem and alignment offset exist on the source LUN are exactly what get migrated. You can migrate to a larger LUN but NOT to a smaller LUN.
If you need to make a configuration change to the filesystem, such as correcting an alignment offset issue, PPME is NOT the tool. EMC Open Migrator (OM), however, can do that. OM can migrate LUNs online, but the setup and teardown require reboots (no fewer than three). I digress; let's get on with the PPME migration.
Process:
The high-level process to complete a migration with PPME is this:
- powermig setup -src <pseudoname> -tgt <pseudoname> -techtype hostcopy
- powermig sync -handle <x>
- powermig query -handle <x>
- powermig throttle -handle <x> -throttlevalue <y>
- powermig selecttarget -handle <x>
- powermig commit -handle <x>
- powermig cleanup -handle <x>
Note: Adding -noprompt to many of these commands, specifically those that take action, prevents the yes/no prompt.
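For illustration only, here is what a complete session might look like end to end. The pseudo names harddisk1 (source) and harddisk5 (target) and the handle value of 1 are made-up examples; use the device names on your host and the handle your setup command actually returns:
powermig setup -src harddisk1 -tgt harddisk5 -techtype hostcopy
powermig sync -handle 1
powermig query -handle 1
powermig throttle -handle 1 -throttlevalue 5
powermig selecttarget -handle 1
powermig commit -handle 1
powermig cleanup -handle 1
The throttle command is optional (I only use it when the default of 2 is too aggressive), and the query is typically run repeatedly until the session reports sourceselected.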
The detailed process to complete a migration with PPME is this:
Install PowerPath and/or PPME. Be sure to use a custom install and choose Migration Enabler as an option. I install it by default because it doesn't cost anything and does not interfere with any other functionality. Supposedly you can install PPME after PowerPath without a reboot, but the few times I did that, I had to reboot. This would be the ONLY reboot during the entire process.
Prepare/present the target LUN. Once the LUN is accessible to the host, this is all that is necessary. There is no need to prepare the filesystem or alignment offset. In fact, whatever is done to the destination LUN will be overwritten by the storage migration.
License PPME using the PowerPath Licensing Tool.
Set up the migration session using the command:
powermig setup -src <source_pseudoname> -tgt <target_pseudoname> -techtype hostcopy
Where source_pseudoname is whatever PowerPath lists as the host's LUN name, for instance harddisk1. The same goes for target_pseudoname. This creates the migration relationship and assigns a session ID (handle) that is used for all subsequent tasks. Keep note of the session ID that PPME provides.
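If you're not sure of the pseudo names, PowerPath will list them for you; for example (output layout varies by PowerPath version and platform):
powermt display dev=all
Each device in the output shows its pseudo name (harddisk1, harddisk2, and so on) along with the array and logical device it maps to, which is also a handy sanity check that you have the right source and target before creating the session.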
Next, start the synchronization. If there is going to be a negative performance impact from the migration, it will be during the synchronization.
powermig sync -handle <x>
Where <x> is the session ID. This starts the synchronization of the migration at a throttle level of 2 on a scale of 0-9, where 0 is the fastest and 9 is the slowest. I found that 2 was acceptable for most of the migrations I completed. If I thought that was going to be too aggressive, I set the throttle to a value of 5; the migration took longer to complete, but the IO contention was far less.
At this point, data is copied from source to target while read requests are serviced by the source and writes are mirrored to both source and target.
Since the synchronization starts at a throttle of 2, there may be times when you need to slow it down or speed it up for whatever reason. This is NOT required, but if you need to, the command is:
powermig throttle -handle <x> -throttlevalue <y>
Where <y> is a value from 0-9. Again, 0 is the fastest and 9 is the slowest.
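Related to throttling: my recollection is that PPME can also suspend and resume the copy entirely (confirm with powermig help on your version that these are available):
powermig pause -handle <x>
powermig resume -handle <x>
As I recall, pausing does not abandon the session; the copy simply picks up where it left off when you resume.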
To check the status of the migration, enter the command:
powermig query -handle <x>
The output gives you the percentage complete and an idea of when the synchronization will finish.
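If you are running several sessions and lose track of which handle goes with which pair of LUNs, I believe PPME also includes an info command that lists every session and its state (again, check powermig help on your version):
powermig info -all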
Once the status of the synchronization is listed as "sourceselected", it is time to finish the migration. One thing to note is that it is still possible to back out of the migration at this stage. Enter the following command to select the target LUN:
powermig selecttarget -handle <x>
At this point, the read requests are serviced by the target and the writes continue to be mirrored.
Now is the time to have the owner validate that the migration is acceptable. If performance is not as expected, you can still back out of the migration by selecting the source (selectsource) and cleaning up the migration as specified below; however, that is not what we are after, is it?! If validation is successful, enter the following command to commit the change:
powermig commit -handle <x>
Reads and writes are now serviced solely by the target LUN, and the source LUN is no longer being used.
At this point, you can no longer back out of the change. The source LUN, however, is still presented to the host and the underlying filesystem is still valid. You have two choices on how to proceed:
- Clean up the migration as documented by the PowerPath guide. This is destructive to the source LUN, as the guide says that it removes some data. What that data contains is beyond me, but the end result is that the data on the LUN is no longer accessible by any host.
powermig cleanup -handle <x>
- My preferred option is to remove the LUN from host access and then force the session cleanup. That provides the ability to mount that LUN on a different host, or back on the original host if there were a need. I would rename the LUN with the date of the migration and the name of the host, then leave it bound for a week before destroying it; call me paranoid or overly cautious.
powermig cleanup -handle <x> -force
This completes the migration.
Will this work for 2003 MSCS without the need to resignature the new disks?
Great Post
Since this migration technique is a block-level copy, it will also copy the signature. One thing to note on migrating clusters: per the EMC documentation, all cluster nodes but one need to be shut down.
Thanks for checking out the post!
Awesome! This is going to save the client a ton of headaches. They are a global 24x7 shop with no tolerance for downtime. Thanks again!
Jay
We had one of our clusters crash last night during a migration. It blue screened. Luckily, we were able to restart the node with no data loss, but I'm not sure if you have seen this before. We were pushing lots of data at full throttle, so that may have caused the issue. The only logs prior to the crash were http://kb.qlogic.com/KanisaPlatform/Publishing/492/1027_f.SAL_Public.html
Not sure if disabling SANsurfer would help.
All other migrations have been smooth.
Never had one blue screen. I've had one hang during a remote security scan, though I don't think PPME had anything to do with that. We only use Emulex HBAs, but like you said, I don't think disabling SANsurfer will do anything. Maybe slowing the PPME session down or rescheduling the migration to a time with lower IO would do it.
We have multiple LUNs that need to be migrated (same box, multiple drive letters). Can multiple migrations be done at the same time, or should we perform the operations one at a time?
You can definitely migrate multiple LUNs at the same time. In fact, I am currently migrating 13 LUNs from a Clariion CX3-80 to a 2-engine VMax. My observation is that the more LUNs being migrated, the slower it goes, even though the throttle is set at the session level. Also, the host IO will slow things down. If you can afford the time, I would lower the throttle to avoid host impact.
Thanks for the response!
I have a max of 3 LUNs that I will be migrating. Is there anything in the setup of the jobs that I need to be aware of in performing multiple migrations, or do I just set up one session and create the others when the first copy is underway?
Time isn't critical, so I'm probably just going to go with the default throttle of 2.
I know this reply is way late, mainly because I was on vacation and completely forgot to reply when I got back. My apologies.
To answer the question, though: it does not make a difference when you create the sessions. Creating the sessions is very quick. If you're anything like me, I already have the commands written in Notepad, and when the time comes to start, I just copy and paste. Generally, I've planned the migrations a couple of weeks in advance. Also, for the vast majority of the migrations I do, a throttle of 2 is what I use. I have yet to have any complaints about performance.
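To make that concrete, a prepared list for three LUNs might look something like this (the pseudo names and handle numbers are made up for the example; each setup hands back its own handle when you run it):
powermig setup -src harddisk2 -tgt harddisk6 -techtype hostcopy
powermig setup -src harddisk3 -tgt harddisk7 -techtype hostcopy
powermig setup -src harddisk4 -tgt harddisk8 -techtype hostcopy
powermig sync -handle 1
powermig sync -handle 2
powermig sync -handle 3
Each session keeps its own handle and its own throttle, so the three copies run independently; at the default throttle of 2 there is nothing else you need to add.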
Great info!! Thanks! I used these steps in Linux with the same results. Just migrated 2 LUNs without issues.
Thanks again for the info.
Awesome! (Also sorry for the late response.) We have a few Redhat servers running Oracle with ASM and I was successful in migrating that from a Clariion to a VMax without issue. In fact, it was extremely fast and had about 1TB moved in an hour. I'm glad you appreciate the info and were successful!
Can you migrate from EMC to another vendor using this tool?
As long as the third-party vendor is supported in PowerPath, you should be fine. Definitely check the HCL for PowerPath. Out of curiosity, what is the target?
Mike, great info. Any special consideration for a Microsoft Windows 2003 cluster (2-node active/passive)? Specifically the quorum drive: is it recommended to migrate the quorum drive also using powermig?
Thanks, Rajan. Per the EMC PPME documentation, they don't support migrating the quorum this way. However, I have done it successfully without issue. In fact, I moved our entire QA/Dev environment from a Clariion to a VMax using only PPME Hostcopy. I have 15 clusters in this environment. Now, I won't migrate a production cluster with PPME Hostcopy, though. The lack of redundancy is too much for me to feel comfortable with. If it's only a 2-node active/passive cluster, maybe. I, personally, would rather take some downtime and use either SANCopy or Open Replicator. It's faster and there is a controlled outage that way. But it's really all about what your business will allow you to do and how comfortable you are with the pros and cons.
Thanks for the advice. Unfortunately, the cluster is running Exchange Server, so I need to migrate it without downtime. I'm very comfortable using HOSTCOPY to migrate all data drives - we recently migrated a bunch of standalone servers that way - I was just wondering what to do with the quorum drive. If EMC does not support quorum migration, I will bring the passive node down, migrate all data drives, recreate the quorum on a new LUN, and then bring the passive node back.
Nice post. I have bookmarked you to check your new stuff.
Thanks. I should also note that this is a really old post, BUT the technique is pretty much the same. One difference in the new version, though: you can migrate MSCS with ALL nodes online now. The setup is different for clusters, but I'll write up a new post on how to do that. It worked flawlessly on my last migration. I will say that sometimes a scheduled outage is the way to go on some migrations, as in using SRDF. That 10-15 minute outage could be tolerable if the host(s) are sensitive to IO degradation.
This works great, I'm migrating 4 drives at once. We move the quorum drive like this as well with no issues.