I’m just finishing up a migration of 345 mailboxes from a hosted Zimbra platform, and 24 mailboxes from Google Apps to a single Office 365 tenancy, using CloudMigrator365 and I did the migration on VM’s running in Azure.
A cloud to cloud migration in the cloud!
So why did I use Azure instead of an on prem server?
Primarily for speed.
I had an estimated 1.4TB to move, and a fairly aggressive deadline starting out, and I figured my best bet would be to stay in the cloud, rather than bringing data down to an on prem migration server.
The spec for CloudMigrator365 is pretty low; the current documentation as of v22.214.171.124 is as follows:
- Operating system: Windows XP/Windows 7/Windows 8/Windows Server 2003/2008/2012 (clean build recommended)
- Recommended system specification: the minimum specification is flexible, but a recommended configuration is: 2.0GHz or higher Intel Core 2 (or equivalent); 4GB of RAM or more; a hard disk with a reasonable amount of free space, as only a small amount of migration data is cached.
Initially I opted for an A5 standard machine with 2 cores and 14GB of RAM running Server 2008 R2 Datacenter.
As regards bandwidth, it's phenomenal: 265Mbps upload, 444Mbps download!
It turns out I didn’t need anywhere near that, for a few reasons.
I started out pushing for 27 threads with a single migration account, as per the EWS session limits in Office 365: the default value for EWSMaxConcurrency in Exchange 2013 and Exchange Online is 27.
On larger projects MS will increase that limit, but for 350 mailboxes they told me they wouldn’t.
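That concurrency limit puts a simple ceiling on parallelism per migration account. As a sketch of the arithmetic (this helper is purely illustrative, not part of CloudMigrator365):

```python
import math

# EWSMaxConcurrency caps parallel EWS connections per account at 27 by default,
# so the number of migration (service) accounts needed for a given thread count is:
def accounts_needed(threads, ews_max_concurrency=27):
    return math.ceil(threads / ews_max_concurrency)

print(accounts_needed(27))  # 1 account covers up to 27 threads
print(accounts_needed(28))  # one thread over the limit needs a second account
```

With only 10 threads in play for this project, a single account was always enough.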
I found the CloudMigrator365 default of 10 threads worked out well in terms of CPU, it spiked, but averaged below 80%.
Increasing the number of mailboxes moved in parallel on a single server to 16 caused the CPU to stay well above 80%, hitting 99% to 100%, which means the server is overworked.
I did notice the Memory usage stayed below 3GB.
One of the advantages of Azure is that you pay for compute time and data egress, so one server running for a time period X costs the same as two servers of the same spec running for X/2.
I scaled out and ran two additional servers, but this time I went for an A2 Standard, and an A2 Basic, both with 2 CPU cores but only 3.5GB of RAM, also running Server 2008 R2 Datacenter.
True to prediction these also stayed below 3GB of RAM, and an A2 is much cheaper than an A5, working out at about £.05 per hour instead of £.19.
For the limited time I ran it my A5 cost me £25.91 for 128.5 hours, which is £.20 per hour, where my A2 basic has cost me £36.79 for 813.9 hours, or £.045 per hour.
An A2 is quarter the price of an A5 for compute but just as good in this situation for CloudMigrator365 running 10 threads.
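The hourly rates above fall straight out of the billed figures. A quick sanity check, using the numbers quoted from my bills:

```python
# Figures from the post: A5 cost £25.91 over 128.5 hours,
# A2 Basic cost £36.79 over 813.9 hours.
a5_cost, a5_hours = 25.91, 128.5
a2_cost, a2_hours = 36.79, 813.9

a5_rate = a5_cost / a5_hours   # ≈ £0.20/hr
a2_rate = a2_cost / a2_hours   # ≈ £0.045/hr

print(f"A5: £{a5_rate:.3f}/hr, A2 Basic: £{a2_rate:.3f}/hr")
print(f"The A5 costs {a5_rate / a2_rate:.1f}x the A2 Basic per hour")
```

Roughly a 4.5x difference per hour, for identical migration throughput at 10 threads.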
At 20 threads (2 servers in parallel) I very quickly got a call from my customer saying that users on the source platform were experiencing performance issues, so I scaled back, and we never ran more than ten threads on this migration.
Again, 10 threads for time period X or 27 threads over time period X/2.7 is the same cost in terms of compute and data egress (you only pay for data leaving Azure; you don't pay for data into Azure).
So far my “estimated” 1.4TB of customer data has resulted in 1.84TB of actual data, as MS obviously count all traffic (headers, requests and acknowledgments) as well as the actual payload.
Here's an example from December 22, where my data out is 10% larger than my data in, and both are considerably larger than the totals displayed by the NICs on the VM itself.
The migration throughput was nominal, my bottleneck was Office 365.
At peak I hit close to 9GB/Hr, which, using my magic number of 450MB per hour on a 1Mbps line, would have needed a 20Mbps symmetric line to achieve in the real world, so Azure has helped out here.
The Azure Dashboard is a really nice feature, and lets you get a view of what is happening in the background.
When Network In roughly equals Network Out, we're moving data for the first time. No surprises here really. There may be more data out than in, depending on what conversion has to take place for the target platform.
The 4GB in and out is about average for this project.
When data in is high, but data out is low, we’re reading data but not writing, which implies (with CloudMigrator365) that the local migration history is either missing, or we’re moving a mailbox on this server that was previously moved on a different server.
We are reading every item from the source mailbox (which with Azure is thankfully free) but then not writing these items to the target mailbox because they already exist.
I actually verified this fact by finding and deleting the history for a test user, and the graph above is her re-migration.
For a single thread I hit close to 6.5GB/Hr out of the source Zimbra system, but didn’t have to write anything much to Exchange.
When both lines are flat, we have the local migration history, so the delta has nothing to do.
I very rarely got more than 4GB/Hr into Office 365. Over a sustained period during the bulk data migration I managed nearly 9, but the average we saw and used for planning was 4, and this was across 10 threads.
The fact that I had 265Mbps upload and could get 6.5GB/Hr out of Zimbra for a single mailbox implies the bottleneck here is Office 365.
Unfortunately I wasn't able to ping, as ping and tracert are disabled in Azure, but I would have loved to see the latency and do some calculations on maximum throughput.
Total Azure cost on this project so far is £210.37, which divided by 350 mailboxes is £.60 per head.
Admittedly on this one it's a cost I'm picking up, because it was my decision to run this project in Azure, but the experience gained makes the cost very worthwhile.