Sometimes, even the best planning needs to be tweaked on the fly.
In this case, what looked like a 'simple' 9 TB, six-server, VMware-to-Hyper-V migration morphed into an "opportunity to get creative."
The problem: replace aging server hardware and move to a more robust disaster recovery architecture with a shorter time to recovery.
On the surface, this was a slam dunk: migrate the VMware virtual machines to new hardware running Hyper-V, set up Hyper-V replication, and configure a second Exchange server for a DAG (Database Availability Group).
While the architecture and planning seemed solid (we had done plenty of migrations like this), what we didn't expect was the limited throughput of the existing network, the sluggishness of the older server hardware, and file paths that blew past Windows' path-length limit (didn't see that coming!).
The first four machines migrated without a hitch (although slower than expected), inside their outage windows, and ran great on the new Hyper-V platform.
As for the migration speed, it turned out the hardware drivers on the old VMware hosts (Dell servers) were the out-of-the-box VMware versions, not the Dell-optimized ones. The client never noticed this when opening a single file or checking email on the Exchange server.
We, on the other hand, were trying to pull multiple terabytes of data off the drives (old controller driver) and across the network (old network drivers).
The first big gotcha was existing corruption in the VMware VMDK files. Every time we attempted to convert a copied-out VMDK file to VHD(X), it failed. We tried all the different ways of copying it out, and multiple converters. No luck.
Here is the first (and unsuccessful) clever part: we built a temporary virtual Windows Server 2012 R2 machine on the new Hyper-V host and joined it to the AD domain. Next, we started a Robocopy from the old virtual machine's D: drive to the temporary machine's D: drive; basically the same setup as any physical-to-physical migration.
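For reference, that first seeding pass looks something like the command below, run from an elevated prompt on the temporary server (the server name, paths, thread count, and log location are placeholders, not the exact command we ran):

    # Seed copy from the old VM's administrative share to the temporary server's D: drive
    # /E = include subfolders (even empty ones), /COPYALL = copy data, attributes, and security,
    # /R:1 /W:1 = don't hang forever on locked files, /MT:16 = 16 copy threads
    robocopy '\\OLDSERVER\D$' D:\ /E /COPYALL /R:1 /W:1 /MT:16 /LOG:C:\Logs\d-seed.log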
Sounds like a no-brainer, right?
Not quite. The next issue was long file names. Windows limits a full path to 260 characters (MAX_PATH), and any single file or folder name to 255. Users can sidestep this by mapping a drive letter to a folder deeper in the tree (not recommended, but it works); Robocopy, however, chokes on it, and, as it turns out, so did our usual assortment of VMDK-to-VHD(X) migration tools!
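If you want to find these landmines before the cutover, a quick scan along these lines helps (a rough sketch, with placeholder paths; note that on 2012 R2-era PowerShell, Get-ChildItem itself tends to error on the very paths you're hunting, so the captured errors are part of the answer):

    # Rough sketch: report paths on D: that exceed the classic path limit
    Get-ChildItem -Path D:\ -Recurse -ErrorAction SilentlyContinue -ErrorVariable pathErrors |
        Where-Object { $_.FullName.Length -gt 255 } |
        Select-Object -ExpandProperty FullName |
        Out-File C:\Logs\long-paths.txt

    # Paths too long for Get-ChildItem to even enumerate show up here
    $pathErrors | Out-File C:\Logs\long-path-errors.txt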
Here is the second, and this time successful, clever part. Robocopy clearly wasn't going to work on the long file names. So we used the backup software to back up the old D: drive and restore it to the temporary server (on a Wednesday). None of the files the client was actively updating were past the path limit, so we ran Robocopy every night to sync the changes.
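The nightly sync was nothing fancy, just a mirror pass over the share, roughly like this (again, the server name and paths are illustrative):

    # Nightly delta sync from the old server to the temporary server's restored D: drive
    # /MIR mirrors changes and deletions; /LOG+: appends to the running log
    robocopy '\\OLDSERVER\D$' D:\ /MIR /R:1 /W:1 /MT:16 /LOG+:C:\Logs\d-nightly.log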
Then on Friday night, for the migration, we ran one last Robocopy to get a solid D: drive. All that was left was to shut down the old server, copy out the VMware VMDK file for the C: drive, convert it to a VHDX file, and create a new virtual machine on the Hyper-V host with it.
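The write-up above doesn't name the converter that finally handled the C: drive, but as one illustration of how that step can be scripted, the Microsoft Virtual Machine Converter PowerShell module will turn a VMDK into a VHDX, and Hyper-V's own module builds the VM around it (the paths, VM name, memory size, and switch name below are placeholders):

    # Convert the copied-out VMDK to a dynamic VHDX (MVMC PowerShell module)
    Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'
    ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath 'E:\Export\SERVER01-c.vmdk' `
        -DestinationLiteralPath 'D:\VMs\SERVER01-c.vhdx' `
        -VhdType DynamicHardDisk -VhdFormat Vhdx

    # Create the replacement VM on the Hyper-V host, booting from the converted C: drive
    New-VM -Name 'SERVER01' -Generation 1 -MemoryStartupBytes 16GB `
        -VHDPath 'D:\VMs\SERVER01-c.vhdx' -SwitchName 'LAN'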
We started the virtual server in its new home on the Hyper-V host. After everything looked good, we shut it down, along with the temporary server, removed the temporary server's D: drive, and attached it to the migrated server. On the next boot, the server saw its D: drive and purred like a kitten!
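Moving that data disk between VMs is just a pair of Hyper-V cmdlets once both machines are off (the VM names, controller positions, and VHDX path here are placeholders):

    # Detach the D: data disk from the temporary server...
    Remove-VMHardDiskDrive -VMName 'TEMP-2012R2' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1

    # ...and attach the same VHDX to the migrated server
    Add-VMHardDiskDrive -VMName 'SERVER01' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 `
        -Path 'D:\VMs\TEMP-d.vhdx'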
Now out of the woods, we configured the Hyper-V hosts (three in total) to replicate with each other as part of the new disaster recovery plan (plain vanilla certificate-based replication).
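For the curious, that certificate-based replication comes down to allowing inbound replication on each host and then pairing each VM with its replica host, roughly like this (the thumbprints, host names, port, and storage path are placeholders):

    # On each Hyper-V host: accept inbound replication over HTTPS using a certificate
    Set-VMReplicationServer -ReplicationEnabled $true `
        -AllowedAuthenticationType Certificate `
        -CertificateAuthenticationPort 443 `
        -CertificateThumbprint '<replica host certificate thumbprint>' `
        -ReplicationAllowedFromAnyServer $true `
        -DefaultStorageLocation 'D:\Replica'

    # Per VM: replicate to the partner host, then kick off the initial copy
    Enable-VMReplication -VMName 'SERVER01' -ReplicaServerName 'HV02.domain.local' `
        -ReplicaServerPort 443 -AuthenticationType Certificate `
        -CertificateThumbprint '<primary host certificate thumbprint>'
    Start-VMInitialReplication -VMName 'SERVER01'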
With the new solution deployed, network and server response times are dramatically faster, the disaster recovery window is much smaller (about 10 minutes), and the server footprint shrank from 16U to 6U.
Not the easiest migration we've done, but certainly one of the most educational!