Let’s say you have a VMware infrastructure with a bunch of HP DL380 G8 servers. All the ESX hosts and VMs are in one cluster, including a vCenter Server Appliance (vCSA). Then, for whatever reason, you decide to introduce a few HP DL380 G7 servers into this environment. After you add the G7 servers to the cluster you find that you are unable to vMotion VMs to these older servers, getting the error below.

[Image: vMotion CPU compatibility error]

This is because certain CPU features available in the Sandy Bridge architecture are not supported by the G7s’ Westmere CPUs. As a result you will not be able to vMotion a VM from a G8 to a G7 server (although it will work the other way around), which is far from ideal in an environment where everything is supposed to be standardised and easy to manage.

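If you want to confirm the mismatch before doing anything drastic, the vSphere API exposes each host’s highest supported EVC baseline as maxEVCModeKey on the host summary. Here is a minimal pyVmomi sketch; the hostname and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience: skips certificate checks
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], recursive=True)
    for host in view.view:
        # Prints e.g. 'intel-westmere' for a G7 and 'intel-sandybridge' for a G8
        print(host.name, host.summary.maxEVCModeKey)
    view.Destroy()
finally:
    Disconnect(si)
```
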
There are two solutions to the above, but both will present you with the same catch.

  1. Create a new cluster and set EVC to Westmere Generation mode. You will then need to shut down every VM in the original cluster, move it to the EVC cluster and power it up again. You will also need to move the G8 servers over to this cluster once they are no longer required in the original cluster.

    Problem: If you shut down the vCSA you obviously won’t be able to move it to the EVC cluster since you won’t be able to manage the cluster.

  2. Shut down all the VMs in the original cluster, set EVC to Westmere Generation mode and power the VMs back on (the API equivalent of this EVC change is sketched below).

    Problem: Again, you won’t be able to manage the cluster with the vCSA shut down, so you won’t be able to change the EVC mode.

[Image: cluster EVC mode setting]

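For reference, the EVC change in option 2 is also exposed through the vSphere API. The sketch below assumes a reachable vCenter (exactly what you don’t have once the vCSA is down) and uses placeholder names throughout:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], recursive=True)
    cluster = next(c for c in view.view if c.name == "Production")
    view.Destroy()
    # Every VM in the cluster must be powered off before lowering the baseline
    evc_manager = cluster.EvcManager()
    WaitForTask(evc_manager.ConfigureEvcMode_Task(evcModeKey="intel-westmere"))
finally:
    Disconnect(si)
```
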
Overcoming this problem is not as difficult as one might think, but that’s not the reason I wrote this post.

  1. Shut down the vCSA.
  2. Connect directly to the host that the vCSA was running on.
  3. Right-click the VM and remove it from the inventory.
  4. Connect to a G7 server.
  5. Add the vCSA to the inventory.
  6. Start it up and Bob’s your uncle.

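If you’d rather script the remove/re-add dance than click through two host connections, the same steps can be done with pyVmomi connected straight to the hosts, no vCenter involved. The hostnames, credentials, VM name and datastore path below are all placeholders, and the vCSA must already be powered off:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()

def find_vm(si, name):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], recursive=True)
    vm = next((v for v in view.view if v.name == name), None)
    view.Destroy()
    return vm

# Steps 2-3: remove the powered-off vCSA from the G8 host's inventory
g8 = SmartConnect(host="g8-esx01.example.local", user="root",
                  pwd="password", sslContext=ctx)
find_vm(g8, "vcsa").UnregisterVM()
Disconnect(g8)

# Steps 4-6: register it on a G7 host and power it on
g7 = SmartConnect(host="g7-esx01.example.local", user="root",
                  pwd="password", sslContext=ctx)
datacenter = g7.RetrieveContent().rootFolder.childEntity[0]  # 'ha-datacenter' on a standalone host
compute = datacenter.hostFolder.childEntity[0]               # the host's compute resource
WaitForTask(datacenter.vmFolder.RegisterVM_Task(
    path="[shared-datastore] vcsa/vcsa.vmx", name="vcsa",
    asTemplate=False, pool=compute.resourcePool, host=compute.host[0]))
WaitForTask(find_vm(g7, "vcsa").PowerOnVM_Task())
Disconnect(g7)
```
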
If your vCSA wasn’t running on a vDS, everything will now be peachy; otherwise read on. If you were using a vDS and missed the warning in VMware KB 2058684, which states, “If the vCenter Server virtual machine is running on a Virtual Distributed Switch (vDS), move the vCenter Server virtual machine to a standard vSwitch before proceeding with these steps.”, you will notice that the network adapter on the vCSA is not connected. You will also not be able to tick ‘Connected’ if you try to put it back onto the vDS (since you’re connected directly to an ESX host, rather than to vCenter).

I double-checked with VMware support prior to carrying out this procedure, and was told that it should be fine to go ahead without moving the vCSA onto a standard vSwitch first (I had no spare NICs on the G8s and didn’t want to remove any NICs from the vDS). It turns out the support tech was wrong and I should have just followed the KB to begin with. The fix was simple enough.

  1. Plug a network cable into a spare NIC on the G7 where the vCSA is now running.
  2. Create a standard vSwitch.
  3. Put the vCSA onto this vSwitch and verify connectivity (you might need to reboot it); a scripted version of this step is sketched after the list.
  4. Connect the vSphere client to the vCSA and change the network adapter to use the vDS once again.
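
Step 3 can also be done programmatically while connected directly to the G7 host. This sketch assumes a port group called 'vCSA-Temp' on the new standard vSwitch and a VM named 'vcsa', both placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="g7-esx01.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], recursive=True)
    vm = next(v for v in view.view if v.name == "vcsa")
    view.Destroy()
    nic = next(d for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualEthernetCard))
    # Rewire the adapter to the standard port group and force it connected
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        deviceName="vCSA-Temp")
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        connected=True, startConnected=True)
    spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
        device=nic, operation=vim.vm.device.VirtualDeviceSpec.Operation.edit)])
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
finally:
    Disconnect(si)
```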

NOTE: Running those G8 servers in Westmere Generation EVC mode means you will lose some features of the Sandy Bridge architecture, but that’s a discussion for another day.