It’s been well documented that there are various problems with Broadcom network drivers in Hyper-V deployments. Some examples:
Microsoft KB2902166 – Recommendation to disable VMQ with Broadcom NICs - http://support.microsoft.com/kb/2902166
Guest Clustering Issues - http://www.hyper-v.nu/archives/pnoorderijk/2013/06/virtual-guest-cluster-and-nic-teaming-in-the-host-results-in-an-evicted-cluster-node-broadcom-emulex/
Guest Clustering Issues - http://systemscentre.blogspot.co.uk/2013/05/problems-clustering-virtual-machines-on.html
Updated Dell Driver for Broadcom NICs - http://datacenter-flo.de/?tag=broadcom
Various other posts that a simple Bing search will find you - http://www.bing.com/search?q=broadcom+hyper-v&qs=n&form=QBLH&filt=all&pq=broadcom+hyper-v&sc=3-16&sp=-1&sk=
I was hoping that with the release of Windows Server 2012 R2 these issues might be a thing of the past, and that the fixes introduced in the latest 2012 RTM drivers had carried across.
How wrong could I be…
After deploying a two-node 2012 R2 Hyper-V cluster I immediately noticed slow network performance, both when deploying new VMs and when copying files between guest virtual machines.
To confuse matters further, the problems were most pronounced when copying to the host that wasn’t the CSV owner, or to VMs running on that host. This originally sent me down the wrong path.
So, after trying multiple things, I came full circle back to retesting VMQ and the Broadcom settings.
At the moment it looks like the problem that I (and others) had experienced in the past with having VMQ enabled on Broadcom adapters is present with the inbox driver in R2 (version 188.8.131.52).
As well as enabling/disabling VMQ, I also stepped the driver back to the previous 2012 RTM version (184.108.40.206), and it works fine with VMQ enabled.
I can now even swap between drivers without a reboot and demonstrate the speed impact.
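For reference, the installed driver version and the per-adapter VMQ state can both be checked with the in-box NetAdapter cmdlets on 2012 R2. This is a quick sketch; the adapter name "NIC1" is a placeholder, not one from my environment:

```shell
# Show the driver version and date for a given physical adapter
# ("NIC1" is an example name - list yours with Get-NetAdapter)
Get-NetAdapter -Name "NIC1" |
    Format-List Name, InterfaceDescription, DriverVersion, DriverDate

# Show whether VMQ is currently enabled on each adapter
Get-NetAdapterVmq |
    Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors
```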
With VMQ enabled, poor transfer speed between VMs:
With VMQ disabled, consistent (and better) transfer speeds regardless of VM/node placement (Live Migration while copying):
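If you need the workaround, VMQ can be toggled per adapter without uninstalling or swapping the driver. Again a sketch, assuming a placeholder adapter named "NIC1":

```shell
# Disable VMQ on an affected Broadcom adapter as a workaround
# ("NIC1" is a placeholder; wildcards such as "NIC*" also work)
Disable-NetAdapterVmq -Name "NIC1"

# Re-enable VMQ once a fixed driver is in place
Enable-NetAdapterVmq -Name "NIC1"
```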
In the environment I was testing, I have Broadcom NetXtreme B5720 quad-port NICs in my blades, and all firmware is up to date.
Obviously I don’t really want to miss out on the VMQ features, so for a while I ran the down-level driver, hoping that a fix would appear.
Well, Broadcom have recently released an updated driver directly on their site.
This driver is dated 5th November 2013, version 220.127.116.11.
I’ve flattened my environment and let VMM install the updated driver during bare metal deployment and, touch wood, so far all VMQ related speed issues are fixed.
Something to bear in mind: the in-box Broadcom driver in R2 is broken, while the current 18.104.22.168 driver direct from Broadcom works.