In Part 1, I laid out a brief summary of VMQ and an example of the configuration that is appropriate for our four-socket, ten-core Hyper-V host. Here in Part 2, I’ll unpack the issue we’re facing in spite of our textbook configuration.
Following the guidance in VMQ Deep Dive, Part 2, and using the commands from Part 1 of this blog, we find the queue list below. The three line items are the default queues given to the Hyper-V host (HV01) on its physical ports and the related logical switch. The VMs on this host should appear in that list as well, but they don't.
The Windows System event log shows the following error, which appears to be behind the queuing failure.
The oddity is that we actually get better results when the interfaces are configured with overlapping processor sets starting at base processor zero, which is the default, even though that does not conform to Sum-of-Queues mode.
As you can see, VMs are being assigned queues in this setup. However, we're still seeing the OID failure from the event log on some adapters. It also seems curious that, although the team should be active-active, the VMQs all stick to a single interface.
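To make the overlapping-versus-disjoint distinction concrete, here is a rough sketch of the Sum-of-Queues arithmetic in Python. The function name, the per-NIC queue counts, and the choice to reserve core 0 for the default queue are my own illustrative assumptions, not vendor guidance; the real configuration is done with the processor settings shown in Part 1.

```python
# Illustrative sketch only: lay out non-overlapping VMQ processor ranges
# for teamed NICs under Sum-of-Queues mode, where each team member needs
# a disjoint processor set so usable queues sum across members. The
# overlapping default (every NIC starting at base processor 0) is what
# Sum-of-Queues mode is meant to avoid.

def sum_of_queues_layout(nic_queue_counts, total_cores, reserve_core0=True):
    """Assign each NIC a disjoint block of cores sized to its queue count."""
    base = 1 if reserve_core0 else 0  # assumption: leave core 0 to the default queue
    layout = {}
    for nic, queues in nic_queue_counts.items():
        if base + queues > total_cores:
            raise ValueError(f"not enough cores for {nic}")
        layout[nic] = {"BaseProcessorNumber": base, "MaxProcessors": queues}
        base += queues  # next NIC starts where this one's range ends
    return layout

# Two teamed ports on our 4-socket, 10-core host (40 logical processors),
# each assumed to expose 16 queues:
print(sum_of_queues_layout({"NIC1": 16, "NIC2": 16}, total_cores=40))
# {'NIC1': {'BaseProcessorNumber': 1, 'MaxProcessors': 16},
#  'NIC2': {'BaseProcessorNumber': 17, 'MaxProcessors': 16}}
```

The point of the sketch is only that the two ranges (1-16 and 17-32) never overlap; with the base-processor-zero default, both NICs would contend for the same cores, which is the configuration that, oddly, behaves better for us here.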
Our environment consists of the following hardware and drivers:
- Dell PowerEdge R820 (or other 12th gen models) w/ latest firmware updates
- Intel I350-T quad-port onboard NICs (two ports used for the management team)
- QLogic QLE8262 Converged Network Adapters (two dual-port cards, one port used from each card)
- Network drivers: started w/ Dell’s; running newest from QLogic, 5.3.12.0925
- FCoE storage driver: started w/ Dell’s; running newest from QLogic, 184.108.40.206
- Cisco Nexus 5548UP switches
At this point, we have cases open with both Microsoft and Dell to track down the problem. It looks to be the QLogic driver, but we’ll see. More as we have it…