EMC XtremIO and VMware EFI

After a couple of weeks of troubleshooting by EMC/XtremIO and VMware engineers, the root cause was determined to be EFI boot handing a 7MB block off to the XtremIO array, which filled the queue and never cleared because the array was waiting for more data to complete the communication (i.e. a deadlock). This appears to happen only with EFI-firmware VMs (tested with Windows 2012 and Windows 2012 R2), and the issue is on the XtremIO end.

The good news is that the problem can be mitigated by adjusting the Disk.DiskMaxIOSize setting on each ESXi host from the default of 32MB (32768) down to 4MB (4096). You can find this in vCenter > Host > Configuration > Advanced Settings (the bottom one) > Disk > Disk.DiskMaxIOSize. In the meantime, the XtremIO team is working on a permanent fix; the workaround can be implemented hot with no impact to active operations (potentially a minor increase in host CPU load as ESXi breaks >4MB I/Os down into 4MB chunks).
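If you'd rather script the change across hosts than click through vCenter, the same setting can be applied from the ESXi shell (or over SSH) with esxcli. A sketch, assuming ESXi 5.x or later where the `esxcli system settings advanced` namespace is available:

```shell
# Check the current value (reported in KB; the default is 32768 = 32MB)
esxcli system settings advanced list -o /Disk/DiskMaxIOSize

# Drop the maximum I/O size to 4MB (4096 KB); takes effect immediately,
# no reboot or maintenance mode required
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096
```

Remember this is per host, so it needs to be run on every ESXi host attached to the array (or pushed out via PowerCLI's Set-AdvancedSetting if you manage at scale).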

On the other points of latency spikes and small-block I/O, these have been corrected by installing Service Pack 2 for the XIOS 2.2 code. The upgrade includes new InfiniBand firmware, which addresses the cause of the latency (problems with active/active controller communication that cascaded out to host I/O processing), along with tweaks to alert thresholds and definitions. The latter relegates the ~20% small-block I/O alert to an informational event, cleaning up the alerts dashboard. The net result of SP2: latency normally <0.5ms, “spiking” to at most 5ms (very momentarily), and an empty alerts pane.

Many thanks to the EMC/XtremIO team and VMware/Darius of the EFI group.

We’re still working on the capacity points from the prior post, but that’s not a technical problem, and word has it that compression is on the mid-term road map, which will even the feature count between XtremIO and Pure…
