Tag: VMware

I am so excited to launch this post and give two enthusiastic thumbs up to VMware on the second general session! They brought the fire with the speakers, the concrete concepts & tech, and the Pat Gelsinger finale. Way to go, VMware!

Truly, from the depths, thank you to Sanjay, Martin, and Pat for bringing the message back to the center. The core of VMware’s passion and strategy shone brightly this morning. While Horizon has fallen short of inspiring me in past VMworld events, the expanding device and OS support finally makes it something I can see becoming a realistic value-add in my organization. Add to that AppVolumes and NSX underpinning it all, and you have a winning presentation.

CEO Pat Gelsinger took the session and overall event out to the 30,000-foot view–or rather, the stratosphere–without becoming vague, salesy, or irrelevant. Pat laid out the history and foundation of IT and the internet, beginning in 1995, and then cast vision forward to today and beyond. His five imperatives hit the heart of business with technical excellence as only a visionary can.

Hit up the notes below and catch the video when you can. This is what VMworld is all about.

Technology Virtualization

I’ve been meaning to write this post for a couple of weeks now, and Virgin America is giving me that opportunity with an hour-long departure delay (silver lining? ;).

So much of the tech talk today centers on specs and numbers, but behind every product are people–engineers, executives, and various support staff. These folks have an amazing power to influence the success of their products and services for good and ill.

I still recall a support situation in 1998 or maybe ’99 and a Compaq laptop that had recurring display issues. On tech specs alone, the product would have earned a scathing review (especially since I wasn’t the only one facing the exact same recurring failure). However, Compaq Support, all the way up to the VP, engaged, rectified the situation (with a replacement laptop), and topped it off with a duffel bag and a personal note of apology. Doing it right etches it into people’s memories.

Today I’d like to highlight a few folks and groups that have stood out to me recently and reflect well on their products and organizations. It’s a far-from-exhaustive, unordered list that centers on those I’ve not mentioned in previous posts.


When I wrote the “Doing It Again” posts about XtremIO and Pure Storage, I didn’t actually think I would get that chance. EMC’s concessions around our initial XtremIO purchase made our next site replacement seem like a foregone conclusion. However, when the chips were counted, the hand went to another player: Pure Storage.

Last Friday, the Pure hardware arrived. Unpacking and racking was simple–no cage nuts needed, and the only necessary tool (a screwdriver) is included in the “Open Me First” box. The same instructions that I respected during our 2013 POC led the way. I recall that back then, the QA check on their readability was the CEO’s 12-year-old son. If he could follow them, they were customer-ready. Unconventional but effective.

This morning, the Pure SE (@purebp) and I finished the cabling and boot-up config. Three IP addresses, two copper switch ports, and four FC interfaces. The longest part was my perfectionistic cable runs. What can I say? The only spaghetti I like is the edible Italian kind. Fiber and copper should be neat and clean.

Storage Technology Virtualization

If you use a backup product that leverages VMware’s changed block tracking (CBT), you have probably also found cases when CBT wasn’t the right fit. In the world of EMC Avamar, I’ve found that VM image-level backups will slow to a crawl if enough blocks change but don’t quite reach the threshold (25%) necessary to automatically revert to a full backup.

When I created a case with EMC Support, they dug into the logs and then pointed to best practices that recommend disabling CBT when more than 10,000 blocks regularly change between backups. The next problem I hit was that the top search result and KB for enabling/disabling CBT was a VMware post stating that step 1 was to power off the VM. Backups are running long and the next maintenance window isn’t for two weeks. Hmm…

  1. Avamar method
  2. PowerCLI method
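For reference, the PowerCLI method can be sketched roughly as follows. This is a hedged sketch, not the post’s exact script: the VM name is a placeholder, and it assumes an active Connect-VIServer session. The trick that avoids a power-off is a ReconfigVM call followed by a snapshot create/remove cycle, which forces the CBT change to take effect on a running VM:

```powershell
# Sketch: disable CBT on a running VM without a power cycle.
# Assumes an existing Connect-VIServer session; VM name "DB01" is hypothetical.
$vm = Get-VM -Name "DB01"

# Build a reconfigure spec that turns off changed block tracking
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.ChangeTrackingEnabled = $false
$vm.ExtensionData.ReconfigVM($spec)

# Creating and removing a snapshot stuns/unstuns the VM's disks,
# which applies the new CBT setting while the VM stays powered on
New-Snapshot -VM $vm -Name "cbt-toggle" | Remove-Snapshot -Confirm:$false
```

Flipping `$spec.ChangeTrackingEnabled` to `$true` and repeating the snapshot cycle re-enables CBT the same way, once the change-rate problem subsides.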


Last week I ran across a tweet talking about a VMware Labs fling that introduces ESXtop statistics as a plugin into the vSphere Web Client. If you’re not familiar with “flings”, they are experimental tools made by VMware engineers and shared with the community. Anyways, this fling jumped on my list immediately.

Download ESXtopNGC Plugin: https://labs.vmware.com/flings/esxtopngc-plugin

The first thing you might notice is the System Requirements’ sole item: “vCenter Server Appliance 5.5”. Hmm. I run vCenter Server on Windows since Update Manager still requires it and I don’t see the value of having both the vCSA and a Windows VM, as opposed to just one Windows VM. A few comments quickly came in, though, confirming that it works just fine as a plugin on Windows vCenter, too.

Here’s how to install it:

1. Download ESXtopNGC (“Agree & Download” in the top left)



Tuesday, October 7, was a big day for me. After searching for more than three months for the cause of a repeated storage connectivity failure, I finally found a chunk of definitive data. The scientific method would be proud–I had a hypothesis, a consistently reproducible test, and a clear finding for a proposition that had hung in the ether unanswered for two months.

My environment had never seemed eccentric or exceptional until EMC, VMware, and I were unable to explain why our ESXi hosts could not sustain a storage controller failover (June). It was a “non-disruptive update” sans the “non-”. The array, though, reported no internal issues. The VMs and hosts depending on its disks didn’t agree.

As with any troubleshooting, a key follow-up is being able to reproduce the issue and to gather sufficient logs when you do, so that yet another downtime event isn’t necessary. We achieved the first part (repro) with ease, but came up short on analytical data to explain why (August). Since this was a production environment, repeated hard crashes of database servers weren’t in the cards.

The other participant organizations in this Easter egg hunt were suspicious of the QLogic 8262 Converged Network Adapter firmware as the culprit, apparently after receiving indications to that effect from QLogic. As that data came second-hand, I can’t say whether that was a guess or a hard-evidence-based hypothesis. Our CNAs were running the latest available from Dell’s FTP update site (via the Lifecycle Controller), but that repository stays a few revisions behind for some unknown yet intentional reason (ask Dell).


Scripting and automation are, ashamedly, new territory for me. I’ve heard enough clarion calls to grow and develop personally and professionally, though, that I know I have to gain ground here. Hopefully this is the first step in building my knowledge base of such tools.

In this entry, I need to solve for two configuration tasks.


First, I am concluding an evaluation of Splunk and need to reset my vSphere ESXi 5.5 hosts’ syslog global log hosts to only our existing syslog server. My inclination was to click through the vSphere Client and change it host by host, as it would have taken less time than it has to write the words in the post so far. However, as I am in search of a new syslog solution, possibly VMware LogInsight, I know I will need to do this again.
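A hedged PowerCLI sketch of that first task follows (the collector hostname and port are placeholders, and it assumes an existing Connect-VIServer session). It sets the `Syslog.global.logHost` advanced setting on every host in one pass instead of clicking host by host:

```powershell
# Sketch: point every ESXi host's syslog at a single collector.
# "syslog01.example.com" is a placeholder; assumes Connect-VIServer already run.
Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name "Syslog.global.logHost" |
        Set-AdvancedSetting -Value "udp://syslog01.example.com:514" -Confirm:$false
}
```

Because the target value lives in one variable-free pipeline, rerunning it later for a new syslog solution (LogInsight or otherwise) is just a matter of changing the `-Value` string.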


Closing the Cloud Skills Gap: Scott Lowe (@scott_lowe, blog.scottlowe.org)


  • 50% of businesses w/ cloud as a high priority
  • Insufficient candidates to fill openings
  • Cloud brings new set of skills unneeded in past (Curtis Robinson, IDC)

NIST Definition of Cloud Computing

  • On-demand self-service <- if you are doing something to allocate service, you aren’t doing cloud
  • Resource pooling <- if no network virtualization, you aren’t doing cloud
  • Rapid elasticity <- scale up AND scale down, often harder on the latter
  • Measured service <- assess what’s being used
  • Broad network access <- ubiquitous, access anywhere

Top Five Job Skills Needed (ranked by importance, from IDC whitepaper)

  1. Risk management
  2. IT service management
  3. Project/program management
  4. Business-IT alignment
  5. Technical skills in cloud implementation <- note that this (tech) is last


VMworld Summary:

  • Need to prioritize delivery of applications to the users
  • Break down the rigidity of infrastructure to become adaptable
  • Success comes through being thoughtful, decisive, and bold

Today’s Silos:

  • Traditional vs cloud
  • IT vs Dev
  • On-premise vs Off-premise
  • Safe/secure/compliant vs instant/elastic/self-service
  • Own assets vs leverage others’ assets (example: Uber)


  • SDDC
    • Virtualize everything you can
    • Compute: vSphere
    • Network: NSX
    • Storage: VSAN & vVol
    • Mgmt: vRealize
  • Hybrid-cloud
  • EUC


When we started our initial foray into the all-flash array space, we had to put on the brakes when the “best practice” recommendations started flying from the SEs and guides. In a perfect world, we’d be entirely on the new array (Pure Storage was first), but migration is a necessary process. We also wanted a clear path back if the POCs failed. The recommended IOPS count before changing paths with Round-Robin native multipathing (NMP) was one of those settings.

From the EMC XtremIO Storage Array User Guide 2.4:

For best performance, it is recommended to do the following:

  • Set the native round robin path selection policy on XtremIO volumes presented to the ESX host.
  • Set the vSphere NMP Round Robin path switching frequency to XtremIO volumes from the default value (1000 I/O packets) to 1.

These settings ensure optimal distribution and availability of load between I/O paths to the XtremIO storage.

I never pursued that path to see if HP 3PAR would tolerate it, since other settings were clearly incompatible, but apparently HP came to their own realization on the matter. That said, please take caution in environments running more than just these two arrays, and watch out for the other “best practices” for all-flash arrays. Setting the queue depth to the max (256) or raising concurrent operations to 64 will likely overwhelm non-flash arrays or cause I/O loss when they are under pressure.
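Applying the IOPS=1 recommendation across hosts can be sketched in PowerCLI roughly as follows. This is a sketch under assumptions: the vendor string match ("XtremIO") is how I’d scope it to only that array’s devices, and it relies on a PowerCLI version that supports `Get-EsxCli -V2`:

```powershell
# Sketch: set Round Robin IOPS=1 only on XtremIO-backed devices.
# Assumes Connect-VIServer and a PowerCLI version with Get-EsxCli -V2.
foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # Match devices by the array vendor string so other arrays keep defaults
    Get-ScsiLun -VMHost $vmhost -LunType disk |
        Where-Object { $_.Vendor -eq "XtremIO" } |
        ForEach-Object {
            $setArgs = $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.CreateArgs()
            $setArgs.device = $_.CanonicalName
            $setArgs.type   = "iops"
            $setArgs.iops   = 1
            $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.Invoke($setArgs)
        }
}
```

Scoping by vendor is the point here: it keeps the aggressive path-switching frequency off the arrays (3PAR or otherwise) that never asked for it.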
