Tag: VMware

While deploying vRealize (formerly vCenter) Infrastructure Navigator (VIN) yesterday, I ran into an access error that wasn’t at all pleasant.

Access failed. An unknown VM access error has occurred.

I had deployed the virtual appliance per the 5.8.4 documentation on pubs.vmware.com, and had specifically created a Virtual Machine Access role as defined. I set it in Global Permissions for all children and verified that it propagated to the VMs that reported this error (all of them).

Searching VMware KBs and Google for a resolution proved mostly fruitless. I finally came across Scott Norris’s post about resetting the VIN database, which gave me the nugget to resolve my issue. As I look at it now, I’m not quite sure why his post pointed me to the answer, but it was the only one out there with exactly the same error–all others were about “discovery errors”. If what I provide below doesn’t solve your issue, check out Scott’s “reboot” option for a more comprehensive refresh.

So what was the problem/answer? DNS.

When I deployed the OVA and reached the field for comma-separated DNS servers, I listed mine–all four of them–like this:,,, I’m quickly learning that four is not a friendly quantity in OVAs or Linux things in general. In the vein of The Matrix, those who write into /etc/resolv.conf seem to like Trinity, or three, as a max. Sending it four resulted in none of them being committed.

Fixing it came through these steps:

1. Open the console to the VIN virtual appliance

2. Hit Enter on “Login” to reach the CLI/login prompt

3. Login as “root”

4. Run “yast”

5. Arrow down to “Network Devices”, tab over to “Network Settings” (on the right), and hit Enter
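If you prefer to verify the damage from a shell before (or instead of) walking through yast, the sketch below shows the underlying behavior. It assumes the appliance’s SLES base keeps its resolver config in /etc/resolv.conf; the sample file and addresses are placeholders, not values from my environment.

```shell
# glibc's resolver honors only the first three "nameserver" lines (MAXNS=3),
# which is why a fourth DNS server quietly falls off the list.
# Demonstrate against a sample file rather than the real /etc/resolv.conf;
# the 192.0.2.x addresses are documentation placeholders.
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 192.0.2.1
nameserver 192.0.2.2
nameserver 192.0.2.3
nameserver 192.0.2.4
EOF

# Count the entries; anything past the third is ignored by glibc.
count=$(grep -c '^nameserver' /tmp/resolv.conf.sample)
echo "nameserver lines: $count (glibc reads at most 3)"
```

On the real appliance, the same `grep` against /etc/resolv.conf tells you whether any of the four servers actually landed.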


Technology Virtualization

With the release of ESXi 6.0 Update 1a, which fixed the network connectivity issue that plagued all ESXi 6.0 releases until October 6, I have begun my own journey from 5.5 to 6.0. I’m taking an approach that’s new for me, though: using Update Manager to perform an upgrade rather than the fresh installs I have always preferred.

Why? Because I learned at VMworld 2015 from the authorities (designers) that upgrading is actually VMware’s recommended path. You can read more from my notes on session INF5123.

What follows below assumes that you have already rebuilt or upgraded to vCenter 6.0 Update 1. In Update 1, the Web Client now supports Update Manager so that everything can be performed there. No more thick client! Now if we can just get rid of Flash…

Step 1: Import ESXi Image

From the home landing page of the vSphere Web Client, navigate here:

  • Update Manager
    • Select an Update Manager server
      • Go to Manage
        • Then ESXi Images
          • Import ESXi Image…
            • Browse to the ISO


Technology Virtualization

This morning, Dell and EMC announced their impending merger as Dell and Silver Lake set out to acquire EMC and its holdings with cash and stock, while maintaining VMware as an independent, publicly-traded company. The event sets off incredible tidal waves financially and technologically and raises many questions.

To that end, the CEOs and other principals from Dell, EMC, VMware, and Silver Lake held conference calls with shareholders and media/analysts this morning. The following 9 questions from participants of the latter call–New York Times, Financial Times, Boston Globe, Wikibon, and others–cover most of the big questions on everyone’s minds. In keeping with Dell’s private holding (and EMC’s soon-to-be), “no comment” showed up a few times where we all hoped to find insight. Time will tell.

Security Storage Technology Virtualization

Dilpreet & Mohan did a great job laying out the install and upgrade paths to reach vCenter 6.0, whether in Appliance or Windows mode. As I mentioned yesterday, the VMware team is encouraging customers to choose the Appliance (VCSA) moving forward due to increased performance, decreased complexity, and overall conformity. You gain much and lose nothing…except:

vSphere Update Manager (VUM), which will be integrated into VCSA in a 2016 release (Yay!). Presumably this will be vSphere 6.1, as vSphere 6.0 will be a year old by then and this landmark milestone is (in my opinion) too big for a mere “Update 2”.

Dilpreet was kind enough to explain that the update which brings VUM integration will pull the VUM configuration & data from existing VUM (Windows) servers into the VCSA component. This is great to hear, as I was unclear about this walking away from the “Part 2” session yesterday.

Please check out the notes below and remember to pay attention to the order of your upgrades. Reference the KBs mentioned so you’re on a supported model & path.

Technology Virtualization

This session is/was true to its title and definitely dove deep into Virtual SAN (VSAN). Due to the sheer density of details, requirements, parameters, etc., I decided to conclude the live notes about 40 minutes into the presentation. With much respect for court reporters and typists, finishing out the slide notes would have been of no more value than practicing my typing skills.

VSAN looks promising and maturing as a solution. While the concept of metro stretched clusters sounds very intriguing, I believe it is only practical in the right use cases. My own environment, for example, involves significant writing with extended operations, which would not be feasible to replicate live. Local performance would suffer greatly while database crunching generated large amounts of data requiring acknowledgement from the remote site before proceeding.

On the other hand, if your environment is web-scale or low-write intensity, then VSAN stretched clusters may offer great value to you. As always, it depends.

The closing consideration is sheer cost of a VSAN solution. The “HY-4” recommended starting point retails around $10-15K per node (read: $40-60K for the HY-4). That is hardware only, so vSphere and VSAN licensing costs pile on top of that.
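Spelling out that multiplication (figures are the rough ranges quoted in the session, not vendor quotes):

```shell
# Back-of-envelope hardware cost for the recommended 4-node "HY-4" start,
# using the rough per-node retail range mentioned in the session.
NODES=4
LOW=10000    # ~$10K per node, low end
HIGH=15000   # ~$15K per node, high end
echo "4-node hardware: \$$((NODES * LOW)) to \$$((NODES * HIGH))"
# vSphere and VSAN licensing (per CPU) come on top of these figures.
```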

The beta preview with dedupe and erasure coding for space efficiency may take VSAN to the next level and make even its premium cost more palatable. IMO: external storage is still the path until this possibility brings down the cost (assuming capacity, not compute, is the limitation).

Storage Technology Virtualization

Excellent presentation by Brian and Salil! They did a great job laying out the upgrade paths, caveats, as well as legacy references for folks coming from ESXi 4.x and early 5.x.

The biggest takeaways were 1) the encouragement to choose the upgrade path for ESXi host upgrades and 2) the announcement of vCSA including VUM in the next version. VMware now fully recommends vCSA for deployments moving forward–with Windows VUM out the way, this is something I can get behind!

A VMware fling exists to upgrade from vCenter Server to vCenter Server Appliance (vCSA) in certain scenarios. Flings aren’t production/supported tools, but may be helpful in the right use case(s).

Highly recommend this session/videos and the notes below!

Technology Virtualization

This was my first experience with Howard Marks, and I would say his reputation accurately precedes him. He’s an eccentric and unabashedly arrogant technologist who calls it as he sees it. While I might not commend most of those attributes, I can respect a guy who acknowledges who he is.

The session as a whole was a good breakdown of vVols (or VVOLS or vvols or vVOLs) as they are today in 1.0. vVols are an exciting evolution, ripe with potential, but are likely not quite enterprise-ready due to feature limitations.

For those with all-flash arrays, the talk periodically bordered on irrelevancy, since all-flash designs have metadata built in and no tiering hindrances. Even so, the parts about validating storage vendors on the quality of their vVol implementations were very relevant and worth reviewing. Checking the box just isn’t enough.

Howard did bring up several rumor-based questions around vendors like EMC having problems supporting vVols on current arrays like VNX. That question raises another about even existing AFA products and their metadata capacity limits. This has been a factor in both XtremIO’s and Pure’s histories and their block size considerations. It’s worth asking AFA vendors, “do your AFAs have enough metadata and compute margin to embrace and support the exponential growth of vVol metadata in production?” Maybe Howard will find the answers for us.

Storage Technology Virtualization

This was by far my longest session as Naveen let the clock fly by–I guess that’s the benefit of being the last session of the day! He definitely made the most of it, though, and crammed a ton of great information on DRS and HA, both present and future, into the session.

I feel like the notes below actually capture a substantial amount of the practical information, so please enjoy. DRS has always been the magic sauce in vSphere and it’s only getting better.

Biggest joy of DRS in vSphere 6.0: vMotion performance increase by 60%!

Technology Virtualization

Cody & Ravi from Pure Storage brought a good deep-dive of all-flash storage in a virtual (VMware) world. Major emphasis on “deep-dive” as they went into the nitty-gritty of VAAI primitives and especially SCSI UNMAP across the versions.

The only weak spot was the age-old issue of having to cram too much content into too little time. They hit the mark, just a bit rushed. Check out Cody’s blog for an opportunity to ingest it at a pace more appropriate for consumption with coffee or tea.

If you are making the transition from spinning or hybrid storage to all-flash, find the audio for this session and retrain your thinking. Offload old fears about VM-to-datastore limits and RAID considerations. Get simple. Be pure.

Storage Technology Virtualization

The VMware Validated Designs (V2D) session was much like a preface to a book, the book being VMware’s new compilations of proven designs. It lacked a specific design-implementation example (e.g. HP hardware + Cisco networking + Foundation design), which would have helped, but I’d say that Simran and Mike were still successful.

I should have anticipated it, but all of the designs assume VSAN as the primary storage. They leave the obvious potential for external storage, but that appears to fall outside the scope of any V2Ds. I understand the complication that would come from trying to incorporate non-VMware components, but I also hope that the V2D program grows to encompass partner-assisted V2Ds, particularly on storage, but also on physical networking.

If VSAN is in your potential wheelhouse, check out the customer-facing VMware Validated Designs.

Technology Virtualization