Category: Virtualization

After far, far too long, I finally stood up my home lab to run vSphere 6 and an upcoming assortment of servers and tools. SolarWinds, Rubrik, and many others will shortly be churning away. First things first, though, the foundation needs to be solid.

My ESXi hosts are powered by SuperMicro X10SLH-F/X10SLM+-F boards that shipped with BIOS v2.0. I expected updating to be simple after seeing the Maintenance > BIOS Update menu in the IPMI, but SuperMicro considers that a “premium” feature that’s separately licensed, as Bhargav elaborates well here.

[Image: supermicro_bios_update_pk]

Thankfully, Bhargav and others, like my colleague Chris Wahl, documented the USB-bootable option to save both time and money. While following their steps, I discovered a couple of other helpful hints that I thought I would share here.

[Image: supermicro_bios_update_nf]

Categories: Technology, Virtualization

As Pure//Accelerate approaches, one of my favorite aspects of winning solutions comes to mind. It’s a virtue that transforms products into MVPs, rather than the drama generators so common on the court and in the field. What is it?

Simplicity

Businesses have enough knobs and pain points with tier-1 Oracle/SAP deployments and SQL, SharePoint and Exchange farms. The last thing they need is for storage and data protection to jump on the pile. That’s why enterprises need Pure Storage and Rubrik.

From the ground up, Pure and Rubrik have simplicity in their DNA. If you have a FlashArray on the floor, then you already know the freedom and ease it brings to storage infrastructure. Gone are the days of tweaking RAID sets or tuning LUNs to squeeze out a few performance points. With a few cables and a vSphere plugin, Pure serves up datastores and gets out of the way.

Rubrik brings the same unobtrusive value to data protection and is the perfect pairing for Pure. From rack & go to policy-driven automation to instant recovery, Rubrik drives straight to the point with beautiful simplicity.

Rack & Go

The first thing that stands out with Rubrik is its lean footprint: it doesn’t eat up precious data center space. When we deployed Rubrik at ExponentHR, we shrank our backup layout from 14RU at each data center to just 4RU, with an even greater reduction in power consumption and cabling complexity.

With the previous product, the physical installation wasn’t easy, but it paled in comparison to the configuration and learning curve challenges. In contrast, the entire Rubrik deployment took 90 minutes to install and configure at both sites, including drive time. Starting the engine was as easy as a set of vCenter credentials.

Categories: Storage, Technology, Virtualization

Rubrik makes instant recovery easy everywhere. As I wrote four months ago, it only takes a few clicks to bring a previous version of any protected VM into production. In 2.0, the great folks at Rubrik enhanced this capability with replication.

Replication is a word that means many things to many people and could quickly get abused in comparisons. In our previous data protection solution, replication of backups was limited to scheduled jobs and practically meant our off-site backups were anywhere from 3 hours (best case) to 48 hours (worst case) old, with no guarantees.

Rubrik takes a refreshingly different approach. In its policy-based world, backups are driven by SLAs (gold, silver, bronze, etc.), which are defined by the frequency and retention of snapshots. Replication is married to these policies and is triggered upon the completion of VM backups.

For example, this morning one of our mission-critical SQL servers in our Gold Repl SLA domain started a backup job at 6:35am and completed it one minute later at 6:36am. Gold Repl takes snapshots every 4 hours, keeps those 4-hour snapshots for 3 days, and then keeps dailies for a month. As the “Repl” denotes, it also replicates and retains 3 days of those backups at another site. Oh, and as the cherry on top, it additionally archives the oldest backups to Amazon S3. Pretty comprehensive, eh?
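To make the policy concrete, here is a minimal sketch of how an SLA domain like Gold Repl could be modeled. The class and field names are my own illustration for this post, not Rubrik’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SLADomain:
    """Illustrative model of a policy-driven SLA domain (names are hypothetical)."""
    name: str
    snapshot_every_hours: int     # how often a snapshot is taken
    frequent_retention_days: int  # how long the frequent snapshots are kept
    daily_retention_days: int     # how long daily snapshots are kept
    replication_target: Optional[str]  # remote cluster, if the policy replicates
    replica_retention_days: int   # retention of replicas at the remote site
    archive_target: Optional[str]      # long-term archive, e.g. Amazon S3

# The Gold Repl domain described above: snapshots every 4 hours, kept for
# 3 days; dailies kept for a month; 3 days of replicas at the second site;
# and the oldest backups spilled off to Amazon S3.
gold_repl = SLADomain(
    name="Gold Repl",
    snapshot_every_hours=4,
    frequent_retention_days=3,
    daily_retention_days=30,
    replication_target="secondary-site",
    replica_retention_days=3,
    archive_target="Amazon S3",
)
```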

[Image: repl_source_snap]

Categories: Storage, Technology, Virtualization

While deploying vRealize (formerly vCenter) Infrastructure Navigator (VIN) yesterday, I ran into an access error that wasn’t at all pleasant.

Access failed. An unknown VM access error has occurred.

I had deployed the virtual appliance per the 5.8.4 documentation on pubs.vmware.com, and had specifically created a Virtual Machine Access role as defined. I set it in Global Permissions for all children and verified that it propagated to the VMs that reported this error (all of them).

Searching VMware KBs and Google for a resolution proved mostly fruitless. I finally came across Scott Norris’s post about resetting the VIN database, which gave me the nugget I needed to resolve my issue. Looking at it now, I’m not quite sure why his post pointed me to the answer, but it was the only one out there with exactly the same error; all the others were about “discovery errors”. If what I provide below doesn’t solve your issue, check out Scott’s “reboot” option for a more comprehensive refresh.

So what was the problem/answer? DNS.

When I deployed the OVA and reached the field for comma-separated DNS servers, I listed mine, all four of them, like this: 192.168.1.11,192.168.1.12,10.1.1.11,10.1.1.12. I’m quickly learning that four is not a friendly quantity in OVAs or Linux things in general. In the vein of The Matrix, those who write into /etc/resolv.conf seem to like Trinity, or three, as a max. Sending it four resulted in none of them being committed.
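Had I checked up front, this would have been easy to catch. Here’s a minimal sketch, assuming only glibc’s documented three-nameserver cap (MAXNS in resolv.conf(5)), that trims a comma-separated DNS list before it goes into the deployment wizard:

```python
# glibc's resolver honors at most 3 nameserver entries (MAXNS, per resolv.conf(5)).
MAXNS = 3

def safe_dns_csv(dns_csv: str) -> str:
    """Trim a comma-separated DNS server list to the resolver's limit."""
    servers = [s.strip() for s in dns_csv.split(",") if s.strip()]
    if len(servers) > MAXNS:
        print(f"Warning: {len(servers)} servers given; keeping the first {MAXNS}.")
    return ",".join(servers[:MAXNS])

# The four-server list from my deployment gets cut to a friendly three:
print(safe_dns_csv("192.168.1.11,192.168.1.12,10.1.1.11,10.1.1.12"))
# -> 192.168.1.11,192.168.1.12,10.1.1.11
```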

Fixing it came through these steps:

1. Open the console to the VIN virtual appliance

2. Hit Enter on “Login” to reach the CLI/login prompt

3. Login as “root”

4. Run “yast”

5. Arrow down to “Network Devices”, tab over to “Network Settings” (on the right), and hit Enter

[Image: vin_yast_1]

Categories: Technology, Virtualization

With the release of ESXi 6.0 Update 1a, which fixed the network connectivity issue that plagued all ESXi 6.0 releases until October 6, I have begun my own journey from 5.5 to 6.0. I’m taking an approach that’s new to me, though: using Update Manager to perform an upgrade rather than the fresh installs I have always preferred.

Why? Because I learned at VMworld 2015 from the authorities (designers) that upgrading is actually VMware’s recommended path. You can read more from my notes on session INF5123.

What follows below assumes that you have already rebuilt or upgraded to vCenter 6.0 Update 1. In Update 1, the Web Client now supports Update Manager so that everything can be performed there. No more thick client! Now if we can just get rid of Flash…

Step 1: Import ESXi Image

From the home landing page of the vSphere Web Client, navigate here:

  • Update Manager
    • Select an Update Manager server
      • Go to Manage
        • Then ESXi Images
          • Import ESXi Image…
            • Browse to the ISO

[Image: esxi6_import]

Categories: Technology, Virtualization

This morning, Dell and EMC announced their impending merger as Dell and Silver Lake set out to acquire EMC and its holdings with cash and stock, while maintaining VMware as an independent, publicly-traded company. The event sets off incredible tidal waves financially and technologically and raises many questions.

To that end, the CEOs and other principals from Dell, EMC, VMware, and Silver Lake held conference calls with shareholders and media/analysts this morning. The following 9 questions from participants on the latter call, from the New York Times, Financial Times, Boston Globe, Wikibon, and others, cover most of the big questions on everyone’s minds. In keeping with Dell’s privately held status (and EMC’s soon-to-be), “no comment” showed up a few times where we all hoped to find insight. Time will tell.

Categories: Security, Storage, Technology, Virtualization

Dilpreet & Mohan did a great job laying out the install and upgrade paths to reach vCenter 6.0, whether in Appliance or Windows mode. As I mentioned yesterday, the VMware team is encouraging customers to choose the Appliance (VCSA) moving forward due to increased performance, decreased complexity, and overall conformity. You gain much and lose nothing…except:

vSphere Update Manager (VUM), which will be integrated into VCSA in a 2016 release (yay!). Presumably this will be vSphere 6.1, as that timing would put vSphere 6.0 at a year old, and this milestone is too big (in my opinion) for a mere “Update 2”.

Dilpreet was kind enough to explain that the update which brings VUM integration will pull the VUM configuration and data from existing Windows VUM servers into the VCSA component. This is great to hear, as I walked away from yesterday’s “Part 2” session unclear on that point.

Please check out the notes below and remember to pay attention to the order of your upgrades. Reference the KBs mentioned so you’re on a supported model & path.

Categories: Technology, Virtualization

This session is/was true to its title and definitely dove deep into Virtual SAN (VSAN). Due to the sheer density of details, requirements, parameters, etc., I decided to conclude the live notes about 40 minutes into the presentation. With much respect for court reporters and typists, finishing out the slide notes would have been of no more value than practicing my typing skills.

VSAN looks like a promising, maturing solution. While the concept of metro stretched clusters sounds very intriguing, I believe it is only practical in the right use cases. My own environment, for example, involves heavy, sustained writes that would not be feasible to replicate synchronously. Local performance would suffer greatly as database crunching generated large amounts of data, with each write requiring acknowledgement from the remote site before proceeding.
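Back-of-the-envelope math shows why. The distances and base latency below are my own illustrative assumptions, not figures from the session, but the shape of the problem holds:

```python
# Effective write latency under synchronous (stretched-cluster) replication:
# each write must be acknowledged by the remote site before it completes.

def sync_write_latency_ms(local_ms: float, distance_km: float) -> float:
    """Local write latency plus the fiber round trip to the remote site."""
    fiber_km_per_ms = 200.0  # light in fiber covers roughly 200 km per millisecond
    round_trip_ms = 2 * distance_km / fiber_km_per_ms
    return local_ms + round_trip_ms

# A ~0.5 ms all-flash local write stretched across 100 km of fiber:
print(sync_write_latency_ms(0.5, 100))  # 1.5 -- triple the local latency,
# before any switch hops or processing overhead at either end pile on.
```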

On the other hand, if your environment is web-scale or low-write intensity, then VSAN stretched clusters may offer great value to you. As always, it depends.

The closing consideration is the sheer cost of a VSAN solution. The “HY-4” recommended starting point retails around $10-15K per node (read: $40-60K for a four-node HY-4 cluster). That is hardware only, so vSphere and VSAN licensing costs pile on top of that.

The beta preview with dedupe and erasure coding for space efficiency may take VSAN to the next level and make even its premium cost more palatable. IMO: external storage is still the path until this possibility brings down the cost (assuming capacity, not compute, is the limitation).
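For a sense of why erasure coding could move the needle, compare the raw capacity each scheme burns per usable terabyte. The 3+1 layout below is a generic erasure-coding example for illustration, not a configuration confirmed for VSAN:

```python
# Raw capacity required per usable TB: mirroring vs. a k+m erasure code.

def overhead_factor(data_blocks: int, parity_blocks: int) -> float:
    """Raw-to-usable ratio for k data blocks protected by m parity blocks."""
    return (data_blocks + parity_blocks) / data_blocks

print(overhead_factor(1, 1))  # 2.0   -- mirroring: 2 TB raw per usable TB
print(overhead_factor(3, 1))  # ~1.33 -- a 3+1 erasure code
# Roughly a third less raw capacity than mirroring for the same usable
# space, which is exactly where a cost-per-GB objection starts to soften.
```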

Categories: Storage, Technology, Virtualization

Excellent presentation by Brian and Salil! They did a great job laying out the upgrade paths and caveats, as well as legacy references for folks coming from ESXi 4.x and early 5.x.

The biggest takeaways were 1) the encouragement to choose the upgrade path for ESXi host upgrades and 2) the announcement that vCSA will include VUM in the next version. VMware now fully recommends vCSA for deployments moving forward; with Windows VUM out of the way, this is something I can get behind!

A VMware fling exists to upgrade from vCenter Server to vCenter Server Appliance (vCSA) in certain scenarios. Flings aren’t production/supported tools, but may be helpful in the right use case(s).

Highly recommend this session/videos and the notes below!

Categories: Technology, Virtualization

This was my first experience with Howard Marks, and I would say his reputation accurately precedes him. He’s an eccentric and unabashedly arrogant technologist who calls it as he sees it. While I might not commend most of those attributes, I can respect a guy who acknowledges who he is.

The session as a whole was a good breakdown of vVols (or VVOLS or vvols or vVOLs) as they are today in 1.0. vVols are an exciting evolution, ripe with potential, but are likely not quite enterprise-ready due to feature limitations.

For those with all-flash arrays, the talk periodically bordered on irrelevance, since all-flash designs carry built-in metadata handling and none of the tiering hindrances. Even so, the parts about validating storage vendors on the quality of their implementations were very relevant and worth reviewing. Checking the box just isn’t enough.

Howard did bring up several rumor-based questions around vendors like EMC having problems supporting vVols on current arrays like the VNX. That question raises another about even existing AFA products and their metadata capacity limits. This has been a factor in both XtremIO’s and Pure’s histories and in their block-size considerations. It’s worth asking AFA vendors, “Do your AFAs have enough metadata and compute headroom to embrace and support the exponential growth of vVol metadata in production?” Maybe Howard will find the answers for us.

Categories: Storage, Technology, Virtualization