VMworld 2015: Putting Virtual Volumes to Work #STO5822

This was my first experience with Howard Marks, and I would say his reputation accurately precedes him. He’s an eccentric and unabashedly arrogant technologist who calls it as he sees it. While I might not commend most of those attributes, I can respect a guy who acknowledges who he is.

The session as a whole was a good breakdown of vVols (or VVOLS or vvols or vVOLs) as they are today in 1.0. vVols are an exciting evolution, ripe with potential, but are likely not quite enterprise-ready due to feature limitations.

For those with all-flash arrays, the talk periodically bordered on irrelevancy: all-flash designs already carry rich built-in metadata and have no tiering hindrances to work around. Even so, the parts on validating storage vendors on the quality of their vVol implementations were very relevant and worth reviewing. Checking the box just isn’t enough.

Howard did bring up several rumor-based questions around vendors like EMC having problems supporting vVols on current arrays like the VNX. That question raises another about existing AFA products and their metadata capacity limits. This has been a factor in both XtremIO’s and Pure’s histories and their block size decisions. It’s worth asking AFA vendors, “Do your AFAs have enough metadata and compute margin to embrace and support the exponential growth of vVol metadata in production?” Maybe Howard will find the answers for us.

Live Notes & Commentary


  • This is not vvols 101
  • The promise of Virtual Volumes (vvols)
    • What VMware’s telling you
    • What they’re not
  • vVol implementations
    • The good, the bad and the ugly
  • What vVols should bring

VMware’s vVol Pitch

  • It’s like software defined networking
    • VASA provider is out of band control plane
  • Storage Policy Based Management (SPBM) simplifies management
  • Storage provides per-VMDK data services
    • Snapshots are worth something again

vVols Are for Performance

  • Without vVols, virtualization throws I/O into a blender
    • All I/O is now random I/O
  • vVols provide per-VMDK context for storage
  • Bring back
    • Cache read-ahead for sequential reads
    • Sequential write detect
      • Why waste cache on transaction logs?
    • Other cache techniques
  • But only for systems with smart vVols
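
The “per-VMDK context” the bullets describe can be sketched in a few lines. This is a hypothetical illustration (all names are mine, not from any array’s actual code): with one object per VMDK, the array can notice when a stream is sequential and enable read-ahead, or skip caching sequential writes like transaction logs.

```python
# Hypothetical sketch: per-VMDK sequential I/O detection, the kind of
# context vVols restore by giving the array one object per VMDK.

class StreamDetector:
    """Tracks the last I/O per VMDK and flags sequential access."""

    def __init__(self):
        self.last_end = {}  # vmdk_id -> LBA just past the previous request

    def classify(self, vmdk_id, lba, length):
        # Sequential if this request starts exactly where the last one ended.
        sequential = self.last_end.get(vmdk_id) == lba
        self.last_end[vmdk_id] = lba + length
        return "sequential" if sequential else "random"

d = StreamDetector()
first = d.classify("vm1.vmdk", 0, 8)     # "random" (no history yet)
second = d.classify("vm1.vmdk", 8, 8)    # "sequential" (continues at LBA 8)
third = d.classify("vm1.vmdk", 100, 8)   # "random" (jumped ahead)
```

Without vVols, all VMDKs on a datastore share one LUN, so the per-stream history above is impossible and everything looks random — the “blender” effect.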

Per-VM Services

  • Data layout is destiny–must be designed for metadata
  • Log and metadata are now table stakes
    • Easy on flash
    • Snapshots/clones are now metadata–low impact
    • Ask vendor: How granular? How many objects do you support?
  • Snapshots can be application consistent
    • No timing issues quiescing multiple applications
    • Good snapshots open the door to new applications
      • Backup replacement, etc
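
The “snapshots are now metadata” point is worth making concrete. A minimal sketch, assuming a simple pointer-map design (names are hypothetical): a volume is a map from logical block to physical extent, so a snapshot copies only the map, and redirect-on-write keeps old extents intact for earlier snapshots.

```python
# Hypothetical sketch: snapshots as metadata. Taking a snapshot copies
# pointers, not data, which is why it is low impact and easy on flash.

class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})  # logical block -> physical extent id

    def write(self, lblock, extent):
        # Redirect-on-write: new data lands in a new extent; only this
        # volume's pointer changes, so snapshots keep their old extents.
        self.blocks[lblock] = extent

    def snapshot(self):
        return Volume(self.blocks)  # O(metadata) copy, no data movement

vol = Volume({0: "ext-a", 1: "ext-b"})
snap = vol.snapshot()
vol.write(0, "ext-c")
# snap still points at ext-a; vol now points at ext-c
```

The granularity question in the bullets maps directly onto this design: how many of these pointer maps (objects) can the array track before its metadata runs out?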

vVols Are for Performance – Backup Edition

  • vStorage APIs for Data Protection (VADP) can now use array snapshots
  • Array snapshots perform much better

vSphere Snapshots Are So Bad

  • Opvizor Snapwatcher
    • An entire product to track broken snapshots

Storage Quality of Service

  • Some applications are more equal than others
  • Storage allocates resources by policy not application demand
  • Implementations include:
    • Throttles: limit workload to X IOPS
    • Limits, minimums and/or bursts
    • Prioritization: bronze, silver, gold
    • Only works in small neighborhoods w/o vVols
  • Only really works at VM/VMDK level
    • Datastores are just smaller neighborhoods
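
The “limit workload to X IOPS” style of throttle above is commonly built as a token bucket. A hedged sketch (the class and parameters are my own, not any vendor’s API), with burst headroom layered on top of the steady limit:

```python
# Hypothetical sketch: a per-VMDK IOPS throttle as a token bucket.
# Tokens refill at the steady limit; "burst" adds extra headroom.

class IopsThrottle:
    def __init__(self, limit_iops, burst=0):
        self.rate = limit_iops               # tokens added per second
        self.capacity = limit_iops + burst   # bucket ceiling
        self.tokens = float(self.capacity)
        self.last = 0.0                      # timestamp of the last refill

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller queues or delays the I/O

t = IopsThrottle(limit_iops=2, burst=1)  # 2 IOPS steady, burst of 1
# At t=0 the bucket holds 3 tokens: three I/Os pass, the fourth is throttled.
results = [t.allow(0.0) for _ in range(4)]  # [True, True, True, False]
```

The point of the slide is where this throttle lives: without vVols it can only be applied per LUN/datastore (a “smaller neighborhood”), not per VM or VMDK.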

SPBM and vVols

  • Storage array publishes capabilities
    • Keys/values on storage containers
  • Senior admin defines storage policies
    • Range of acceptable values for capabilities
    • Sets default policy for new vVols
  • VM admin chooses policy when creating VM
    • vVol provisioned by VASA provider from resources meeting policy
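
The SPBM flow in the bullets reduces to a matching problem. A minimal sketch, with all capability names and pools invented for illustration: the array publishes key/value capabilities per storage container, a policy states acceptable ranges, and the VASA provider places the vVol in a container that satisfies the policy.

```python
# Hedged sketch of the SPBM flow (names are hypothetical, not VMware's):
# match published container capabilities against a policy's ranges.

containers = {
    "gold-pool":   {"latency_ms": 1,  "thin": True},
    "bronze-pool": {"latency_ms": 10, "thin": True},
}

# A policy is a (min, max) range per capability; booleans work because
# True/False compare as 1/0 in Python.
policy = {"latency_ms": (0, 2), "thin": (True, True)}

def matches(caps, policy):
    for key, (lo, hi) in policy.items():
        value = caps.get(key)
        if value is None or not (lo <= value <= hi):
            return False
    return True

eligible = [name for name, caps in containers.items()
            if matches(caps, policy)]
# Only gold-pool qualifies; bronze-pool's latency is outside the range.
```

The “disappointment” in the next slide follows directly: because each vendor invents its own capability keys, the `policy` dict above is vendor-specific and does not transfer between arrays.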

SPBM – The Disappointment

  • Every vendor defines their own capabilities
    • Flashbacks to VASA 1.0
    • Only exception is thick- or thin-provisioned
  • Policies are built from capabilities
    • Policies are vendor-specific
    • vMotion between pools can get complex
  • Wishes
    • More VMware-defined classes
    • Equivalency mapping by users

About VASA Providers

  • Translates vVol commands to array
    • Out of band
    • Required to provision vVol (power-on, power-off)
      • Swap vVol provisioned at power-on
  • Can be internal to array controller or run in an external VM
  • HA is suggested by spec & required for production use

What vVols Mean for Storage

  • Greater abstraction requires more metadata
    • Transition from LUN to vVols = 50X objects
    • Useful snapshots add another 2-3X
    • Storage systems designed around metadata have advantage
      • Chunklets (3PAR), deduplication (Pure, XtremIO), etc
  • Snapshot granularity an issue
  • Lazy vVols implementation will…
    • Use a LUN as a storage container
    • Move vVols around the array to change capabilities
    • Rely on vCenter for all vVol management
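
The object-growth claim above is easy to sanity-check with back-of-envelope math, using a hypothetical array and the session’s own multipliers:

```python
# Back-of-envelope math for the metadata growth described above:
# LUN -> vVols is roughly 50x more objects, and useful snapshots
# add another 2-3x on top of that.

luns = 100             # hypothetical array serving 100 LUNs today
vvol_factor = 50       # per the session's LUN -> vVols estimate
snapshot_factor = 3    # upper end of the 2-3x snapshot multiplier

objects = luns * vvol_factor * snapshot_factor
print(objects)  # 15000 objects the array's metadata must now track
```

That jump from 100 tracked objects to 15,000 is why systems already designed around fine-grained metadata (chunklets, deduplication maps) start with an advantage.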

The Not-So-Good News (in vVols 1.0)

  • Replication not supported, array or host-based
    • Tintri currently “cheats” this by passing details along replication channel
  • vSphere features/add-ons are not supported with vVols
    • vCloud Director
    • vRealize Operations Manager
    • Data Recovery
    • NSX
    • NFS 4.1
    • Storage I/O Control
    • Fault Tolerance
    • Microsoft clusters

vVols Implementations

  • Ask:
    • How many objects does this system support? (should be tens of thousands)
    • How is the VASA provider implemented? Internal or vendor HA?
    • How is vVol visibility in the array UI? Analytics?
    • Is there storage QoS per vVol?
  • Deduplication is a good sign (because requires metadata)
  • Storage containers that are just LUNs are bad

vVols HCL

  • Rumors of major EMC issues with vVols on current VNX
  • CPU-to-capacity ratio is going to have to be higher for vVols
  • “Multi-VC” means one VASA provider can support multiple vCenters
  • HA column only applies to external providers
Hopes for the Future

  • Replication support
    • Including VASA X.0 “replicate this vVol” functions
  • Data services support
    • Data reduction, including realm management
    • Encryption
    • Dynamic QoS, i.e. Cinder
  • Cross-system, if not cross-vendor, clone tracking
    • Report to vCenter that multiple VMs come from same root
    • Helps ensure that data inflation doesn’t occur on destination with SDRS
    • Less/not applicable with array deduplication, except in transit

vVols for Metro Clusters

  • Multiple storage systems emulate one
    • Synchronous replication
    • vPlex-like identity presentation
      • LUNs still single-master replication
      • vVols dynamically bind to endpoints
  • Protocol endpoints are distributed

Bringing vVols to Existing Storage

  • Atlantis USX
    • Layers on host flash/disk and/or external storage
    • Provides compress, dedupe, snapshots, and…vVols
  • Primary Data
    • Global name/data management
    • Uses pNFS, dedicated metadata servers
    • Analytics, automated placement

VMworld 2015 | Wednesday | Putting Virtual Volumes to Work — Storage Best Practices for vSphere 6 and Beyond (STO5822)

Howard Marks, DeepStorageNet
