Cody & Ravi from Pure Storage delivered a solid deep dive into all-flash storage in a virtual (VMware) world. Major emphasis on “deep dive”: they got into the nitty-gritty of VAAI primitives and especially SCSI UNMAP across vSphere versions.
The only weak spot was the age-old issue of having to cram too much content into too little time. They hit the mark, just a bit rushed. Check out Cody’s blog for an opportunity to ingest it at a pace more appropriate for consumption with coffee or tea.
If you are making the transition from spinning or hybrid storage to all-flash, find the audio for this session and retrain your thinking. Offload old fears about VM-to-datastore limits and RAID considerations. Get simple. Be pure.
Live Notes & Commentary
Agenda
- Trends in data center architecture and intro to flash
- Current integration & roadmap
- Customer experience
- Best practices & updates
- Space reclamation
- Q&A
Trends
CPU & Memory: Moore’s Law
Network:
- 10G adoption moving to 40G
- Low latency switches
- Policy driven networks
- SDN gaining ground
Storage: Flash storage
Virtualization & Storage in the Old World
- Legacy storage with spindle drives
- Capacity drives: more drives → more IOPS
- Aggregates
- Performance tiers
- Short stroking disks
- Hot & cold disk data
- Tiers & caches
- Power, cooling, data center space — huge
- Best practices tuned to cater to large disk arrays
Virtualization & Storage in the New World
- Storage based on all-flash or SSD drives
- Drive sizes ranging from 512GB to 4TB
- Massive amounts of IOPS with great latency & bandwidth
- Radically new approach to deal with flash media
- Data reduction techniques
  - Help reduce costs
  - Bring greater storage efficiency
- Negates the need for performance tiers and caches
- Less power, cooling, and datacenter space
- iPhone level simplicity
Should it be any different in the new world?
- Easy:
- No IOPS calculations
- No RAID decisions
- No performance balancing
- No workload isolation
- No wasted space
- Performance increases 1000X
- Operational benefits 10X
Integration aspects
- vSphere plug-ins
- Scripting & cmdlets
- Large LUNs & datastores
- VAAI criticality
UC Davis Campus Data Center Customer Focus
Things to still consider with flash
- SCSI UNMAP
- Per-VM IOPS limits: turn them off
- VMs that issue writes larger than 64K
- A VM saturating a 10Gb network switch
VMware VAAI: Unlocking Flash
- Atomic Test & Set (ATS) hardware-assisted locking
- Full Copy (XCOPY)
- Block Zero (WRITESAME)
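It's worth verifying that a device actually reports these primitives as supported before depending on them. A minimal sketch, assuming Python is available in the ESXi shell and using a hypothetical device identifier:

```python
# Check which VAAI primitives a device reports as supported.
# Assumes it runs from the ESXi shell; the device ID below is a placeholder.
import subprocess

DEVICE = "naa.xxxxxxxxxxxxxxxx"  # hypothetical device identifier

# Output lists ATS, Clone (XCOPY), Zero (WRITE SAME), and Delete (UNMAP) status.
print(subprocess.check_output(
    ["esxcli", "storage", "core", "device", "vaai", "status", "get",
     "-d", DEVICE],
    universal_newlines=True,
))
```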
XCOPY in vSphere 5.x
- Thin virtual disks grow in 1MB chunks, which leads to fragmentation (not contiguous)
- ESXi did not use MaxHWTransferSize with thin disks; it fell back to the VMFS block size of 1MB
XCOPY in vSphere 6.0 (enhancement)
- Now adheres to MaxHWTransferSize for thin virtual disks
- Attempts to maximize each command
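The transfer size used per XCOPY command is controlled by the /DataMover/MaxHWTransferSize advanced setting mentioned above. A minimal sketch for checking and raising it, assuming Python in the ESXi shell; 16384 KB is only an illustrative value, so follow your array vendor's guidance:

```python
# Inspect and adjust the XCOPY transfer size on an ESXi host.
# Assumes it runs from the ESXi shell; 16384 KB is only an example value.
import subprocess

OPTION = "/DataMover/MaxHWTransferSize"  # value in KB; default is 4096

def esxcli(*args):
    return subprocess.check_output(["esxcli"] + list(args),
                                   universal_newlines=True)

# Show the current setting.
print(esxcli("system", "settings", "advanced", "list", "-o", OPTION))

# Raise it so each XCOPY command covers more data.
esxcli("system", "settings", "advanced", "set", "-o", OPTION, "-i", "16384")
```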
Space Allocation & Virtual Disk Type
- Virtual disk type makes no difference to the base capacity consumed on the array
Virtual Disk Choice Revisited: Only Two Concerns Now
- Management ease (thin vs thick)
- Space reclamation
Space Reclamation (UNMAP): Why?
- With block storage, the array does not control the file system
- Dead space is preserved on the array until…
…What?
- UNMAP: T10 SCSI operation
- Automates the reclamation process; built into Windows Server 2012 R2 and Linux ext4
…Why Now?
- Least efficient: in a physical server world, dead space is less of a problem
  - deletions are smaller
  - thin provisioning is less common
- More efficient: in a virtualized world, reclamation is more important
  - deleted files are entire virtual disks or VMs
  - thin is common throughout the stack
- Most efficient: imperative on all-flash data reduction arrays
  - reclamation helps regain the efficiency of flash
History of UNMAP in vSphere
- ESXi 5.0 (mid-2011)
  - Automatic UNMAP introduced; it overloaded or timed out some arrays
- ESXi 5.0 P2 (end-2011)
  - Automatic UNMAP disabled
- ESXi 5.0 U1 (2012)
  - Manual VMFS UNMAP via vmkfstools
- ESXi 5.5 (2013)
  - Moved to esxcli
- ESXi 6.0 (2015)
  - In-guest UNMAP introduced
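Until vSphere 6, VMFS-level reclamation is therefore a manual chore. A minimal sketch of the esxcli approach introduced in 5.5, assuming Python in the ESXi shell and a hypothetical datastore label; on 5.0 U1 and 5.1 the equivalent was vmkfstools run from within the datastore:

```python
# Manually reclaim dead space on a VMFS datastore (ESXi 5.5 and later).
# Assumes it runs from the ESXi shell; the datastore label is a placeholder.
import subprocess

DATASTORE = "flash-datastore-01"  # hypothetical VMFS volume label
RECLAIM_UNIT = "200"              # VMFS blocks reclaimed per pass (esxcli default)

# Issues UNMAP in reclaim-unit-sized passes until the volume's free space
# has been processed.
subprocess.check_call(
    ["esxcli", "storage", "vmfs", "unmap", "-l", DATASTORE, "-n", RECLAIM_UNIT]
)
```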
In-Guest Space Reclamation (UNMAP): How?
- Traditional in-guest space reclamation is tough
- UNMAP cannot traverse the many levels of SCSI virtualization
- Space had to be reclaimed by zeroing out empty space inside the guest
  - sdelete, dd, etc.
- Array discards the zeroes
- Space returned to free pool
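A minimal sketch of that zero-fill workaround on a Linux guest; the mount point and file name are placeholders, and on Windows sdelete's zero-free-space option plays the same role:

```python
# Pre-vSphere-6 workaround: zero out free space inside a Linux guest so a
# data-reducing array can discard it and return the space to the free pool.
import os
import subprocess

MOUNT_POINT = "/data"                         # hypothetical filesystem to scrub
ZERO_FILE = os.path.join(MOUNT_POINT, "zerofill.tmp")

try:
    # dd writes zeroes until the filesystem fills and then exits non-zero;
    # that is expected, so call() is used rather than check_call().
    subprocess.call(["dd", "if=/dev/zero", "of=" + ZERO_FILE, "bs=1M"])
finally:
    # Remove the balloon file so the space is immediately usable again.
    if os.path.exists(ZERO_FILE):
        os.remove(ZERO_FILE)
```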
vSphere 6: In-Guest UNMAP
- End-to-end UNMAP fully supported
- Zeroing not required for supported configurations
- Enable the option “EnableBlockDelete”
- Requires:
  - ESXi 6.0
  - VM hardware version 11
  - EnableBlockDelete set to 1 (disabled by default)
  - A guest OS that can issue SCSI UNMAP
  - Thin virtual disk
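A minimal sketch of flipping that host-side switch, assuming Python is available in the ESXi shell; the guest side is then up to the OS (e.g., Windows Server 2012 R2 issues UNMAP as files are deleted):

```python
# Enable the host-side setting required for vSphere 6 in-guest UNMAP.
# Assumes it runs from the ESXi shell; check the other requirements above first.
import subprocess

OPTION = "/VMFS3/EnableBlockDelete"  # 0 = disabled (default), 1 = enabled

def esxcli(*args):
    return subprocess.check_output(["esxcli"] + list(args),
                                   universal_newlines=True)

esxcli("system", "settings", "advanced", "set", "-o", OPTION, "-i", "1")

# Confirm the new value.
print(esxcli("system", "settings", "advanced", "list", "-o", OPTION))
```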
Virtual Volumes & UNMAP
- Space reclaimed automatically: No more VMFS UNMAP
- VVols give the guest OS more direct SCSI control: in-guest UNMAP support, etc.
- No more worrying about VMware configuration to provide UNMAP support
Expectations about UNMAP
- Running UNMAP doesn’t guarantee anything…
  - Physical capacity returned
  - An increase (or decrease) in data reduction
- UNMAP is a housekeeping responsibility
—
VMworld 2015 | Tuesday | Flash Storage Best Practices & Technology Preview (STO6326-SPO)
Cody Hosterman, Pure Storage | Ravi Venkat, Pure Storage