When I wrote the “Doing It Again” posts about XtremIO and Pure Storage, I didn’t actually think I would get that chance. EMC’s concessions around our initial XtremIO purchase made it seem like our next site replacement would be a foregone conclusion. However, when the chips were counted, the hand went to another player: Pure Storage.
Last Friday, the Pure hardware arrived. Unpacking and racking was simple–no cage nuts needed, and the only necessary tool (a screwdriver) is included in the “Open Me First” box. The same instructions that I respected during our 2013 POC led the way. I recall that back then, QA on their readability was handled by the CEO’s 12-year-old son: if he could follow them, they were customer-ready. Unconventional but effective.
This morning, the Pure SE (@purebp) and I finished the cabling and boot-up config. Three IP addresses, two copper switch ports, and four FC interfaces. The longest part was my perfectionistic cable runs. What can I say? The only spaghetti I like is the edible Italian kind. Fiber and copper should be neat and clean.
After lunch, I zoned the Pure FC ports to my VMware vSphere hosts’ converged network adapters (CNAs) in Cisco Data Center Network Manager (DCNM). One FC interface from each Pure controller to each Nexus switch provides both controller-path and switch-fabric redundancy (technical best practice is utilizing all four ports per controller–eight total for the array–but for now, I’m sticking with two per controller, four total).
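For reference, the zoning DCNM pushes boils down to standard NX-OS zone config on each fabric. Here’s a minimal sketch for one host, assuming single-initiator zoning–the VSAN number and every WWPN below are placeholders, not our actual values:

! Placeholders throughout: VSAN 10, one host CNA port, and the two Pure ports on this fabric
zone name ESXI01_PURE vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01
  member pwwn 52:4a:93:71:23:45:67:00
  member pwwn 52:4a:93:71:23:45:67:10
zoneset name FABRIC_A vsan 10
  member ESXI01_PURE
zoneset activate name FABRIC_A vsan 10

Repeat per host on each switch with that fabric’s Pure ports; the zoneset activation is what makes it live.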
I also hit up each of my ESXi hosts via the CLI to configure the SATP rule for multipathing, so that new Pure volumes automatically get the Round Robin policy and change paths after every I/O:
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"
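To make sure the rule took–and, once volumes are presented, that a device actually claimed Round Robin–a quick check from the same shell does it. The naa identifier below is a placeholder for a real Pure device ID:

# confirm the new claim rule is registered
esxcli storage nmp satp rule list | grep -i pure
# after a volume is presented, confirm it picked up VMW_PSP_RR
esxcli storage nmp device list -d naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx

One caveat: the rule only applies to devices claimed after it exists, which is fine here since no Pure volumes had been presented yet.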
Following along with the Pure Storage vSphere 5 Best Practices Guide, I deployed the Pure plugin for the vSphere Web Client. Smooth as butter. If you’re concerned about the Administrator credentials needed to install the plugin, they’re only used to push it and aren’t stored, so you don’t have to create anything new to persist. The only persistent credentials are the ‘pureuser’ credentials you set in the vSphere Web Client Pure plugin so it can reach back to the Pure array.
From the Pure web UI, which can also be reached from the top-level plugin in the vSphere Web Client, I created host groups for my Hyper-V and vSphere clusters. Since my hosts are sequentially numbered, I used the “Create Multiple…” option, which takes a name pattern and a numeric range and stamps out the host entries in one pass.
Adding the host ports for each host initially eluded me, as the default tab in the right pane is “Connected Volumes (0)” and none of the gear (Options) buttons have WWN options. Then I noticed the “Host Ports (0)” text–not quite distinguishable enough to be called a “tab”–to the right of “Connected Volumes (0)”. Clicking it revealed the “Configure Fibre Channel WWNs” option in the lower pane’s top-left gear. With those configured, I added the hosts to the host groups and moved on to storage provisioning.
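As an aside, the same host and host group plumbing can be scripted over SSH with the Purity CLI, which beats gear-hunting at scale. A minimal sketch, assuming the purehost/purehgroup syntax as I understand it–the host names and WWNs are placeholders:

purehost create --wwnlist 20:00:00:25:b5:aa:00:01,20:00:00:25:b5:bb:00:01 ESXi01
purehgroup create --hostlist ESXi01,ESXi02,ESXi03 vSphere-Cluster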
The guide spells it out clearly: to create a volume and provision it as a datastore directly to the cluster, go to:
- Host & Clusters
- Related Objects (tab)
- Datastores (sub-tab)
- Pure Storage > (bottom option)
- Create Pure Datastore
This piece is slick: it handles rescanning for the new volume (LUN) beforehand, creating the datastore, and rescanning VMFS afterwards. Checking the datastores list, the new “PURE Fibre Channel Disk” is detected with an “SSD” drive type and ready for use, multipathing via Round Robin as I configured earlier.
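For comparison, doing that dance by hand would mean a rescan on every host before and after–roughly this from each ESXi shell–plus partedUtil and vmkfstools -C for the VMFS creation itself, which is exactly the tedium the plugin hides:

# pick up the newly presented LUN
esxcli storage core adapter rescan --all
# refresh VMFS volumes on this host after the datastore exists
vmkfstools -V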
Before proceeding further, it was time to put some data on the box, so I picked a modest VM (40GB fat / 22GB used), initiated a Storage vMotion, set it to convert to Eager Zeroed Thick (best practice), and let it fly. 2 minutes, 29 seconds later, the VM was running on Pure. Most of that time was spent reading from the source spinning disks.
That about wraps up this “Going Live” intro to Pure Storage on our floor. The migration will play out over the coming days as the rest of our data makes its way to its new flashy home.