Going Live with Pure Storage

When I wrote the “Doing It Again” posts about XtremIO and Pure Storage, I didn’t actually think I would get that chance. EMC’s concessions around our initial XtremIO purchase made our next site replacement seem like a foregone conclusion. However, when the chips were counted, the hand went to another player: Pure Storage.

Last Friday, the Pure hardware arrived. Unpacking and racking was simple–no cage nuts needed, and the only necessary tool (a screwdriver) is included in the “Open Me First” box. The same instructions that I respected during our 2013 POC led the way. I recall that back then, QA on their readability was handled by the CEO’s 12-year-old son: if he could follow them, they were customer-ready. Unconventional but effective.

This morning, the Pure SE (@purebp) and I finished the cabling and boot-up config. Three IP addresses, two copper switch ports, and four FC interfaces. The longest part was my perfectionistic cable runs. What can I say? The only spaghetti I like is the edible Italian kind. Fiber and copper should be neat and clean.

After lunch, I zoned the Pure FC ports to my VMware vSphere hosts’ converged network adapters (CNAs) in Cisco Nexus Data Center Network Manager (DCNM). One FC interface from each Pure controller to each Nexus switch provides both controller path and switch fabric redundancy (the technical best practice is to use all four ports per controller–eight total for the array–but for now, I’m sticking with two per controller / four total).
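
I did my zoning in DCNM, but for anyone following along at the switch level, each zone on a fabric pairs one host initiator with one Pure target port. Here is a minimal sketch of the equivalent NX-OS configuration; the VSAN number, zone and zoneset names, and WWPNs are placeholders, not our production values:

zone name ESX01_HBA1__PURE_CT0_FC0 vsan 100
  member pwwn 20:00:00:25:b5:aa:00:01
  member pwwn 52:4a:93:7x:xx:xx:xx:00
zoneset name FABRIC_A vsan 100
  member ESX01_HBA1__PURE_CT0_FC0
zoneset activate name FABRIC_A vsan 100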

I also hit up each of my ESXi hosts via the CLI to add a SATP claim rule for multipathing, so that new Pure volumes are automatically configured with the Round Robin policy and change paths after every I/O.

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"
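
Worth noting: the rule only applies to Pure volumes discovered after it exists, so any device presented earlier has to be flipped by hand. A quick sanity check plus the per-device fix looks roughly like this (the naa ID is a placeholder; substitute a real Pure device identifier from your host):

# verify the claim rule is in place
esxcli storage nmp satp rule list | grep -i PURE
# flip an already-presented Pure device to Round Robin, switching paths every I/O
esxcli storage nmp device set --device naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx --type iops --iops 1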

Following along with the Pure Storage vSphere 5 Best Practices Guide, I deployed the Pure plugin for the vSphere Web Client. Smooth as butter. In case you’re concerned about the Administrator credentials needed to install the plugin: they’re only used to push it, so nothing new persists from them. The only persistent credentials are those of ‘pureuser’, which you set in the vSphere Web Client Pure plugin so it can reach back to the Pure array.

From the Pure web UI, which can also be accessed from the top-level plugin in vSphere Web Client, I created host groups for my Hyper-V and vSphere clusters. Since my hosts are sequentially numbered, I used the “Create Multiple…” option with values similar to those below:

pure_create_multiple

Adding the host ports for each host initially eluded me, as the default tab in the right pane is “Connected Volumes (0)” and none of the gear (Options) buttons have WWN options. Then I noticed the “Host Ports (0)” text–not quite distinguishable enough to be called a “tab”–to the right of “Connected Volumes (0)”. Clicking it exposed the “Configure Fibre Channel WWNs” option in the lower top-left gear. With those configured, I added the hosts to the host groups and moved on to storage provisioning.
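
If you would rather skip the GUI hunting, the same host, WWN, and host group objects can also be created over SSH as ‘pureuser’ with the Purity CLI. This is a rough sketch only: the host names, host group name, and WWNs are made up, and the exact flags may differ by Purity release, so check the purehost and purehgroup help output on your array first.

purehost create --wwnlist 20:00:00:25:b5:aa:00:01,20:00:00:25:b5:bb:00:01 ESX01
purehost create --wwnlist 20:00:00:25:b5:aa:00:02,20:00:00:25:b5:bb:00:02 ESX02
purehgroup create --hostlist ESX01,ESX02 vSphere-Cluster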

The guide spells it out clearly, but to create and provision a volume as a datastore directly to the cluster, go to:

  1. Host & Clusters
  2. <cluster>
  3. Related Objects (tab)
  4. Datastores (sub-tab)
  5. Actions
  6. Pure Storage > (bottom option)
  7. Create Pure Datastore

This workflow is a nice touch: it handles rescanning for the new volume (LUN) beforehand, creates the datastore, and rescans VMFS afterwards. Checking the datastores list, the new “PURE Fibre Channel Disk” is detected with an “SSD” drive type and is ready for use, multipathing with Round Robin as I configured earlier.
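
You can also double-check the plugin’s work from an ESXi host’s shell: that the datastore is mounted, the device is flagged as SSD, and the path selection policy is Round Robin. The naa ID below is a placeholder (Pure LUNs show up with an naa.624a9370 prefix):

# confirm the new VMFS datastore is mounted
esxcli storage filesystem list
# confirm the drive type ("Is SSD: true") on the new Pure device
esxcli storage core device list --device naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx | grep -i ssd
# confirm the path selection policy is VMW_PSP_RR
esxcli storage nmp device list --device naa.624a9370xxxxxxxxxxxxxxxxxxxxxxxx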

Before proceeding further, it was time to put some data on the box, so I picked a modest VM (40GB fat / 22GB used), initiated a Storage vMotion, set it to convert to Eager Zeroed Thick (best practice), and let it fly. 2 minutes, 29 seconds later, the VM was running on Pure. Most of that “2 minutes” part was due to the source spinning disks.

That about wraps up this “Going Live” intro to Pure Storage on our floor. The rest will play out over the coming days as our data makes its way to its new flashy home.

5 Comments

  1. stewdapew said:

    Chris,

    Thanks for sharing your experience, I look forward to future posts.

    Soon Purity O.E. 4.1 will be released with a new vSphere web client plug-in, which will automate all of the manual steps you went through today.

    — cheers,
    v

    April 13, 2015
  2. Matt said:

    Chris,

    I recently had my new //m20 installed and have performed all the steps outlined. Only my experience with svMotion is not as quick as yours. I did not time the first two modest VMs under 100GB, but the massive 730+ GB server took over 18 hours. My Pure professional was surprised, to say the least. One of the differences is that I am still using thin provisioning and not Eager Zeroed Thick. The LUNs are the same 1MB block size. Just wondering what your thoughts are.

    I am extremely excited, as this SA requires no knowledge about storage administration. Well, a little, but oh my… it is so simple my 12-year-old can manage it…

    September 24, 2015
    • Chris said:

      Hey Matt,

      What’s your source array/storage? Also, what’s your connectivity between hosts and source/dest arrays?

      My gut would guess that the 730GB+ server was sitting on nearline SAS (or equivalent slow 7.2K media), which suffers greatly under heavy reads and seeking. The other thought would be around the bandwidth from the source to the host and from the host to the destination, especially if you run iSCSI on 1GbE NICs.

      I’m curious about the root cause so let me know! Glad the rest is as easy for you as it was for me!

      –Chris

      September 25, 2015
  3. Matt said:

    Source is a VNX5300 running tiered storage; connectivity runs through 6120s as the fabrics in the UCS, and they dump into the N5Ks where the zoning happens. It runs FCoE at 10Gb over to 8Gb FC on the destination array. I am more reluctant to say that the EMC drives are the culprit.

    September 25, 2015
    • Chris said:

      Gotcha. Yeah, tiered storage is a likely culprit regardless of the vendor, if it is demoting “cold” data down to a slower tier. Our source was 3PAR, but we had long since turned off auto-tiering, because our data didn’t fit a pattern that could be intelligently managed by the array. If random pieces of that large VM were sitting on the cold tier, that would explain a lot.

      Networking looks solid, and the legacy defaults for queue depths, etc., won’t result in performance as poor as those 18 hours. Once you’re fully on all-flash (Pure or anyone), you can raise those settings to the best-practice values so the array isn’t throttled by the HBA.

      All that said, it still seems fishy that it took 18 hours, even if *everything* was on cold storage. I migrated a 4TB volume off of our cold 3PAR storage to Pure and it still took less than a day–I think it averaged 250GB per hour (though that’s pulling from my mental cold storage and might be inaccurate on the slow side). Hmm…

      September 25, 2015
