Category: Technology

IPv6, for those unfamiliar, is Internet Protocol version 6, the next evolution of addressing on the internet. Much like the famous “640KB ought to be enough for anybody” line attributed to Bill Gates, the designers of IPv4 (Internet Protocol version 4) looked at the roughly 4.3 billion addresses a 32-bit address space provides and figured that was plenty. Surely that’s enough! Nearly one for every person on earth?!? But how many of us have a smartphone (iPhone, Android, BlackBerry, etc.), a home computer, an Xbox or PS3…not to mention all the internet-connected devices at your place of employment?

Those 4.3 billion addresses disappear quickly, especially when a number of blocks were withheld from distribution from day 1 (10.x.x.x, 172.16.x.x-172.31.x.x, 192.168.x.x, and all the multicast and experimental chunks). Add to that the Class A’s (blocks of roughly 16.7 million addresses each) wastefully handed to large corporations in the early days, and you can see where the addresses went. Two weeks ago, the last Class A, and with it the last allotment from the centralized addressing authority, IANA, was dispensed. In technical terms, IPv4 is officially spent. Sure, ISPs still have supplies, but those are now a non-replenishable resource.

Enter IPv6: 128 bits of addressing glory. The IETF (Internet Engineering Task Force) decided that once was enough when it comes to running out of space (at least until we expand to other worlds). How many addresses is that, you ask?
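The answer: 2^128, which comes to 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses, roughly 3.4 × 10^38. Against IPv4’s not-quite-one address per person, that works out to something on the order of 5 × 10^28 addresses for every person on earth.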

Networking Technology

Are you familiar with VCE? If not, add it to your IT acronym dictionary, because it’s something you’ll hear more about in the future if virtualization, shared storage, converged networks, and/or server infrastructure are in your purview. VCE stands for “Virtual Computing Environment” and is a consortium of Cisco, EMC, VMware, and Intel (funny…take three of those initials and you get V-C-E). The goal, which they seem to be realizing, is to deliver a “datacenter in a box” (or multiple boxes, if your environment is large), and in a lot of ways, I think they have something going…

The highlights for quick consumption:

  • a VCE Vblock is an encapsulated, manufactured product (SAN, servers, network fully assembled at the VCE factory)
  • a Vblock solution is designed to be sized to your environment based on profiling of 200,000+ virtual environments
  • one of the top VCE marketed advantages is a single support contact and services center for all components (no more finger pointing)
  • because a Vblock follows “recipes” for performance needs and profiles, upgrades also come in fixed increments
  • Cisco UCS blades are added in “packs” of four (4); EMC disks come in RAID-group “packs” of five (5)
  • Vblock-0 is good for 300-800 VMs; Vblock-1 is for 800-3000 VMs; Vblock-2 supports 3000-6000 VMs
  • when crossing the VM threshold for a Vblock size, Vblocks can be aggregated

Those are the general facts. So what does all that mean for interested organizations? Is it a good fit for you? Here are some takeaways I drew from the points above as well as the rest of the briefing by our VCE, EMC, and Cisco reps…

Storage Technology Virtualization

We recently performed some upgrades on our Cisco MDS 9509 and thought we’d share the steps. You’re welcome to hop on Cisco.com and grab the user guide as well, but if you’re running a 9500 with redundant Sup-2s, this should be all you need to hop between SAN-OS 3.x versions and all the way up to NX-OS 5.x…
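Each hop centers on the same command, install all, which coordinates a nondisruptive upgrade across both Sup-2s. A rough sketch of a single hop; the image filenames and server below are placeholders for whichever version you’re stepping to:

    ! stage the new images on the active supervisor
    switch# copy ftp://<server>/m9500-sf2ek9-kickstart-mz.<version>.bin bootflash:
    switch# copy ftp://<server>/m9500-sf2ek9-mz.<version>.bin bootflash:
    ! preview what the upgrade will touch before committing
    switch# show install all impact kickstart bootflash:m9500-sf2ek9-kickstart-mz.<version>.bin system bootflash:m9500-sf2ek9-mz.<version>.bin
    ! run the nondisruptive upgrade
    switch# install all kickstart bootflash:m9500-sf2ek9-kickstart-mz.<version>.bin system bootflash:m9500-sf2ek9-mz.<version>.bin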

Networking Technology

If you’re running a VMware vSphere cluster on a two-tier (or greater) Cisco network, you might be in a situation like I was. You see, we built in redundancy when we planned our core and access switches, but the design had one significant flaw (see the simplified diagram to the right). Pretend all of those lines are redundant paths. Looks good so far, right? If CoreA goes down, ESX(i) can still send traffic up through AccessB to CoreB. The reverse applies if -B is down, and likewise for either of the Access- switches.

The catch comes for VMs on ESX(i) when one of the Core- switches goes down. ESX(i) balances VMs across the ports in the Virtual Machine port group(s). If a port goes down, it will smartly move the VM(s) to another port that is up. If an “upstream” hop like CoreB goes down, though, ESX(i) doesn’t know about that event, so it keeps its VMs in place, oblivious to the fact that the VMs on AccessB ports are as good as dead to the world. [Enter Link-State Tracking]
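Link-state tracking closes that gap. On the access switch, you bind the host-facing (downstream) ports to the core-facing (upstream) uplinks; when the upstream link dies, the switch shuts the downstream ports, ESX(i) finally sees a link failure, and its normal NIC failover kicks in. A minimal sketch on a Catalyst-style access switch; the interface numbers are examples, not our actual layout:

    ! define tracking group 1
    AccessB(config)# link state track 1
    ! the uplink toward CoreB is the upstream member
    AccessB(config)# interface GigabitEthernet0/48
    AccessB(config-if)# link state group 1 upstream
    ! the ESX(i)-facing ports are downstream members
    AccessB(config)# interface range GigabitEthernet0/1 - 16
    AccessB(config-if-range)# link state group 1 downstream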

Networking Technology Virtualization

We’ve been running ESX since the days of v2.5, but with the news that v4.1 will be the last “fat” version with a Red Hat-based service console, we decided it was time to transition to ESXi. The 30+ step guide below describes our process using an EMC CLARiiON CX3 SAN and Dell hosts with redundant QLogic HBAs (Fibre Channel environment). The Navisphere portions (steps 4-7) can also be scripted; see the naviseccli sketch after the list.

  1. Document network/port mappings in vSphere Client on existing ESX server
  2. Put host into maintenance mode
  3. Shut down host
  4. Remove host from Storage Group in EMC Navisphere
  5. Create dedicated Storage Group per host for the boot LUN in Navisphere
  6. Create the 5GB boot LUN for the host
  7. Add the boot LUN to the host’s Storage Group
  8. Connect to the host console via the Dell Remote Access Card (DRAC)
  9. Attach ESXi media via DRAC virtual media
  10. Power on host (physically or via the DRAC)
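As promised, the Navisphere work in steps 4-7 can be scripted with the Navisphere CLI if you have a stack of hosts to migrate. A rough sketch, assuming naviseccli is installed; the SP address, host and group names, and LUN/RAID-group numbers are placeholders to adjust for your array:

    # step 4: pull the host out of the shared Storage Group
    naviseccli -h <SP-IP> storagegroup -disconnecthost -host esx01 -gname Shared_SG
    # step 5: create a dedicated boot Storage Group for the host
    naviseccli -h <SP-IP> storagegroup -create -gname esx01_Boot
    # step 6: bind a 5GB RAID-5 boot LUN (LUN 50 on RAID group 0 here)
    naviseccli -h <SP-IP> bind r5 50 -rg 0 -cap 5 -sq gb
    # step 7: present the boot LUN to the host’s Storage Group as HLU 0
    naviseccli -h <SP-IP> storagegroup -addhlu -gname esx01_Boot -hlu 0 -alu 50
    naviseccli -h <SP-IP> storagegroup -connecthost -host esx01 -gname esx01_Boot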

Storage Technology Virtualization

The default heap size for VMFS-3 is 16MB, which allows a maximum of 4TB of open virtual disk capacity on a single ESX host. In ESX 3.0,…
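As a quick aside, the heap size is exposed as an advanced setting on the host, so you can check and raise it from the service console. A sketch (the 64MB figure is just an example value, and a reboot is required for the change to take effect):

    # show the current VMFS-3 heap size
    esxcfg-advcfg -g /VMFS3/MaxHeapSizeMB
    # raise it to 64MB (example; reboot afterward)
    esxcfg-advcfg -s 64 /VMFS3/MaxHeapSizeMB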
