Have you upgraded your vCenter Server Appliance from vSphere 5.1 to vSphere 5.5 yet?

It’s been well over a year since I’ve posted here (closer to 18 months, really).  I’ll apologize for that now 🙂

The reality, though, is that it’s been a long, exhausting, and rewarding year and a half, and I’ve taken on some different responsibilities at work.  That’s eaten up quite a bit of my time, so I decided to roll some of that work – or at least some of its fruits – back into this blog.

We’ve just released the latest, greatest version of vSphere – vSphere 5.5.  With a new version comes a need to upgrade.

Some of you may be using the vCenter Server Appliance.  There’s an Update feature in the appliance to help update from one version to another.  But right next to that in the management UI is a tab called Upgrade.  And that’s the process I’ve just stepped through for you.  Keep in mind here that I haven’t read any of the upgrade KBs (shame on me), but this is a relatively intuitive process, I think.  

Take a look at the video – it’s about 20 minutes long.  Some time has been shaved off; the entire process took me about an hour, but you don’t want to watch a bunch of silence and spinning wheels, do you?  I didn’t think so.

I will throw out one caveat (which I didn’t show onscreen): I did have to regenerate the self-signed certificates before the Web Client worked properly.

Test this process extensively before you try this in a production environment!! Please, don’t try this blindly!

I learned quite a bit during this process, and I hope it helps you a bit.

New Lab is here!

I’m a happy camper!  My new lab gear is in.

I believe we all need some kind of lab environment to play with, otherwise we just don’t learn the hands-on stuff nearly as well or as quickly.  Some employers have lab environments in which to test.  My employer is no different, but I prefer to have control over what I deploy, and when I deploy it.  That way I have no one to blame about anything but myself 🙂

That said, I was running my lab in an old Dell Precision 390 with nothing but 4 cores, 8GB of RAM, and local storage.  That was great a couple of years ago when I put it together, but now, not so much.

The new gear is actually server-grade stuff.  And reasonably inexpensive, if you ask me.

For my storage, I stumbled on a great deal on an HP ProLiant N40L MicroServer.  After repurposing some disk I had lying around the house, I had a small, reasonable storage server.  I installed a bunch of SATA disk: three 7200 RPM 500GB spindles and a 1TB 7200 RPM spindle in the built-in drive cage.  But that wasn’t quite enough for what I had in mind, so I bought an IcyDock 4-bay 2.5″ drive chassis for the 5.25″ bay in the MicroServer and added an IBM M1015 SAS/SATA PCI-e RAID controller to drive the 2.5″ devices.  I had an Intel 520 Series 120GB SSD (bought for the ESXi host, but it didn’t work out) and a WD Scorpio Black 750GB drive just hanging around, so I added another SSD and another Scorpio Black to mirror the devices and get some redundancy.

So there’s my SAN and NAS box.  I installed FreeNAS to a 16GB USB stick, and carved up 4 ZFS pools – platinum, gold, silver, and bronze.  Creative, I know LOL.

  • Platinum is a ZFS mirror of the 2 SSDs
  • Gold is a RAID-Z set of the 3 500GB spindles
  • Silver is a ZFS mirror of the 2 Scorpio Blacks
  • Bronze is a ZFS volume on the single 1TB spindle

ZFS Volumes

I debated at length about swapping Gold and Silver, but in the end I left the layout as described.
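
I built the pools through the FreeNAS GUI, but for the curious, the layout above is roughly equivalent to the following from a plain ZFS command line – the adaX device names are just placeholders for however your disks happen to enumerate:

# zpool create platinum mirror ada0 ada1     # the two SSDs
# zpool create gold raidz ada2 ada3 ada4     # the three 500GB spindles
# zpool create silver mirror ada5 ada6       # the two Scorpio Blacks
# zpool create bronze ada7                   # the lone 1TB spindle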

There are two things I don’t like about this setup, and they both revolve around the networking baked into the MicroServer.

  1. Jumbo frames aren’t supported by the FreeBSD driver for the onboard Broadcom NIC.  This could be fixed in the future by a driver update or by the official release of FreeNAS 8.2 (I’m running beta 2 at the moment).
  2. There’s only one onboard NIC.  I’d have liked two, but for the price, maybe I’ll add a PCI-e dual-port Intel Gigabit card.  That would solve both complaints.

Platinum, Gold, and Silver are presented via the iSCSI Target on the FreeNAS box as zVol extents.  Bronze is shared via NFS/CIFS, primarily for ISO storage.
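
If you’re curious what consuming those extents looks like from the ESXi side, it boils down to something like this at the command line – the adapter name (vmhba33) and the 10.0.0.50 address for the FreeNAS box are placeholders, so substitute your own:

# esxcli iscsi software set --enabled=true
# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.50:3260
# esxcli storage core adapter rescan --adapter=vmhba33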

As for the ESXi host itself, well here we go:

  • ASUS RS-500A-E6/PS4 chassis
  • 2 x AMD Opteron 6128 8-core CPUs
  • 64GB of Kingston ECC RAM
  • 250GB 7200RPM spindle from the MicroServer
  • 1TB 7200RPM spindle that was recycled from the old lab gear

I chose this seemingly overpowered setup for a few reasons (yep, another bullet-point list):

  • Price (the server and its constituent parts only ran me ~$2100USD)
  • Nearly pre-assembled.  I’m not one for building machines anymore
  • Capacity.  Instead of running multiple physical ESXi hosts, I chose to run my lab nested.
  • Compatibility.  This server’s Intel counterpart is on the VMware HCL.  That didn’t mean this one would work, but I felt the odds were high.  The onboard NICs are also both Intel Pro 1000s, which helps.
  • LOM was included.  This is important to me, as I don’t want/need/have tons of extra monitors/keyboards hanging around

So all the parts came in; I installed the disks, CPUs, and RAM, dropped an ESXi CD in the drive, booted it up, and wondered – where’s the remote console?  I hadn’t thought about that, so I jacked in a monitor and keyboard, only to find that the Delete key is needed to get into the BIOS to configure the iKVM.  In my case, that posed a bit of a problem: the only keyboards in the house, wired or wireless, are Apple keyboards, since I recently let the last physical Windows box leave my house.  So I had to see if the iKVM pulled DHCP.  I got out iNet, my trusty Mac network-scanning utility, scanned my network, and there it was – a MAC address identifying as “ASUSTek Computer, Inc”.  That had to be it, so I fired up a web browser and plugged in the IP.  Now I just had to figure out the username and password – documentation to the rescue!  I got everything configured, booted into the ESXi installer, and there you have it: one nice 16-core, 64GB ESXi host.

Host Summary

It’s doing rather well so far.  I’ve got the storage attached, networking set up, and all kinds of VMs running right now – vCenter Operations, View, SQL, vCloud Director, VMware Data Recovery, vShield Manager, a couple of Win7 desktops, and a few virtualized ESXi hosts – and this is what the box is doing:

Resource Usage

Just to reinforce the importance of Transparent Page Sharing: at the moment, this host is sharing ~17GB of RAM.

Shared Memory

Not to repeat myself, but I’m a happy camper.  I’ve got View set up so I can work with the environment while I’m on the road.  My next step is to get vCD rolling and happy with a couple of virtualized ESXi hosts, so I can start plugging away at building class-specific vApps and keep up with the different courses we run.

I hope this helps, and perhaps even gives you some inspiration for your own lab environment.  I’m happy to answer any questions you may have about the setup – just drop me a line!

Auto Deploy with the vCenter Server Appliance

Auto Deploy is probably one of my favorite new features of vSphere 5.  The ability to build an ESXi image (with Image Builder) and automate the deployment of stateless hosts quickly and seamlessly just gives me a warm fuzzy.

So how do we set this up?

There are two options:

  1. Install Auto Deploy from the vCenter DVD, set up an external DHCP and TFTP server, set up your images, and go.
  2. Deploy the vCenter Server Appliance (vCSA), configure the DHCP server that’s already on the appliance, start the DHCP and TFTP services, set up your images, and go.

I went with option number 2, since there was that much less to install.  Just configure and run!

I started by adding a NIC to the vCSA, since I didn’t want my management network also serving up DHCP.  Since everything I have at the moment in the lab is virtual, I chose to set up a deployment vSwitch just for this purpose.  In your lab or production environment, you may attach that deployment network to an existing network.

I copied the ifcfg-eth0 file in /etc/sysconfig/networking/devices/ to ifcfg-eth1 (the 2nd NIC will be eth1) and edited the new one:

# cp /etc/sysconfig/networking/devices/ifcfg-eth0 /etc/sysconfig/networking/devices/ifcfg-eth1

# vi /etc/sysconfig/networking/devices/ifcfg-eth1

It should look something like this (I’m using 10.1.1.0/24 as my deployment network):

ifcfg-eth1
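
In case the image doesn’t come through, a minimal SUSE-style ifcfg for the deployment NIC (the appliance is SLES-based) would look something like this – the 10.1.1.1 address for the appliance’s leg on the deployment network is an assumption, so pick whatever fits your 10.1.1.0/24 network:

STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.1.1.1'
NETMASK='255.255.255.0'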

Then I created a new symlink in /etc/sysconfig/network to the new file:

# ln -s /etc/sysconfig/networking/devices/ifcfg-eth1 /etc/sysconfig/network/ifcfg-eth1

This provides a persistent configuration for the network device should you reboot your vCenter Server Appliance.

That finishes up the Deployment Network configuration.  Now we need to configure all of the other services.

I started with DHCP.  Poking around the /etc/ directory on the appliance, I found that VMware kindly provides a mostly pre-configured template for the DHCP server: /etc/dhcpd.conf.template

/etc/dhcpd.conf.template

So, being kind of lazy, I simply backed up the existing dhcpd.conf file:

# cp /etc/dhcpd.conf /etc/dhcpd.conf.orig

And then copied the template into place as the config:

# cp /etc/dhcpd.conf.template /etc/dhcpd.conf

And got to editing.  My final config file looks like this:

edited /etc/dhcpd.conf
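
If that image is hard to read, the working parts of a config like this generally look something like the following – the subnet matches my 10.1.1.0/24 deployment network, the range and the appliance’s 10.1.1.1 address are my own picks (adjust to taste), and the filename is the gPXE bootloader the appliance’s TFTP server hands out for Auto Deploy:

ddns-update-style none;

subnet 10.1.1.0 netmask 255.255.255.0 {
  range 10.1.1.100 10.1.1.200;             # addresses handed to booting hosts
  next-server 10.1.1.1;                    # the vCSA's deployment-network address
  filename "undionly.kpxe.vmw-hardwired";  # gPXE bootloader served over TFTP
}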

Once that’s done, you can start the DHCP server:

# /etc/init.d/dhcpd start

Then you need to start the TFTP server:

# /etc/init.d/atftpd start

At this point, I have an ESXi VM PXE booting and doing all the right things – SUCCESS!
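
One more note: starting the services this way doesn’t necessarily mean they’ll come back after the appliance reboots.  If you want them to, chkconfig on the SLES-based appliance should take care of it:

# chkconfig dhcpd on
# chkconfig atftpd on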

I don’t have Auto Deploy configured from PowerCLI quite yet.  I’ve got a default image loaded up, but without Auto Deploy rules waiting, it’s a wash.  I’ll update when I have things set up more completely.  You probably know more about PowerShell and PowerCLI than I do, but this is what I’m getting (even right after I Connect-VIServer) – something’s wacky with PowerCLI communications:

PowerCLI error

I’ll get it figured out, but until then, take this as a start to your Auto Deploy adventures with the vCenter Server Appliance!

***EDIT***

Well, silly me figured out the “cannot connect” problem with PowerCLI. Turns out the Auto Deploy services weren’t started on my vCenter Server Appliance. A quick jaunt to https://<appliance address>:5480, then to the Services tab, then clicking the magic “Start ESXi Services” button resolved that one. I think the “Stopped” status for ESXi Autodeploy was what gave it away 🙂 I’m off and running again!

What’s with the new vSphere vRAM licensing?

Ok, the cat’s out of the bag, the outcry has begun, but is the new vRAM licensing really as bad as you think?

My answer: No.

I’ve noticed that people seem to be absolutely up in arms about the new licensing structure, but keep this in mind: you’re not licensing your PHYSICAL memory, you’re licensing VIRTUAL memory. If you buy an Enterprise Plus license (which entitles you to 48GB of vRAM), that may well cover that host you have with 128GB of physical RAM, depending on your overcommitment.

You can also pool vRAM entitlements within vCenter, meaning that three Enterprise Plus licenses grant you a 144GB pool (3 × 48GB) across your vCenter environment, and it doesn’t matter which of your hosts the VMs consuming that vRAM are running on.
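
To put some numbers on it: that 144GB pool comfortably covers, say, 45 VMs configured with 3GB of vRAM apiece (135GB total), no matter how much physical RAM sits in the hosts underneath or which host any given VM lands on.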

Watch this space – I’ll have a more in-depth writeup soon-ish – but for now the moral of the story is best summed up by the words on the cover of The Hitchhiker’s Guide to the Galaxy: DON’T PANIC

-jk