Auto Deploy with the vCenter Server Appliance

Auto Deploy is probably one of my favorite new features of vSphere 5.  The ability to build an ESXi image (with Image Builder), and automate the deployment of stateless hosts quickly and seamlessly just gives me a warm fuzzy.

So how do we set this up?

There are two options:

  1. Install Auto Deploy from the vCenter DVD, set up an external DHCP and TFTP server, set up your images, and go.
  2. Deploy the vCenter Server Appliance (vCSA), configure the existing DHCP server, start the DHCP and TFTP servers, set up your images, and go.

I went with option number 2, since there was that much less to install.  Just configure and run!

I started by adding a NIC to the vCSA, since I didn’t want my management network also serving up DHCP.  Since everything I have at the moment in the lab is virtual, I chose to set up a deployment vSwitch just for this purpose.  In your lab or production environment, you may attach that deployment network to an existing network.

I copied the ifcfg-eth0 file in /etc/sysconfig/networking/devices/ to ifcfg-eth1 (the second NIC will be eth1) and edited the new one:

# cp /etc/sysconfig/networking/devices/ifcfg-eth0 /etc/sysconfig/networking/devices/ifcfg-eth1

# vi /etc/sysconfig/networking/devices/ifcfg-eth1

It should look something like this (I’m using 10.1.1.0/24 as my deployment network):

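Something along these lines, at least – the exact keys are best copied from the appliance’s existing ifcfg-eth0, so treat the values below as an educated guess (here I’m giving eth1 the 10.1.1.1 address on the deployment network):

DEVICE='eth1'
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='10.1.1.1'
NETMASK='255.255.255.0'
BROADCAST='10.1.1.255'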

Then I created a new symlink in /etc/sysconfig/network pointing to the new file:

# ln -s /etc/sysconfig/networking/devices/ifcfg-eth1 /etc/sysconfig/network/ifcfg-eth1

This provides a persistent configuration for the network device should you reboot your vCenter Server Appliance.
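To bring the new interface up without a full reboot, the appliance’s standard network scripts should do the trick – a quick example, assuming the usual SLES tooling is present on the vCSA:

# ifup eth1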

That finishes up the Deployment Network configuration.  Now we need to configure all of the other services.

I started with DHCP.  In poking around the /etc/ directory on the appliance, I found that VMware kindly provided a mostly pre-configured configuration template for the DHCP server: /etc/dhcpd.conf.template


So, being kind of lazy, I simply backed up the existing dhcpd.conf file:

# cp /etc/dhcpd.conf /etc/dhcpd.conf.orig

And then copied the template into place as the config:

# cp /etc/dhcpd.conf.template /etc/dhcpd.conf

And got to editing.  My final config file looks like this:

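Roughly like this, anyway – the addresses below are examples from my 10.1.1.0/24 deployment network (with the appliance’s eth1 at 10.1.1.1 acting as both the DHCP and TFTP server), and the gPXE bootfile name should be verified against what the template on your appliance references:

subnet 10.1.1.0 netmask 255.255.255.0 {
    range 10.1.1.100 10.1.1.200;
    # point PXE clients back at the appliance for their boot file
    next-server 10.1.1.1;
    # gPXE loader from the Auto Deploy TFTP bundle
    filename "undionly.kpxe.vmw-hardwired";
}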

Once that’s done, you can start the DHCP server:

# /etc/init.d/dhcpd start

Then you need to start the TFTP server:

# /etc/init.d/atftpd start
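If you want both services to come back on their own after an appliance reboot, enabling them in the runlevel configuration should take care of it (an extra step I’m assuming works with the stock SLES chkconfig tooling on the appliance):

# chkconfig dhcpd on

# chkconfig atftpd on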

At this point, I have an ESXi VM PXE booting and doing all the right things – SUCCESS!

I don’t have Auto Deploy configured from PowerCLI quite yet.  I’ve got a default image loaded up, but without Auto Deploy rules waiting, it’s a wash.  I’ll update when I have things set up more completely.  You probably know more about PowerShell and PowerCLI than do I, but this is what I’m getting (even right after I Connect-VIServer). Something’s wacky with PowerCLI communications:

PowerCLI error

I’ll get it figured out, but until then, take this as a start to your Auto Deploy adventures with the vCenter Server Appliance!

***EDIT***

Well, silly me figured out the “cannot connect” problem with PowerCLI. Turns out the Auto Deploy services weren’t started on my vCenter Server Appliance. A quick jaunt to https://&lt;vCSA address&gt;:5480, then to the Services tab, then clicking the magic “Start ESXi Services” button resolved that one. I think the “Stopped” status for ESXi Autodeploy was what gave it away 🙂 I’m off and running again!
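With the service running, the PowerCLI side is next. I haven’t finished mine yet, but as a rough sketch, getting a basic rule in place should look something like this – the vCenter name, depot path, image profile name, and rule name are all placeholders, and I’m assuming the Auto Deploy and Image Builder snap-ins that ship with PowerCLI 5.0:

Connect-VIServer vcsa.lab.local

# Load an ESXi offline bundle so its image profiles become available
Add-EsxSoftwareDepot C:\depots\VMware-ESXi-5.0.0-depot.zip

# Pick an image profile from the depot (the name here is a placeholder)
$img = Get-EsxImageProfile -Name "ESXi-5.0.0-standard"

# Assign that image to every host that boots via Auto Deploy, then activate the rule
New-DeployRule -Name "lab-rule" -Item $img -AllHosts
Add-DeployRule -DeployRule "lab-rule"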

Up and coming

So, I really do have work stuff going on, along with all the tinkering lately.  That’s the problem with new gadgets!

I’ve been gearing up on the vSphere 5 courses from VMware, and I gotta say, you should take these.  Even if it’s just the 2-day What’s New course for you VMware gurus.  What’s New is the condensed “look at all the cool new stuff” class that gets you some hands-on time with the new knobs and dials, as well as some good discussion time.  The new Install, Configure, Manage class is no slouch either, but we’re gently massaging it to work better for most of the folks who will likely be taking it.

Add to that the fact that I’m working on a post (more back-burner) about my take on why customers should think about the cloud.  And I’m tossing around a post about automation, and why.  Not so much how, but why.

On the front burner, however, I’m in the process of working through the new Auto Deploy feature of vSphere 5, specifically the integration of Auto Deploy and its related components into the vCenter Server Appliance (vCSA).  Everything’s baked in, so I’m doing a “what to edit and how to make it work” post.  I’m having just a touch of difficulty, I think due to the wacky nature of my lab (it should be taken care of soon enough, I hope), but the framework is there.

Oh, and add to that my DSL modem gave up the ghost.  I’d say it let out all its magic smoke (you know, the magic smoke that all electronics run on – when the smoke escapes, the electronics don’t work anymore!), but there was no puff of smoke.  It just stopped.  I looked up and there were no lights.  No biggie, I’ve got a U-Verse installation scheduled already to replace the DSL with a fatter pipe, and my cable modem is still the primary pipe.  It just means that my next class won’t have any network redundancy if something goes wrong.

So that’s what’s going on.  Blog breaking, DSL dying, vCSA tinkering fun.  Stay tuned for more goodness!

 

vRAM Licensing Reprise

So I promised to follow up, and here I am. I briefly touched on the new licensing structure the other day, and left with the words “DON’T PANIC”.

I stand by my words today, and here’s why:

The vRAM-based entitlement is only one factor to keep an eye on. We still purchase licenses per socket, the vRAM entitlement is pooled at the vCenter level, and we only need to count it for VMs that are powered on. We no longer need to be concerned about the number of cores per socket or physical memory limits in the host.

Let’s think about this for a moment.

Say I’m a small shop – I figure I need 3 hosts to virtualize my physical environment (say, all of 30 physical servers). So I shell out for 3 dual-socket, 4-core hosts with 32GB of RAM each, and vSphere Standard licenses all around (maybe I saved some cash and bought Essentials or Essentials Plus – those both use the Standard license). 6 total licenses are necessary, each giving me an entitlement to 24GB of vRAM, for 144GB of vRAM total. If I virtualize each of my physical machines and give them each 4GB of RAM, I’m looking at 120GB of vRAM allocated – still 24GB under my entitlement. Sure, I’m overcommitting a bit, with only 96GB of physical RAM available to the cluster, but at the same time, I’m going to guess that not all of those VMs actually need 4GB of RAM.
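To lay that arithmetic out in one place:

3 hosts x 2 sockets = 6 Standard licenses
6 licenses x 24GB of vRAM each = 144GB of pooled vRAM entitlement
30 VMs x 4GB of vRAM each = 120GB of vRAM allocated
Entitlement headroom = 144GB - 120GB = 24GB
Physical RAM in the cluster = 3 x 32GB = 96GB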

In this scenario, I still have room to kick each host up to 48GB of RAM before I really have to worry about memory overcommitment in earnest.

But let’s take this scenario out just a little farther. I’ve upgraded my hosts to 48GB of RAM each, and as my environment’s grown, I’m finding that I’m getting ready to overcommit memory. I know my environment, and realize that a little bit of overcommitment isn’t a bad thing. I just need to buy 1 more vSphere CPU license, and my vRAM entitlement grows by another 24GB.

Or let’s throw another curveball – instead of upgrading RAM, I replace my hosts. I keep the memory specs the same – 32GB each – but I get new boxes with 8 cores per socket. In vSphere 4.1, that meant either upgrading my licenses to Enterprise Plus, or purchasing an additional, say, Standard license for each socket. Now, all I have to do is turn up the new boxes, remove the license from the 4-core node, and reapply it to the 8-core node. Problem solved, and no more money spent on software.

What about a fairly large shop? What if I’ve got a pair of 20-node clusters running a ton of VMs?

My large shop has been virtualizing a long time, and has a virtualize-first policy. It has also matured its provisioning processes to go along with virtualization. Virtual machines in this environment are generally provisioned with 1GB of vRAM. The hosts are 4-socket, 6-core systems, and we’re running Enterprise Plus, which gives a 48GB vRAM entitlement per license. That means I can deliver 3,840GB of vRAM to each cluster (20 hosts x 4 sockets x 48GB), or 3,840 VMs per cluster (assuming the VMs are provisioned with 1GB of vRAM each). Now, that’s 192 VMs per host, which is fairly uncommon consolidation as far as I’ve seen.

The highest consolidation I’ve seen (with my own eyes) is 60:1. But more typically, I tend to see closer to 20:1. Even at 4GB of vRAM per VM, at 20:1 consolidation you’re still only allocating 80GB of vRAM per host, which is well under the 192GB of vRAM entitlement based on the 4 sockets licensed on the host. That gives us a lot of breathing room.

Sure, your VMs will vary in size, 1GB here, 8GB there, but the point is still the same: in most cases, your licensing will not cause you any trouble. I really think that most customers will find better flexibility in this new licensing model.

What’s even better is that the vRAM entitlement is pooled in a vCenter, so you’re not counting it host by host – it doesn’t matter where a given workload happens to be running.

Change is tough, but it’s not as bad as it may seem at first glance.

What’s with the new vSphere vRAM licensing?

Ok, the cat’s out of the bag, the outcry has begun, but is the new vRAM licensing really as bad as you think?

My answer: No.

I’ve noticed that people seem to be absolutely up in arms about the new licensing structure, but keep this in mind: you’re not licensing your PHYSICAL memory, you’re licensing VIRTUAL memory. If you buy an Enterprise Plus license (which entitles you to 48GB of vRAM), that may well cover that host you have with 128GB of physical RAM, depending on your overcommitment.

You can also pool vRAM entitlements within vCenter, meaning that 3 Enterprise Plus licenses grant you a pool of 144GB (3 x 48GB) in your vCenter environment, and it doesn’t matter on which of your hosts your VMs are using the vRAM.

Watch this space; I’ll have a more in-depth writeup soon-ish, but the moral of the story right now is taken from the words on the cover of The Hitchhiker’s Guide to the Galaxy: DON’T PANIC

-jk

How do you approach your virtual networking?

I ask silly questions sometimes, but I do it for a reason. As a teacher, I try to inspire you to think. So I ask questions that may seem a little goofy, but I also try to gently guide you down a new path.

I’ve been using this for a while in my vSphere classes (everything I teach that discusses networking, at least) and thought it was worth sharing. I lead off the discussion with a simple question: do you treat an ESXi host any differently than any other physical server while planning to attach it to the network? Sure, an ESXi host likely has more interfaces to cable, but that’s not all you need to think about. A fundamental shift in thought process should occur when thinking about your vSphere hosts and your network.

If you look at the vSphere network architecture long enough, it’s clear that you’re not just connecting a host to your network. You’re actually connecting more infrastructure to your network. You’re connecting physical switches to virtual switches, not connecting hosts to physical switches. Your vmnic devices aren’t really NICs at all – they’re bridging physical Ethernet to virtual Ethernet. Once that realization is made, everything’s different.

I’ll admit, I didn’t come to this realization all on my own – a friend of mine actually introduced me to the idea. We were discussing something about a class, and he drew on the whiteboard something that could easily be described as a cabinet in the context of a physical data center, and then began to explain that it could just as easily represent an ESX host (this was a couple of years ago). And the epiphany struck.

It’s easy for us systems guys (and gals) to avoid this thought process. We were never programmed that way. But the times, they are a changin’, and we need to remember to change with them.

If you think about your networking like any old host, let me suggest, kindly, that you’re doing it wrong. Start thinking about adding a cabinet to your raised floor, and then you’ll be right on track.


Ramblings on ESXi

As VMware continues the push to a Service-Console-less world with ESXi, there are things that we may want to contemplate with our customers.

Something that came to mind earlier today was logging. ESXi, by default, has a built-in syslog service, but it writes logs to a local memory-based file system. That means that when the host goes offline, the logs just go away. There is a method by which one can redirect those messages to a specific datastore, but let’s face it, centralized logging is all the rage! If nothing else, it provides a remote facility that won’t be modified if someone gets into the ESXi host and cleans entries up after they’re done. To me, that’s some pretty important security. So how does one redirect syslog on an ESXi host, you ask?

It’s as simple as changing a single Advanced Setting via the vSphere Client. Take a look at this brief blog entry at VirtualizationAdmin.com by David Davis: http://blogs.virtualizationadmin.com/davis/2010/02/22/how-to-redirect-esxi-system-logs-to-a-central-syslog-server/
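If you’d rather script it than click through the vSphere Client, the vCLI includes a command for the same setting. A quick example – the ESXi host name and syslog server below are placeholders:

# vicfg-syslog --server esxi01.example.com --username root --setserver syslog.example.com --setport 514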

Most of the other things we want to think about in the transition are ultimately COS-related, the Console OS (COS) being the biggest difference between ESX and ESXi.

Does the customer have agents running in the COS for anything?

  • Backup agents – Perhaps it’s time to revisit backup strategies and methodologies.
  • Hardware management agents – Insight Manager, OpenManage, etc. – Many of these functions are being replaced by vendor-specific CIM providers. VMware makes four ISOs available for ESXi Installable – one for each of the major vendors (HP, IBM, Dell), plus the generic ESXi image. The vendor-specific distributions have the appropriate CIM providers cleanly integrated. We should work with our customers in their labs to determine whether the CIM providers have the functionality necessary for their specific environments.

Scripts in the COS – customers have developed many scripts to help with management activities in the ESX environment. It is time to begin investigating how to transition these scripts to a remote environment. There are a couple of directions a customer could take in porting their scripts (a quick example follows the list below):

  • vCLI – the vCLI is a set of tools available from VMware to provide much of the COS toolkit on a remote host. The vCLI is available in 3 forms: a Windows installable package, a Linux installable package, and the vSphere Management Assistant (vMA). The two installable packages can be installed on and run from a Windows or Linux environment. The vMA is a Linux-based virtual appliance that can be integrated into a customer’s environment and is designed to provide a prepackaged remote scripting environment for a virtual infrastructure. The vMA provides a number of benefits over the installable vCLI tools, such as vi-fastpass authentication to streamline session authentication without compromising security, and simplified deployment as an OVF appliance.
  • PowerShell/PowerCLI – PowerShell is fast becoming a favorite management and scripting toolkit of ESX administrators, partially due to the overwhelming number of Windows administrators who have inherited responsibility for managing the virtual infrastructure. The PowerCLI toolkit from VMware is a robust set of cmdlets and objects to be used from PowerShell scripts to work with a virtual infrastructure.
  • Other SDKs from VMware – VMware provides SDKs for API access from Perl and Java as well, if those languages are more to a customer’s liking.
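As a small, concrete illustration of what porting to the vCLI looks like, here is a command that used to run in the COS alongside its remote equivalent – the host name and credentials are placeholders:

In the COS:

# esxcfg-vswitch -l

From a vCLI installation or the vMA:

# vicfg-vswitch --server esxi01.example.com --username root -l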

There are still some pieces of functionality that are missing from this stack, admittedly. I’ve spoken with customers about the lack of tools available to manage things like RAID controllers from ESXi. Many of these things are up to the hardware vendors to implement, but VMware can be a conduit for functionality requests as well. We can work with customers to file feature requests through VMware (http://www.vmware.com/support/policies/feature.html). When filing such a request, please be as specific as possible regarding what functionality is being requested. Using the above-mentioned RAID controller management as an example, a good feature request may document that a user would like to be able to add disks to a RAID array, create a new RAID array, destroy a RAID array, and rebuild a RAID array after disk replacement. The more specific the requests are, the more VMware can help implement the functionality.

Expanded functionality seems to be the focus of the next release of vSphere (from the small amounts of info flowing out of VMware’s recent Partner Exchange), and the product continues to improve. Just because a customer doesn’t want to migrate now is no reason to put off testing, evaluating, and porting their customer-developed management tools.

Just remember, I’m a consultant and a trainer, and these are the kinds of things I think about 🙂

-jk