More Lion Fun: TotalTerminal

So, a while ago I talked up a SIMBL plugin for Terminal.app called Visor. I don’t use Visor any longer, as it’s been renamed and updated to a new app: TotalTerminal. It’s still written and maintained by BinaryAge, and still works the same. But they’ve dropped the SIMBL requirement, and it now ships with an installer (and auto-updates, as well!).

It’s still customizable, and works wonderfully. Go check it out (and upgrade if you’re still using Visor!).

The Lion Sleeps Tonight

Well, not really, but I couldn’t resist the title. I’m experiencing new things with Mac OS X Lion this weekend, so here’s what’s going on.

I just bought a new iMac. Sandy Bridge i7, 3.4GHz, loaded up with 16GB of RAM and a 256GB SSD. Oh, and there’s a 1TB spindle installed too. All of that behind 27 diagonal inches of beautiful Apple display, built right in.

I bought this machine for two reasons. First, so I could clear off any personal stuff from my work-issued MacBook Pro. Second, this thing has enough horsepower to run a sizable chunk of my home lab, alongside my old, unsupported, loaded-as-I-can-make-it Dell Precision 390. So some of my infrastructure is running on an ESXi host, and some in Fusion VMs. I’m still working out the final details. I’m also contemplating buying a new Thunderbolt Cinema display so I can have a 2nd GigE interface on the iMac. It may just be a pipe dream, but the thought is there.

This configuration will suffice until next year (probably closer to 9 months, if I have my way), when I can pick up and populate a couple of ASUS barebones 1U boxes (assuming, of course, that they or their equivalents are still available). I’ve got them all picked out now, and assuming everything works out, I’ll have 32 AMD cores and 128GB of RAM to work with. That should keep me happy for a few years from a compute perspective. I can only complain about my lab for so long before I put the wheels in motion to make it happen. I’m still trying to figure out what I want to do for storage, though. I’m thinking a stack of ~100GB SSDs, since they’re relatively cheap and will do me well for running VMs without worrying about tons of contention. I don’t need much for a lab; 200 or 300GB will probably be fine, especially with vSphere Thin Provisioning. I’m sure I’ll keep some spindles around for overflow, since they’re really cheap. But anyway, I digress from the point here (not that I’ve ever been known to do that).

What I really came to talk about was Lion’s new Recovery partition and doing a fresh Lion install on a Mac. I’m wiping the MacBook Pro clean. As of earlier this morning, it had a Mercury Pro SSD from Other World Computing (for the OS and VMs) and a 750GB Scorpio Black from WD (for my Dropbox folder and media) installed. About 30 minutes ago, I ripped out the spindle, so now the laptop has just the SSD, and the optical drive has been reinstalled.

So, hardware work out of the way, I booted into Lion’s Recovery HD (hold the “option” key while you reboot, just in case you didn’t know how to get there), and there it was – what looked like, well, an OS X installer. I went ahead and erased the SSD partition, and then told the installer I wanted a fresh install of OS X. After confirming my eligibility, confirming the disk to which I wanted to install, and entering my Mac App Store credentials, it’s now “Downloading additional components.” The installer has been at this for ~15-20 minutes, with an hour and a half still to go. It appears that it really _is_ downloading Lion from Apple’s servers somewhere. I’m connected to a relatively fat pipe, and it’s still looking like 1.5 hours. I’m glad I connected to the Time Capsule on the cable modem rather than the DSL. Yikes!

So over the course of the weekend, I’ll be resetting things on the MacBook Pro, but it’ll be nice, as I’ll be able to keep work and play (mostly) separate now, between the iMac, iPad, and new Lion install on the MacBook Pro.  I’m a happy camper.  Oh, and while I’m waiting, I get to put up more drywall in the basement office.  Things are coming together!

vRAM Licensing Reprise

So I promised to follow up, and here I am. I briefly touched on the new licensing structure the other day, and left with the words “DON’T PANIC”.

I stand by my words today, and here’s why:

The vRAM-based entitlement is only one factor in what needs to be observed. We still purchase licenses per socket, the virtual memory entitlement is pooled at the vCenter level, and we only need to count it for VMs that are powered on. We no longer need to be concerned about the number of cores per socket or physical memory limits in the host.

Let’s think about this for a moment.

Say I’m a small shop – I figure I need 3 hosts to virtualize my physical environment (say, all of 30 physical machines). So I shell out for 3 dual-socket, 4-core hosts with 32GB of RAM each, and vSphere Standard licenses all around (maybe I saved some cash and bought Essentials or Essentials Plus – those both use the Standard entitlement). 6 total licenses are necessary, each giving me an entitlement to 24GB of vRAM: 144GB of vRAM total. If I virtualize each of my physical machines and give them each 4GB of RAM, I’m looking at 120GB of vRAM allocated. I’m still 24GB under my entitlement. Sure, I’m overcommitting a bit, with only 96GB of physical RAM available to the cluster, but at the same time, I’m going to guess that not all of those machines actually require 4GB of RAM.
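
If you want to sanity-check that arithmetic, here’s a quick back-of-the-envelope sketch (plain Python, nothing vSphere-specific – the numbers are just the ones from the scenario above):

```python
# Small-shop scenario: 3 dual-socket hosts on vSphere Standard licensing.
licenses = 3 * 2              # one license per socket -> 6 Standard licenses
vram_per_license_gb = 24      # Standard vRAM entitlement per license
pool_gb = licenses * vram_per_license_gb   # pooled at the vCenter level

allocated_gb = 30 * 4         # 30 VMs at 4GB of vRAM each
physical_gb = 3 * 32          # physical RAM across the cluster

print(f"pool: {pool_gb}GB, allocated: {allocated_gb}GB, "
      f"headroom: {pool_gb - allocated_gb}GB, physical: {physical_gb}GB")
# pool: 144GB, allocated: 120GB, headroom: 24GB, physical: 96GB
```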

In this scenario, I still have room to kick each host up to 48GB of RAM before I really have to worry about memory overcommitment in earnest.

But let’s take this scenario out just a little farther. I’ve upgraded my hosts to 48GB of RAM each, and as my environment’s grown, I’m finding that I’m getting ready to overcommit memory. I know my environment, and realize that a little bit of overcommitment isn’t a bad thing. I just need to buy 1 more vSphere CPU license, and my vRAM entitlement grows by another 24GB.

Or let’s throw another curveball – instead of upgrading RAM, I replace my hosts. I keep the memory specs the same – 32GB each – but I get new boxes with 8 cores per socket. In vSphere 4.1, that meant either upgrading my licenses to Enterprise Plus, or purchasing an additional, say, Standard license for each socket. Now, all I have to do is turn up the new boxes, remove the license from the 4-core node, and reapply it to the 8-core node. Problem solved, and no more money spent on software.

What about a fairly large shop? What if I’ve got a pair of 20-node clusters running a ton of VMs?

My large shop has been virtualizing for a long time, and has a virtualize-first policy. It has also matured its provisioning processes to go along with virtualization. Virtual machines in this environment are generally provisioned with 1GB of vRAM. The hosts are 4-socket, 6-core systems running Enterprise Plus, which gives a 48GB vRAM entitlement per license. This means that I can deliver 3,840GB of vRAM to each cluster – 3,840 VMs per cluster (assuming the VMs are provisioned with 1GB of vRAM each). Now, that’s 192 VMs per host, which is a fairly uncommon consolidation ratio, as far as I’ve seen.
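
The same arithmetic, sketched out (again just plain Python with the numbers from this scenario):

```python
# Large-shop scenario: 20-node clusters on Enterprise Plus licensing.
hosts, sockets_per_host = 20, 4
vram_per_license_gb = 48          # Enterprise Plus entitlement per license
vram_per_vm_gb = 1                # this shop provisions 1GB per VM

pool_gb = hosts * sockets_per_host * vram_per_license_gb
vms = pool_gb // vram_per_vm_gb
print(pool_gb, vms, vms // hosts) # 3840GB pool, 3840 VMs, 192 VMs per host
```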

The highest consolidation I’ve seen (with my own eyes) is 60:1, but more typically, I tend to see closer to 20:1. Even at 4GB of vRAM per VM, at 20:1 consolidation, you’re still only allocating 80GB of vRAM per host, which is well under the 192GB of vRAM entitlement based on the 4 sockets licensed on the host. That gives us a lot of breathing room.
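
And the per-host view of that more typical ratio, in the same back-of-the-envelope style:

```python
# Per-host check at a typical 20:1 consolidation ratio.
vms_per_host = 20
vram_per_vm_gb = 4
allocated_gb = vms_per_host * vram_per_vm_gb   # vRAM allocated on one host
entitled_gb = 4 * 48                           # 4 sockets x 48GB each
print(allocated_gb, entitled_gb)               # 80 192 -- plenty of headroom
```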

Sure, your VMs will vary in size – 1GB here, 8GB there – but the point is still the same: in most cases, your licensing will not cause you any trouble. I really think that most customers will find better flexibility in this new licensing model.

What’s even better is that the vRAM entitlement is pooled in a vCenter, so you’re not stuck with a workload on one host.

Change is tough, but it’s not as bad as it may seem at first glance.

What’s with the new vSphere vRAM licensing?

Ok, the cat’s out of the bag, the outcry has begun, but is the new vRAM licensing really as bad as you think?

My answer: No.

People seem to be absolutely up in arms about the new licensing structure, but keep this in mind: you’re not licensing your PHYSICAL memory, you’re licensing VIRTUAL memory. If you buy an Enterprise Plus license (which entitles you to 48GB of vRAM), that may well cover that host you have with 128GB of physical RAM, depending on your overcommitment.

You can also pool vRAM entitlements within vCenter, meaning that 3 Enterprise Plus licenses grant you a 144GB (3 × 48GB) pool in your vCenter environment, and it doesn’t matter on which of your hosts your VMs are using the vRAM.
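
To make the pooling concrete, a quick sketch (plain Python, using the numbers above):

```python
# Pooling: the entitlement follows the licenses, not the hosts.
eplus_vram_gb = 48
pool_gb = 3 * eplus_vram_gb   # 3 Enterprise Plus licenses -> 144GB pool
# A host with 128GB of physical RAM is fine, as long as the vRAM allocated
# to powered-on VMs across the whole vCenter stays within the 144GB pool.
print(pool_gb)                # 144
```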

Watch this space – I’ll have a more in-depth writeup soon-ish – but the moral of the story right now is taken from the words on the cover of The Hitchhiker’s Guide to the Galaxy: DON’T PANIC

-jk

Mac App: Visor

Admittedly, I spend less and less time at my Mac thanks to my newer iPad-based workflow, and even less time in a terminal (well, until the past couple of days). And those last couple of days are what bring me to talk about this little app. Well, SIMBL plugin, really.

Visor was written by Antonin over at BinaryAge, and offers a simple, yet rather spectacular, drop-down implementation of Terminal.app, very much akin to the console from Quake. It’s triggered by a hotkey; once the Visor window is in place, you have your Mac’s Terminal.app ready to go, with the option to open new tabs as well, just like in a traditional Terminal.app window.

Visor is completely customizable, and can be made into whatever you want, though I have found the defaults to be pretty workable.

This all comes to light as I’m ramping up on the vCloud Director classes, and as such, I’m setting up vCloud Director (VCD) in my lab environment. VCD runs on Linux and is backed by an Oracle database (which I’m also setting up on Linux), so I’m spending a fair amount of time SSH’d into my VMs to get things set up.

Anyway, I have to recommend Visor if you spend any appreciable time in a terminal on your Mac. I love the elegance of something so simple, yet so functional!


VCAP-DCD

Well, it happened on Friday. I passed the VCAP4-DCD exam. So now I have two VCAP credentials to drop behind my name.

I was rather surprised by the exam. It was definitely challenging, and asked many questions that I didn’t feel I completely understood. I guess I knew better than I thought 🙂 There are times when you’re out building these designs that you just don’t understand what the customer is asking, so it’s not so far-fetched – except that, in the field, you can clarify things with the customer. Not so much with a proctored exam.

As I did with the DCA exam, here’s my instructor’s take on the DCD. If you didn’t read my DCA post, I talk about the exam from this perspective because there are many resources already written about how to prep for the exam, but I don’t see many that discuss the overall mapping of VMware Education’s offerings to the exam. That is in part because VMware doesn’t tend to develop courses toward certification, but toward a job role. There is definitely some overlap there, however, as the certifications are also developed toward a job role.

Unlike the VCAP-DCA, the VCAP-DCD only has one instructor-led course offering support for the exam: vSphere: Design Workshop. vDW is a 3-day course designed to teach, not how to design, per se, but how to approach design. We all want to have a nice design checklist or if-then flowchart to take into all of our design engagements, but we all know how different each of those engagements will be. We can’t always follow leading practices for one reason or another, but that’s ultimately OK, because we can justify our deviations. And, really, that’s the key. How does a deviation map to a business requirement? That’s what the vDW is geared toward teaching. It’s really more an exercise in critical thinking, which is of paramount importance when putting together a design for a customer.

This critical thinking is absolutely validated in the VCAP-DCD exam. While absolutely not required, I would suggest, without reservation, the vDW course for anyone approaching the DCD.

A couple of points of disclosure, though: I feel like I’ve been clear so far, but I will repeat that I am employed directly by VMware Education, and I would also like to note that the vDW is probably my favorite class to deliver right now. It’s also a partner competency requirement. So I may seem biased – and in many ways, I am – but I’ve been a big proponent of good instructor-led training for far longer than I’ve been an instructor.

Anyway, it’s off to the vCloud with me. I’m ramping up on the vCloud Director classes, so I hope to see you in one somewhere along the line!

-jk


How do you approach your virtual networking?

I ask silly questions sometimes, but I do it for a reason. As a teacher, I try to inspire you to think. So I ask questions that may seem a little goofy, but I also try to gently guide you down a new path.

I’ve been using this for a while in my vSphere classes (everything I teach that discusses networking, at least) and thought it was worth sharing. I lead off the discussion with a simple question: do you treat an ESXi host any differently than any other physical server while planning to attach it to the network? Sure, an ESXi host likely has more interfaces to cable, but that’s not all you need to think about. A fundamental shift in thought process should occur when thinking about your vSphere hosts and your network.

If you look at the vSphere network architecture long enough, it’s clear that you’re not just connecting a host to your network. You’re actually connecting more infrastructure to your network. You’re connecting physical switches to virtual switches, not connecting hosts to physical switches. Your vmnic devices aren’t really NICs at all – they’re bridging physical Ethernet to virtual Ethernet. Once that realization is made, everything’s different.
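
If it helps to see that idea in a sketch, here’s a toy model (purely illustrative – not any real VMware API, and all the names are hypothetical) of what a vmnic actually is in this picture:

```python
# Toy model of the idea above: a vmnic is an uplink that bridges a physical
# switch port to a virtual switch -- not a host NIC with its own identity
# on the network. (Illustrative only; not a real VMware API.)
from dataclasses import dataclass, field

@dataclass
class PhysicalSwitchPort:
    switch: str
    port: int

@dataclass
class VirtualSwitch:
    name: str
    uplinks: dict = field(default_factory=dict)   # vmnic name -> physical port
    vm_ports: list = field(default_factory=list)  # VMs patched into this switch

    def connect_uplink(self, vmnic: str, phys_port: PhysicalSwitchPort) -> None:
        # Switch-to-switch cabling, not host-to-switch cabling.
        self.uplinks[vmnic] = phys_port

    def connect_vm(self, vm_name: str) -> None:
        self.vm_ports.append(vm_name)

vswitch0 = VirtualSwitch("vSwitch0")
vswitch0.connect_uplink("vmnic0", PhysicalSwitchPort("tor-switch-1", 12))
vswitch0.connect_uplink("vmnic1", PhysicalSwitchPort("tor-switch-2", 12))
vswitch0.connect_vm("web01")

# Note what's absent: the ESXi host never appears as an endpoint. Plugging
# in a vmnic extends the network with another switch -- like adding a cabinet.
print(vswitch0)
```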

I’ll admit, I didn’t come to this realization all on my own – a friend of mine actually introduced me to the idea. We were discussing something about a class, and he drew on the whiteboard something that could easily be described as a cabinet in the context of a physical data center, and then began to explain that it could just as easily represent an ESX host (this was a couple of years ago). And the epiphany struck.

It’s easy for us systems guys (and gals) to avoid this thought process. We were never programmed that way. But the times, they are a changin’, and we need to remember to change with them.

If you think about your networking like any old host, let me suggest, kindly, that you’re doing it wrong. Start thinking about adding a cabinet to your raised floor, and then you’ll be right on track.