I’m a happy camper! My new lab gear is in.
I believe we all need some kind of lab environment to play with; otherwise we just don’t learn the hands-on stuff nearly as well or as quickly. Some employers have lab environments in which to test, and mine is no different, but I prefer to have control over what I deploy and when I deploy it. That way I have no one to blame but myself 🙂
That said, I was running my lab on an old Dell Precision 390 with nothing but 4 cores, 8GB of RAM, and local storage. That was fine a couple of years ago when I put it together, but now, not so much.
The new gear is actually server-grade stuff. And reasonably inexpensive, if you ask me.
For my storage, I stumbled on a great deal on an HP ProLiant N40L MicroServer. After repurposing some disks I had lying around the house, I had a small, reasonable storage server. I installed a bunch of SATA disks: three 500GB 7,200 RPM spindles and a 1TB 7,200 RPM spindle in the built-in drive cage. But that wasn’t quite enough for what I had in mind. So I bought an IcyDock 4-bay 2.5″ drive chassis for the 5.25″ bay in the MicroServer, and added an IBM M1015 SAS/SATA PCI-e RAID controller to drive the 2.5″ devices. I had an Intel 520 Series 120GB SSD (bought for the ESXi host, but it didn’t work out) and a WD Scorpio Black 750GB drive just hanging around, so I added another SSD and another Scorpio Black so I could mirror the devices and have some redundancy.
So there’s my SAN and NAS box. I installed FreeNAS to a 16GB USB stick and carved out four ZFS pools – platinum, gold, silver, and bronze. Creative, I know LOL.
- Platinum is a ZFS mirror of the 2 SSDs
- Gold is a RAID-Z set of the 3 500GB spindles
- Silver is a ZFS mirror of the 2 Scorpio Blacks
- Bronze is a ZFS volume on the single 1TB spindle
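For reference, the pool layout above maps to roughly the following `zpool` commands. This is just a sketch – the `ada0`…`ada7` device names are assumptions, and FreeNAS actually builds these through its web UI rather than the CLI:

```shell
# Platinum: ZFS mirror of the two SSDs (device names assumed)
zpool create platinum mirror ada0 ada1

# Gold: RAID-Z across the three 500GB spindles
zpool create gold raidz ada2 ada3 ada4

# Silver: ZFS mirror of the two Scorpio Blacks
zpool create silver mirror ada5 ada6

# Bronze: a plain, non-redundant pool on the single 1TB spindle
zpool create bronze ada7
```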
I debated swapping Gold and Silver at length, but in the end left the layout as described.
There are two things I don’t like about this setup, and they both revolve around the networking baked into the MicroServer.
- Jumbo Frames aren’t supported by the FreeBSD driver for the onboard Broadcom NIC. This could be fixed in the future by a driver update or the official release of FreeNAS 8.2 (I’m running beta 2 at the moment)
- There’s only one onboard NIC. I’d have liked two NICs, but for the price, maybe I’ll add a PCI-e dual-port Intel Gig card. That would solve both dislikes.
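For comparison, on a NIC whose FreeBSD driver does support jumbo frames – such as the Intel `em`/`igb` ports on a card like the one mentioned above – enabling them is just an MTU change. A sketch, assuming the interface name:

```shell
# Bump the MTU to 9000 on the first Intel gigabit port
# (interface name "em0" is an assumption)
ifconfig em0 mtu 9000

# In FreeNAS, the equivalent is putting "mtu 9000" in the
# interface's options field so the setting persists across reboots.
```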
Platinum, Gold, and Silver are presented via the iSCSI Target on the FreeNAS box as zVol extents. Bronze is shared via NFS/CIFS, primarily for ISO storage.
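Under the hood, a zVol extent is just a ZFS block device carved out of a pool. A rough sketch of what FreeNAS does when you create one (the dataset name and size here are made up for illustration):

```shell
# Create a 100GB zvol on the gold pool (name and size are examples)
zfs create -V 100G gold/esx-datastore1

# The resulting block device appears under /dev/zvol/, which the
# FreeNAS iSCSI target then exports to initiators as an extent.
ls /dev/zvol/gold/
```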
As for the ESXi host itself, well here we go:
- ASUS RS-500A-E6/PS4 chassis
- 2 x AMD Opteron 6128 8-core CPUs
- 64GB of Kingston ECC RAM
- 250GB 7200RPM spindle from the MicroServer
- 1TB 7200RPM spindle that was recycled from the old lab gear
I chose this seemingly overpowered setup for a few reasons (yep, another bullet-point list):
- Price (the server and its constituent parts only ran me ~$2100USD)
- Nearly pre-assembled. I’m not one for building machines anymore
- Capacity. Instead of running multiple physical ESXi hosts, I chose to run my lab nested.
- Compatibility. This server’s Intel counterpart is on the VMware HCL. That didn’t mean this one would work, but I felt the odds were high. The onboard NICs are also both Intel Pro 1000s, which helps.
- LOM was included. This is important to me, as I don’t want/need/have tons of extra monitors/keyboards hanging around
So all the parts came in. I installed the disks, CPUs, and RAM, dropped an ESXi CD in the drive, booted it up, and wondered – where’s the remote console? I hadn’t thought about that, so I jacked in a monitor and keyboard, only to find that the Delete key is necessary to get into the BIOS to configure the iKVM. In my case, that posed a bit of a problem: the only keyboards in my house, wired or wireless, are Apple keyboards, since I recently let the last physical Windows box leave the house. So I had to see if the iKVM pulled DHCP. I fired up iNet, my trusty Mac network scanning utility, scanned my network, and there it was – a MAC address identifying as “ASUSTek Computer, Inc”. That had to be it, so I plugged the IP into a web browser. Now I just had to figure out the username and password – documentation to the rescue! With everything configured, I booted into the ESXi installer, and there you have it: one nice 16-core, 64GB ESXi host.
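If you don’t have iNet handy, the same trick – finding a LOM controller by its MAC vendor – works from any machine with nmap installed. A sketch, assuming your LAN is 192.168.1.0/24 (adjust the subnet to your network):

```shell
# Ping-scan the local subnet; on the same LAN, nmap reports each
# host's MAC address and vendor (requires root for MAC resolution)
sudo nmap -sn 192.168.1.0/24 | grep -i -B 2 "asustek"
```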
It’s doing rather well so far. I’ve got the storage attached, networking set up, and all kinds of VMs running right now – vCenter Operations, View, SQL, vCloud Director, VMware Data Recovery, vShield Manager, a couple of Win7 desktops, and a few virtualized ESXi hosts – and this is what the box is doing:
Just to reinforce the importance of Transparent Page Sharing, at the moment, this host is sharing ~17GB of RAM.
Not to repeat myself, but I’m a happy camper. I’ve got View set up so I can work with the environment while I’m on the road, and my next step is to get vCD rolling and happy with a couple of virtualized ESXi hosts so I can start plugging away at building class-specific vApps to keep up with the different courses we run.
I hope this helps and perhaps even gives you some inspiration for your own lab environment. I’m happy to answer any questions you may have about the setup, just drop me a line!