Here I am, procrastinating on other stuff, to talk about the new lab setup. I promised in the last video that I'd do a write-up, and I figured "no time like the present"!
So what do I have going on? I’ve ripped out the ML370 and ASUS RS500A boxes, and replaced them with a veritable steal from the Dell Outlet. I decided to go all-in for a nested lab, since I can’t come up with a good reason to put together a physical lab.
So I found a Precision T5610 with a pair of Xeon E5-2620 v2s. Twelve hyper-threaded cores of processor get-up-and-go. The Scratch and Dent unit I bought had 32 GB of RAM installed, along with a 1 TB spindle. Not a bad start. But I had gear to work with, and needed more RAM.
So I ordered a nice 64 GB upgrade kit from Crucial (well, four 16 GB kits, technically), hoping it'd give me a 96 GB box to work with. The pre-installed RAM and the Crucial RAM didn't play too nicely together (Windows wouldn't load from the spindle, nor would the ESXi installer start). Boo. So I pulled out the factory RAM, ran memtest86 for about a day just to be safe, and am proceeding, for the moment, with 64 GB.
Still just a start, though, as I needed storage. I had an Icy Dock 4-bay SATA chassis and an IBM M1015 SAS RAID controller in my little HP Microserver. That box didn't need those things anymore, so a transplant was necessary. Four screws later, I had lots of room for 2.5" SATA drives. I already had a pair of 120 GB Intel 520 SSDs in Icy Dock trays, and I pulled the two 1 TB Crucial M550 SSDs from the ASUS box. Now I have plenty of lab storage. If I need more, I can always take the performance hit and attach something from the DS412+ (like I've already done with my "ISO_images" NFS share).
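For anyone who hasn't mounted an NFS share on ESXi before, it's a one-liner from the host shell. A rough sketch below; the NAS address and export path are placeholders, not my actual Synology config:

```shell
# Mount an NFS export from the NAS as an ESXi datastore.
# 192.168.1.50 and /volume1/ISO_images are example values only.
esxcli storage nfs add \
    --host=192.168.1.50 \
    --share=/volume1/ISO_images \
    --volume-name=ISO_images

# Confirm the datastore is mounted
esxcli storage nfs list
```

Same thing is doable through the vSphere Client UI, of course, but the CLI version is handy for scripting lab rebuilds.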
**EDIT** I ran into another snag. The LSI controller and Icy Dock combination don't seem to be playing nicely with the host. The SSDs just randomly drop offline, which makes me sad. It could be a power thing (these big SSDs are kind of notorious for power problems, especially in small drive chassis or NAS units), but I'm not going to push it any further. I pulled the controller and drive cage out of the system, and I should have a SATA power splitter after UPS shows up today (I was too lazy to leave the house LOL), so I'm just going to run the SSDs off the on-board SATA controller channels and be happy about it.
At this point, I thought I was ready for ESXi, but upon installation, I hit a snag. The T5610 has an Intel Gigabit NIC on board. As such, I expected no issues, but the 82579 isn't recognized by ESXi 5.5 U2 for whatever reason. No biggie – this thing has a nice tool-less case, and less than a minute later, I had a dual-port Intel NIC installed and ready to go (an 82571EB, if you're curious). One more reboot and ESXi was installing.
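If you hit a similar "no network adapters" wall, a quick sanity check from the ESXi shell (assuming you have shell access enabled) will tell you which NICs the hypervisor actually claimed versus what's physically in the box:

```shell
# NICs ESXi has loaded a driver for -- an unsupported NIC won't appear here
esxcli network nic list

# All PCI network devices, claimed or not; useful for spotting
# hardware (like the 82579) that ESXi sees but has no driver for
lspci | grep -i network
```

If a device shows up in `lspci` but not in the NIC list, you're in driver-support territory: either a supported add-in card (my route) or hunting down a community driver VIB.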
Everything’s pretty happy at the moment. Here’s what it looks like now that I’ve built up all my virtual ESXi hosts:
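A quick note on the nesting itself, in case it's new to you: each virtual ESXi host needs hardware-assisted virtualization exposed to it by the physical host. On ESXi 5.5 that's a per-VM setting, which boils down to one line in the nested host's .vmx file (or the "Expose hardware assisted virtualization to the guest OS" checkbox in the vSphere Web Client):

```
vhv.enable = "TRUE"
```

Without it, the nested ESXi installs fine but can only run 32-bit guests, which rather defeats the purpose.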
Oh, and the "Remote_External" cluster is a set of ESXi virtual machines I have running in VMware Workstation on a Precision M4800 laptop.
I’ll follow up with some network details shortly, since that’s also gotten _way_ complex recently. All in the name of scenario-based play with NSX. More fun later!