New Lab Server

Here I am, procrastinating on other stuff, to talk about the new lab setup.  I promised in the last video that I’d do a writeup, and I figured “no time like the present”!

So what do I have going on?  I’ve ripped out the ML370 and ASUS RS500A boxes and replaced them with a veritable steal from the Dell Outlet.  I decided to go all-in on a nested lab, since I can’t come up with a good reason to put together a physical one.

So I found a Precision T5610 with a pair of Xeon E5-2620v2s.  Twelve hyper-threaded cores of processor get-up-and-go.  The scratch-and-dent unit I bought had 32 GB of RAM installed, along with a 1 TB spindle.  Not a bad start.  But I had gear to work with, and needed more RAM.

So I ordered a nice 64 GB upgrade kit from Crucial (well, four 16 GB kits, technically), hoping it’d give me a 96 GB box to work with.  The pre-installed RAM and the Crucial RAM didn’t play nicely together (Windows wouldn’t load from the spindle, nor would the ESXi installer start).  Boo.  So I pulled the factory RAM, ran memtest86 for about a day just to be safe, and am proceeding, for the moment, with 64 GB.

Still just a start, though, as I needed storage.  I had an Icy Dock 4-bay SATA chassis and an IBM M1015 SAS RAID controller in my little HP MicroServer.  That box didn’t need them anymore, so a transplant was in order.  Four screws later, I had lots of room for 2.5” SATA drives.  I already had a pair of 120 GB Intel 520 SSDs in Icy Dock trays, and I pulled the two 1 TB Crucial M550 SSDs from the ASUS box.  Now I have plenty of lab storage.  If I need more, I can always take the performance hit and attach something from the DS412+ (like I’ve already done with my “ISO_images” NFS share).
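For reference, attaching an NFS export like that as a datastore is a one-liner from the ESXi shell.  A minimal sketch, assuming a made-up NAS address and export path – swap in your own:

```
# Mount an NFS export from the NAS as a datastore (address and path are examples)
esxcli storage nfs add --host 192.168.1.50 --share /volume1/ISO_images --volume-name ISO_images

# Verify the mount
esxcli storage nfs list
```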

**EDIT** I ran into another snag.  The LSI controller and Icy Dock combination don’t seem to be playing nicely with the host.  The SSDs randomly drop offline, which makes me sad.  It could be a power thing (these big SSDs are kind of notorious for power problems, especially in small drive chassis or NAS units), but I’m not going to push it any further.  I pulled the controller and drive cage out of the system, and I should have a SATA power splitter once UPS shows up today (I was too lazy to leave the house LOL), so I’m just going to run the SSDs off the on-board SATA controller channels and be happy about it.
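If you’re chasing similar drop-outs, the quickest sanity check I know is from the ESXi shell – when a drive falls off the bus, it simply vanishes from the device list.  A sketch (adapter and device names will differ on your box):

```
# List storage adapters; the on-board controller shows up as vmhbaN
esxcli storage core adapter list

# List attached devices; a dropped SSD disappears from this output
esxcli storage core device list | grep "Display Name"
```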

At this point, I thought I was ready for ESXi, but I hit a snag during installation.  The T5610 has an Intel Gigabit NIC on board, so I expected no issues, but the 82579 isn’t recognized by ESXi 5.5u2 for whatever reason.  No biggie – this thing has a nice tool-less case, and less than a minute later, I had a dual-port Intel NIC installed and ready to go (an 82571EB, if you’re curious).  One more reboot and ESXi was installing.
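If you’re ever unsure whether ESXi actually sees a NIC, it’s easy to check from the shell.  Roughly:

```
# NICs that ESXi claimed a driver for
esxcli network nic list

# Every network device on the PCI bus, claimed or not
lspci | grep -i net
```

If the card shows up in lspci but not in the NIC list, you’re in driver territory – which is exactly where the 82579 left me.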

Everything’s pretty happy at the moment.  Here’s what it looks like now that I’ve built up all my virtual ESXi hosts:

[Screenshot: lab inventory, 2014-09-24 08:47]

[Screenshot: lab inventory, 2014-09-24 08:48]

Oh, and the “Remote_External” cluster is a set of ESXi virtual machines I have running in VMware Workstation on a Precision M4800 laptop.

I’ll follow up with some network details shortly, since that’s also gotten _way_ complex recently.  All in the name of scenario-based play with NSX. More fun later!

-jk

Deploying NSX Manager

Well, another day, another post.  

Ok, that may be exaggerating a bit, but I’m trying.  Again.  Still.

I’m rebuilding my lab (more to come on that later), and with the big push toward VMware NSX in my life right now, I thought I’d capture my fun and excitement in deploying everything.  Here’s Part 1, where I deploy the NSX Manager and register it with vCenter.  Nothing earth-shattering here, but it might help someone.
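One note for the CLI-inclined: the OVA can also be pushed out with ovftool instead of the vSphere Client wizard.  This is just a sketch, not what I did in the video – the vsm_* property names and every inventory path below are assumptions, so probe your OVA first with a bare `ovftool VMware-NSX-Manager.ova` to see what it actually wants:

```
# Deploy the NSX Manager OVA straight into vCenter (all names/paths are examples)
ovftool --acceptAllEulas --name=nsx-manager \
  --datastore=SSD_01 --network=Mgmt_VM --diskMode=thin --powerOn \
  --prop:vsm_hostname=nsx-manager.lab.local \
  --prop:vsm_ip_0=192.168.1.20 \
  --prop:vsm_netmask_0=255.255.255.0 \
  --prop:vsm_gateway_0=192.168.1.1 \
  VMware-NSX-Manager.ova \
  'vi://administrator@vcenter.lab.local/Lab/host/Mgmt_Cluster/'
```

Either way, the vCenter registration itself still happens afterward, from the NSX Manager web UI.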

Still here. Or “Meandering thoughts about the lab that time forgot”

So, I do still exist.  I am still here, and look at this!  I even update once in a while.

My focus has shifted a little bit over the past few months.  I’m working a lot with NSX, and have been teaching the NSX: Install, Configure, Manage course for a few months now.  Like, a lot.  Q3 has been absolutely nuts for me, especially with the announcement of NSX 6.1 at VMworld.

So, what does all that mean for me, and this blog, exactly?

Well, for the blog, it means a little bit less vSphere stuff, and a little bit more networking.  

Which leads us to “What does this mean for me?”.  Well, I’ve got a stack of Cisco gear here in the office now.  That must mean Cisco certification prep.  Someday, I’ll have a break and be able to actually do that 🙂

So, what Cisco stuff is in the pipeline?  CCNA Routing and Switching is first – that’s the basic stuff.  CCNA Data Center is soon to follow.  I’ve been tossing around CCDA, because design is just a thing I do.  Do I pursue those to the NP/ND level?  I don’t know.  My guess is no, but I never leave anything off the table.

I’ve also got the VCP-NV on my list, and the forthcoming VCIX-NV when it launches.  I took a swing at the VCP-NV while I was out at VMworld, but I just missed the passing mark.  I was caught completely off-guard by some of the stuff in the exam.  It was my own fault, however, as I spent a grand total of zero time preparing and studying.  I even talked to the cert devs about the Exam Blueprint, which I didn’t actually read until the evening _after_ I wrote the exam.

That’ll be easy to rectify, however.  I know what I need to study to bump my score up over the pass mark.  Now I just need what everyone needs – time.

I also have some new lab gear incoming.  I caught a steal from the Dell Outlet the other day: a scratch & dent T5610 with dual 6-core Xeon E5s is on its way, and an additional 64 GB of RAM just showed up on my front porch today.  I’ll talk through the build I’m planning as I work through it.  I figure with 24 threads and 96 GB of RAM total, I should be able to run many, many nested ESXi hosts, especially with the VMware Tools fling and the MacLearn dvfilter fling for nested ESXi.  My old ASUS RS500A ran pretty well with two 6-node clusters and a 4-node management cluster, with 8 GB of RAM per nested ESXi host.  And the RS500A only had 64 GB of RAM.
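For anyone building along: the dvfilter-maclearn VIB installs on the _physical_ host, while the Tools fling VIB goes inside each nested ESXi guest.  Each nested host’s VM also needs hardware virtualization exposed to it, and the MacLearn filter gets attached per vNIC via advanced settings.  A minimal sketch of the relevant .vmx lines, following the fling’s usual example (the filter slot number can vary, so check the fling page):

```
vhv.enable = "TRUE"
ethernet0.filter4.name = "dvfilter-maclearn"
ethernet0.filter4.onFailure = "failOpen"
```

Repeat the filter lines for each additional vNIC (ethernet1, ethernet2, and so on).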

So I’m going to consolidate the ASUS and the old ML370 into a single, more modern (and likely more power efficient) box.  The tradeoff is that I’m giving up the iKVM and iLO capabilities of my existing hosts.  It’ll be ok.  I guess I’ll just have to look at IP KVMs now.  

In other news, I’ve moved my edge to higher-end gear.  Nope, nothing so fancy or cost-prohibitive as Cisco, but definitely powerful.  I picked up a Ubiquiti Networks EdgeRouter Lite the other day, and it’s pretty slick.  I’ve also got a complementary UniFi AC on the way, which should be waiting for me when I get home.  EdgeOS is based on Vyatta Core 6.3 (from just before the Brocade acquisition), so I’m familiar, in passing, with the OS.  This thing is fast!
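If you’ve ever driven Vyatta, the configure/set/commit workflow carries straight over to EdgeOS.  A quick sketch of carving out a lab segment – the interface, subnet, and names are all made up for illustration:

```
configure
set interfaces ethernet eth1 address 192.168.50.1/24
set interfaces ethernet eth1 description "Lab segment"
set service dhcp-server shared-network-name LAB subnet 192.168.50.0/24 default-router 192.168.50.1
set service dhcp-server shared-network-name LAB subnet 192.168.50.0/24 start 192.168.50.100 stop 192.168.50.200
commit
save
exit
```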

So this will all end up in a home network / lab series as I do more cool stuff with it.

As it stands, I gotta get back to class – we’re just about done deploying Controllers…