NSX-T: The Manager of All Things NSX

You’ve seen it before. The monolithic NSX Manager from which all VMware SDN is spawned. The API endpoint. Provider of the UI. NSX Manager is the centerpiece of the world of NSX.

Welcome back to my adventure in moving from NSX-V to NSX-T!

NSX-T, just like NSX-V, is split into three functional planes: Management, Control, and Data.

The Management Plane is mostly the NSX Manager, but it also includes Management Plane Agents on the hosts. The Management Plane is a lot of things: my source of truth for network configuration, the persistent repository for the network state that I want, the API and UI provider, and more.

Just like in NSX-V, you deploy the NSX Manager as a virtual appliance. VMware ships the appliance in two different formats now – OVF and qcow2. You see, NSX-T is not nearly as beholden to vSphere as its cousin NSX-V. NSX-T is perfectly happy without VMware’s hypervisor and management stack. You can run happily with only RHEL or Ubuntu as your KVM platform, should you desire. This makes NSX-T a great option for those driving OpenStack for their private SDDC platform.

There are so many more options in the OVF deployment now, starting with four different sizes (Small, Medium, Medium Large, and Large):

Small – 2 vCPU, 8 GB RAM, 140 GB disk
Medium – 4 vCPU, 16 GB RAM, 140 GB disk
Medium Large – 6 vCPU, 24 GB RAM, 140 GB disk
Large – 8 vCPU, 32 GB RAM, 140 GB disk

You get to choose your management network (as usual), and decide whether your management will run on IPv4 or IPv6.

Next come three passwords: one each for the admin, root, and audit users (yep, you have easily accessible root access here!). You can also specify different usernames for the admin and audit roles, if you don’t like the defaults.

Then you’ve got the host identity and role. Standard IP address and hostname stuff here, with the addition of the NSX role. Here, again, you have choices:

nsx-manager: This is the NSX Manager we know and love. The focal point for UI and API interaction.

nsx-policy-manager: Want to start automating security policies and the like? You need one of these, too (yep, a second appliance).

nsx-cloud-service-manager: Got NSX Cloud? Then get one of these.

nsx-manager+nsx-policy-manager: This multi-role option is only supported on VMConAWS deployments, so don’t try this on-prem.

Finally, you set up your DNS configuration, NTP, and whether you want to allow SSH logins. And then you wait a minute for everything to deploy.
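If you would rather script all of that than click through the OVF wizard, ovftool can pass every one of those properties in one shot. Here’s a rough sketch against a purely hypothetical lab: the nsx_* property names, the deployment option names, and every address, hostname, and password below are my assumptions, so probe the OVA with ovftool first and use the exact names your version reports.

ovftool --name=nsxmgr-01 --deploymentOption=small \
  --datastore=lab-ds01 --network="Lab-Management" \
  --acceptAllEulas --allowExtraConfig \
  --prop:nsx_role=nsx-manager \
  --prop:nsx_hostname=nsxmgr-01.lab.local \
  --prop:nsx_ip_0=192.168.10.21 \
  --prop:nsx_netmask_0=255.255.255.0 \
  --prop:nsx_gateway_0=192.168.10.1 \
  --prop:nsx_dns1_0=192.168.10.10 \
  --prop:nsx_ntp_0=192.168.10.10 \
  --prop:nsx_isSSHEnabled=True \
  --prop:nsx_passwd_0='SuperSecret1!' \
  --prop:nsx_cli_passwd_0='SuperSecret1!' \
  nsx-unified-appliance-2.2.0.ova \
  vi://administrator%40vsphere.local@vcsa.lab.local/Lab-DC/host/Mgmt-Cluster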

Once you’re done deploying, you can power on that bad boy of a VM. BTW, the memory is all reserved, so watch out.

Next step, logging into the web interface. Just point a browser at your NSX Manager IP or (preferably) hostname, and log in with the admin credentials you just set during deployment. You’ll be presented with a beautiful Clarity-driven UI, with a dozen tiles for varying functions at the landing page.
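If you’d rather verify things from the API side before you start clicking around, a quick curl against the appliance node endpoint (/api/v1/node) will confirm the Manager is up and report its version. The hostname and password here are made up, so adjust for your own environment.

curl -k -u 'admin:SuperSecret1!' https://nsxmgr-01.lab.local/api/v1/node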

Here, we can get into all kinds of trouble, from configuring load balancers to logical switches. But we’ve got more setup to do by deploying the Central Control Plane. We’ll get to that in another segment.

Before we get into all that, however, stay tuned for the next part in this series – I’ll take you on a tour of the NSX Manager admin CLI and show off some useful tools.

~$ history
Introduction: From NSX-V to NSX-T. An Adventure

From NSX-V to NSX-T. An Adventure.

I posted a while ago that NSX-T is the future, and the future is now.

And I entirely stand by that statement. While NSX-V is currently the software-defined networking standard at VMware, its reign won’t last forever.
NSX-T is the architecture of the future. It’s the platform for both NSX Data Center and NSX Cloud, and the tooling you will use to define your networking and security capabilities and policies consistently between on-prem and off.

As it stands, today, NSX-T is really more for developer clouds. It has different capabilities than NSX-V, though the gap is shrinking dramatically with each release – NSX-T 2.2.0 can do an awful lot of cool stuff, including much of what you may be accustomed to doing now with NSX-V. The proverbial “tomorrow” is close, indeed, and tomorrow, NSX-T will take the crown as king of the VMware SDN kingdom. Fortunately, this is not going to be a coup, but rather a peaceful transition (well, maybe not if you want to migrate in-place, but that’s a whole different discussion).

What I want to do in this series is to lay out the similarities and differences of the two platforms, as they stand today (NSX-V 6.4.1 and NSX-T 2.2.0). I will not cover positively everything – just what I would consider the basics. That’s still a significant list – my outline is crazy right now. Maybe it’ll become more manageable, maybe I’m just going to spend an awful lot of time writing. Hopefully, I will provide the information you need to essentially translate your NSX-V vocabulary to NSX-T.

That’s the goal. Remember, this isn’t going to be deep technical content – just a whirlwind tour through the new platform with comparisons to what you’re already familiar with.

I won’t be getting into the API, as that’s just not my wheelhouse right now – I can spell JSON, but not much more than that. So I won’t be covering cool stuff like Dashboard customizations, but there’s plenty for me to work on without that.

Join me on my journey through the wilds of NSX. Here’s hoping that we’ll both learn something!

~$ history
Introduction: From NSX-V to NSX-T. An Adventure
NSX-T: The Manager of All Things NSX
The Hall of the Mountain King, or “What Loot do We Find in nsxcli?”
Three Controllers to Rule Them All (that just doesn’t have the same ring to it, does it?)
Beyond Centralization: The Local Control Plane
Transport Zones, Logical Switches, and Overlays! Oh, My!
Which Way Do We Go? Let’s ask the Logical Router!
If You’re Not Living on the Edge, You’re Taking Up Too Much Room
Welcome to the Edge, I’m at Your Service
Tooling and Operations

Lab Network

So I’ve had a few questions about the network in my lab, since I’m teaching almost nothing but NSX these days.  So let’s talk about it for a bit.

My network is purposefully simple.  And I’ve just rebuilt pretty much everything, so it seems like a good time to document it.

At the edge of my network is a Ubiquiti Networks EdgeRouter Lite (ERL).  It deals with all of my routing inside the network, as well as routing to the outside world.  It’s a 3 interface device – one to the outside world (cable modem), one to my default VLAN and home network, and the third interface is carved into a bunch of sub-interfaces for my VLANs in the lab.

The two internal-facing interfaces are attached to a Cisco SG300-20 that I could also use for routing, but I chose to let the router deal with that.  This is where I have several VLANs set up for my different environments, and that’s all I’ve done with the Cisco switch – no IGMP Snooping, no routing, just VLANs:

  • Local Management – this is where all of my common stuff lives – the vCenter for my physical hosts, vROps, Log Insight, etc
  • Production Management – this is where my GA-versioned vESXi hosts live, along with their relevant supporting pieces – vCenter, NSX Manager, etc
  • Production NSX Control – I set this up simply to have a dedicated network for my NSX 6.1 Controllers.  These could just as easily have gone into my Production Management VLAN
  • Production NSX Transport – this is here to simulate a dedicated VXLAN transport network.  Currently, this is superfluous, as NSX 6.0/6.1 VTEPs don’t deal well with VLAN tagging in a nested environment.  Not sure what that’s all about, <sarcasm>I must be running in an unsupported config </sarcasm>
  • Production Management Branch – this network provides a simulation of a remote site
  • Production NSX Transport Branch – again, simulation of a remote site, but much like the Production NSX Transport, this one’s completely superfluous at the moment.
  • I’ve got a matching set of VLANs for my non-GA environment, so that I can have stable and unstable environments and maintain some level of isolation.

Since my lab is completely nested, I also have VSAN and vMotion VLANs configured on my distributed switches, but they don’t map to anything in the physical network.

On the NSX side of things, well, I’m rebuilding that right now.  My thought process, since this is a lab, is to attach my outside-facing Edge VMs to the relevant Management network, depending on where I need the Edge.  This sort of flies in the face of having a dedicated Edge cluster, but hey, this is a lab 🙂  

Inside the Edge, my DLR(s?) will attach to a common Transit network, as will the inside interfaces of the Edges.  I’ll set up some OSPF areas so that the EdgeRouter Lite can advertise some networks into the Edge.  The DLR will also advertise its routes up to the Edge, which will in turn advertise back to the ERL.  This should be a pretty simple OSPF config.  I could eliminate the need for OSPF between the Edge and the ERL simply by configuring a default route, but what fun is that?
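For the curious, the ERL side of that conversation is only a handful of lines in EdgeOS. This is just a sketch, assuming the Edge uplinks land on a hypothetical 192.168.20.0/24 VLAN shared with the ERL; the Edge and DLR ends get their OSPF settings through the NSX UI rather than a CLI.

configure
set protocols ospf parameters router-id 192.168.20.1
set protocols ospf area 0 network 192.168.20.0/24
set protocols ospf redistribute connected metric-type 2
commit
save
exit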

Then my workloads will attach to whatever Logical Switches I want them attached to.  The sky’s the limit inside the SDN.

For simplicity’s sake at this point, each network segment (VLAN or VXLAN) will have its own /24, though many of them could make do with a /28 or /29 pretty easily.  But I’m not strapped for IP addresses, thanks to our friend RFC 1918, so I’m not going to make things any more complicated than I need to.

Everything works pretty well.  Sure, I run into some goofy behavior once in a while (see the VTEP VLAN tagging thing above), but this environment is entirely unsupported.  Honestly, it’s a miracle that any of this works at all, and is a galvanizing testament to what VMware software is actually capable of doing.  

Someday, maybe I’ll draw this all up.  But today is not that day.  

New Lab Server

Here I am, procrastinating on other stuff, to talk about the new lab setup.  I promised in the last video that I’d do a writeup, and I figured “no time like the present”!

So what do I have going on?  I’ve ripped out the ML370 and ASUS RS500A boxes, and replaced them with a veritable steal from the Dell Outlet.  I decided to go all-in for a nested lab, since I can’t come up with a good reason to put together a physical lab.

So I found a Precision T5610 with a pair of Xeon E5-2620v2s.  Twelve hyper-threaded cores of processor get-up-and-go.  The Scratch and Dent unit I bought had 32 GB of RAM installed, along with a 1 TB spindle.  Not a bad start.  But I had gear to work with, and needed more RAM.

So I ordered a nice 64 GB upgrade kit from Crucial (well, four 16 GB kits, technically), hoping it’d give me a 96 GB box to work with.  The pre-installed RAM and the Crucial RAM didn’t play too nicely together (Windows wouldn’t load from the spindle, nor would the ESXi installer start).  Boo.  So I pulled out the factory RAM, ran memtest86 for about a day, just to be safe, and am proceeding, for the moment, with 64 GB.

Still just a start, though, as I needed storage.  I had an Icy Dock 4-bay SATA chassis and an IBM M1015 SAS RAID controller in my little HP Microserver.  That box didn’t need those things anymore, so a transplant was necessary.  Four screws later, I had lots of room for 2.5” SATA drives.  I already had a pair of 120 GB Intel 520 SSDs in Icy Dock trays, and I pulled the two 1 TB Crucial M550 SSDs from the ASUS box.  Now I have plenty of lab storage.  If I need more, I can always take the performance hit and attach something from the DS412+ (like I’ve already done with my “ISO_images” NFS share).

**EDIT** I ran into another snag.  The LSI controller and Icy Dock combination seem to not be playing nicely with the host.  The SSDs seem to randomly drop offline periodically, which makes me sad.  It could be a power thing (these big SSDs are kind of notorious for power problems, especially in small drive chassis or NAS units), but I’m not going to push it any further.  I pulled the controller and drive cage out of the system, and I should have a SATA power splitter after UPS shows up today (I was too lazy to leave the house LOL), so I’m just going to run the SSDs off the on-board SATA controller channels, and be happy about it.

At this point, I thought I was ready for ESXi, but upon installation, I hit a snag.  The T5610 has an Intel Gigabit NIC.  As such, I expected no issues, but the 82579 isn’t recognized by ESXi 5.5 U2 for whatever reason.  No biggie – this thing has a nice tool-less case, and less than a minute later, I had a dual-port Intel NIC installed and ready to go (82571EB if you’re curious).  One more reboot and ESXi was installing.

Everything’s pretty happy at the moment.  Here’s what it looks like now that I’ve built up all my virtual ESXi hosts:

[Screenshots: the nested ESXi hosts and clusters in the vSphere Web Client, 2014-09-24]

Oh, and the “Remote_External” cluster is a set of ESXi virtual machines I have running in VMware Workstation on a Precision M4800 laptop.

I’ll follow up with some network details shortly, since that’s also gotten _way_ complex recently.  All in the name of scenario-based play with NSX. More fun later!

-jk

Deploying NSX Manager

Well, another day, another post.  

Ok, that may be exaggerating a bit, but I’m trying.  Again.  Still.

I’m rebuilding my lab (more to come on that later), and with the big push toward VMware NSX in my life right now, I thought I’d capture my fun and excitement in deploying everything.  Here’s Part 1, where I deploy the NSX Manager and register it with vCenter.  Nothing earth-shattering here, but it might help someone.

Still here. Or “Meandering thoughts about the lab that time forgot”

So, I do still exist.  I am still here, and look at this!  I even update once in a while.

My focus has shifted a little bit over the past few months.  I’m working a lot with NSX, and have been teaching the NSX: Install, Configure, Manage course for a few months now.  Like, a lot.  Q3 has been absolutely nuts for me, especially with the announcement of NSX 6.1 at VMworld.

So, what does all that mean for me, and this blog, exactly?

Well, for the blog, it means a little bit less vSphere stuff, and a little bit more networking.  

Which leads us to “What does this mean for me?”.  Well, I’ve got a stack of Cisco gear here in the office now.  That must mean Cisco certification prep.  Someday, I’ll have a break and be able to actually do that 🙂

So, what Cisco stuff is in the pipeline?  CCNA – Routing and Switching is first.  That’s the basic stuff.  CCNA Data Center is soon to follow.  I’ve been tossing around CCDA because design is just a thing I do.  Do I pursue those to the NP/ND level?  I don’t know.  My guess is no, but I never leave anything off the table.

I’ve also got the VCP-NV on my list, and the forthcoming VCIX-NV when it launches.  I’ve taken a swing at the VCP-NV – I did that while I was out at VMworld, but I just missed the passing mark.  I was caught completely off-guard by some of the stuff in the exam.  It was my own fault, however, as I spent a grand total of zero time preparing and studying for the exam.  I even talked to the cert devs about the Exam Blueprint, which I didn’t actually read until the evening _after_ I wrote the exam.

That’ll be easy to rectify, however.  I know what I need to study to bump my score up over the pass mark.  Now I just need what everyone needs – time.

I also have some new lab gear incoming.  I caught a steal from the Dell Outlet the other day.  I have a scratch & dent T5610 with dual 6-core Xeon E5s on its way, and an additional 64 GB of RAM just showed up on my front porch today.  I’ll talk through the build I’m planning as I’m working through it.  I figure with 24 threads and 96 total GB of RAM, I should be able to run many, many nested ESXi hosts, especially with the VMware Tools fling and the MacLearn dvfilter fling for nested ESXi.  My old ASUS RS500A ran pretty well with two 6-node clusters and a 4-node management cluster, each nested ESXi host with 8 GB of RAM.  And the RS500A only had 64 GB of RAM.

So I’m going to consolidate the ASUS and the old ML370 into a single, more modern (and likely more power efficient) box.  The tradeoff is that I’m giving up the iKVM and iLO capabilities of my existing hosts.  It’ll be ok.  I guess I’ll just have to look at IP KVMs now.  

In other news, I’ve moved my edge to higher-end gear.  Nope, nothing so fancy or cost-prohibitive as Cisco, but definitely powerful.  I picked up a Ubiquiti Networks EdgeRouter Lite the other day, and it’s pretty slick.  I’ve got a complementary UniFi AC on the way, and that should be waiting for me when I get home.  EdgeOS is based on Vyatta Core 6.3 (just before the Brocade acquisition), so I’m familiar, in passing, with the OS.  This thing is fast!

So this will all end up in a home network / lab series as I do more cool stuff with it.

As it stands, I gotta get back to class – we’re just about done deploying Controllers…

Auto Deploy with the vCenter Server Appliance

Auto Deploy is probably one of my favorite new features of vSphere 5.  The ability to build an ESXi image (with Image Builder), and automate the deployment of stateless hosts quickly and seamlessly just gives me a warm fuzzy.

So how do we set this up?

There are two options:

  1. Install Auto Deploy from the vCenter DVD, set up an external DHCP and TFTP server, set up your images, and go
  2. Deploy the vCenter Server Appliance (vCSA), configure the existing DHCP server, start the DHCP and TFTP servers, set up your images, and go.

I went with option number 2, since there was that much less to install.  Just configure and run!

I started by adding a NIC to the vCSA, since I didn’t want my management network also serving up DHCP.  Since everything I have at the moment in the lab is virtual, I chose to set up a deployment vSwitch just for this purpose.  In your lab or production environment, you may attach that deployment network to an existing network.

I copied the ifcfg-eth0 file in /etc/sysconfig/networking/devices/ to ifcfg-eth1 (the 2nd NIC will be eth1) and edited the new one

# cp /etc/sysconfig/networking/devices/ifcfg-eth0 /etc/sysconfig/networking/devices/ifcfg-eth1

# vi /etc/sysconfig/networking/devices/ifcfg-eth1

It should look something like this (I’m using 10.1.1.0/24 as my deployment network):

[Screenshot: ifcfg-eth1]
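In short, the file boils down to something like this. Treat it as a sketch only: the keys mirror the stock ifcfg-eth0, and 10.1.1.1 simply stands in for whatever address the vCSA gets on the deployment network.

DEVICE='eth1'
TYPE='Ethernet'
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.1.1.1'
NETMASK='255.255.255.0'
BROADCAST='10.1.1.255'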

Then I created a new symlink in /etc/sysconfig/network to the new file

# ln -s /etc/sysconfig/networking/devices/ifcfg-eth1 /etc/sysconfig/network/ifcfg-eth1

This provides a persistent configuration for the network device should you reboot your vCenter Server Appliance.
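One more note: the new interface won’t pass traffic until it’s brought up, so something along these lines is worth running (ifup is what the SLES-based appliance uses; the second command is just a sanity check):

# ifup eth1
# ip addr show eth1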

That finishes up the Deployment Network configuration.  Now we need to configure all of the other services.

I started with DHCP.  In poking around the /etc/ directory on the appliance, I found that VMware kindly provided a mostly pre-configured configuration template for the DHCP server: /etc/dhcpd.conf.template

[Screenshot: /etc/dhcpd.conf.template]

So, being kind of lazy, I simply backed up the existing dhcpd.conf file:

# cp /etc/dhcpd.conf /etc/dhcpd.conf.orig

And then copied the template into place as the config:

# cp /etc/dhcpd.conf.template /etc/dhcpd.conf

And got to editing.  My final config file looks like this:

[Screenshot: edited /etc/dhcpd.conf]
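The pieces that actually matter boil down to something like this for a 10.1.1.0/24 deployment network. It’s a sketch only: the range and addresses are placeholders, and while undionly.kpxe.vmw-hardwired is the gPXE binary that ships in the Auto Deploy TFTP bundle, double-check the filename your version hands you.

ddns-update-style none;
subnet 10.1.1.0 netmask 255.255.255.0 {
  range 10.1.1.100 10.1.1.200;
  option subnet-mask 255.255.255.0;
  # hand PXE clients back to the vCSA's deployment interface for TFTP
  next-server 10.1.1.1;
  # the gPXE bootloader that chain-loads into Auto Deploy
  filename "undionly.kpxe.vmw-hardwired";
}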

Once that’s done, you can start the DHCP server:

# /etc/init.d/dhcpd start

Then you need to start the TFTP server:

# /etc/init.d/atftpd start
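If you want both services to come back after a reboot of the appliance, it’s probably worth enabling them at boot as well (assuming the stock SLES init scripts):

# chkconfig dhcpd on
# chkconfig atftpd on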

At this point, I have an ESXi VM PXE booting and doing all the right things – SUCCESS!

I don’t have Auto Deploy configured from PowerCLI quite yet.  I’ve got a default image loaded up, but without Auto Deploy rules waiting, it’s a wash.  I’ll update when I have things set up more completely.  You probably know more about PowerShell and PowerCLI than I do, but this is what I’m getting (even right after I Connect-VIServer).  Something’s wacky with PowerCLI communications:

[Screenshot: PowerCLI error]

I’ll get it figured out, but until then, take this as a start to your Auto Deploy adventures with the vCenter Server Appliance!

***EDIT***

Well, silly me figured out the “cannot connect” problem with PowerCLI. Turns out the Auto Deploy services weren’t started on my vCenter Server Appliance. A quick jaunt to https://<vCSA address>:5480, then to the Services tab, then clicking the magic “Start ESXi Services” button resolved that one. I think the “Stopped” status for ESXi Autodeploy was what gave it away 🙂 I’m off and running again!

How do you approach your virtual networking?

I ask silly questions sometimes, but I do it for a reason. As a teacher, I try to inspire you to think. So I ask questions that may seem a little goofy, but I also try to gently guide you down a new path.

I’ve been using this for a while in my vSphere classes (everything I teach that discusses networking, at least) and thought it was worth sharing. I lead off the discussion with a simple question: do you treat an ESXi host any differently than any other physical server while planning to attach it to the network? Sure, an ESXi host likely has more interfaces to cable, but that’s not all you need to think about. A fundamental shift in thought process should occur when thinking about your vSphere hosts and your network.

If you look at the vSphere network architecture long enough, it’s clear that you’re not just connecting a host to your network. You’re actually connecting more infrastructure to your network. You’re connecting physical switches to virtual switches, not connecting hosts to physical switches. Your vmnic devices aren’t really NICs at all – they’re bridging physical Ethernet to virtual Ethernet. Once that realization is made, everything’s different.

I’ll admit, I didn’t come to this realization all on my own – a friend of mine actually introduced me to the idea. We were discussing something about a class, and he drew on the whiteboard something that could easily be described as a cabinet in the context of a physical data center, and then began to explain that it could just as easily represent an ESX host (this was a couple of years ago). And the epiphany struck.

It’s easy for us systems guys (and gals) to avoid this thought process. We were never programmed that way. But the times, they are a changin’, and we need to remember to change with them.

If you think about your networking like any old host, let me suggest, kindly, that you’re doing it wrong. Start thinking about adding a cabinet to your raised floor, and then you’ll be right on track.