One Does Not Simply Walk Into Mordor. A Migrationary Tale.

So let’s talk about the migration of my homelab from NSX-V to NSX-T. This is not a tale for the faint of heart.  Brought to you by the NSX vSwitch, and a limited number of physical NICs in my ESXi hosts.

Here’s the deal – a physical NIC (vmnic) in ESXi can be owned by exactly one vSwitch. That vSwitch can be a vSphere Standard Switch, a vSphere Distributed Switch, or an NSX vSwitch. But only by one of them at any given time.

So your host has to have at least one unused vmnic to assign to the NSX vSwitch during implementation. If you’ve got more than two attached to any given vSwitch, you’re fine, since you can move devices around without creating a single point of failure.

But what if your hosts were built with 4 pNICs, and you have 2 for infrastructure traffic (management, vMotion, vSAN, etc.) and 2 for virtual machine traffic? That’s still not so bad, as you can deploy NSX during a maintenance window, take a NIC away from VM traffic, and be prepared for the jeopardy state you’ve put yourself in.

How about a worst-case scenario – hypervisors with only a pair of pNICs installed. Maybe you have blade servers. Maybe you have rack servers with a pair of 25 Gbps or 40 Gbps NICs. That’s an easy implementation for NSX-V, since it simply leverages the vSphere Distributed Switch for its logical switches. It’s not so simple on NSX-T, since we’d have to pull a NIC to assign to the NSX vSwitch, and then what? With only two NICs, there’s no redundancy. This, in the modern data center, is sort of a party foul.

This is the situation I find myself in. Only one of my ESXi hosts has more than two available NICs. So I had to read the NSX-T docs to find out more about the ability to move VMkernel ports to Logical Switches. This feature was introduced as API-only in NSX-T 2.1; NSX-T 2.2 brings a UI element to the table.

VMkernel ports on Logical Switches? That’s crazy talk! Isn’t that going to create some kind of horrible circular dependency? Well, yes, it would if those Logical Switches were overlay switches backed by Geneve.

But you have to remember that NSX-T can support both Geneve-backed Logical Switches and VLAN-backed Logical Switches. Think about it like this: on any other vSwitch, Port Groups are created, and one of the port group policies available is VLAN Tagging. A VLAN-backed Logical Switch is really just a Port Group on the NSX vSwitch with the VLAN Tagging policy set. It’s no more complicated than that.
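For the curious, creating one of those VLAN-backed Logical Switches through the NSX-T REST API looks roughly like the Python sketch below. The manager address, credentials, transport zone UUID, and VLAN ID are all placeholders I made up, so adjust for your environment.

# Sketch: create a VLAN-backed Logical Switch via the NSX-T 2.x REST API.
# Manager address, credentials, transport zone UUID, and VLAN ID are placeholders.
import requests

NSX_MANAGER = "https://nsxmgr.lab.local"              # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")                  # lab credentials, not for production
VLAN_TZ_ID = "11111111-2222-3333-4444-555555555555"   # UUID of a VLAN transport zone

payload = {
    "display_name": "ls-mgmt-vlan10",
    "transport_zone_id": VLAN_TZ_ID,
    "admin_state": "UP",
    "vlan": 10,            # the VLAN tag, just like a port group VLAN policy
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/logical-switches",
                     json=payload, auth=AUTH, verify=False)   # self-signed lab certs
resp.raise_for_status()
print("Created logical switch:", resp.json()["id"])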

So how did I accomplish this crazy migration? Being my home lab, I had some flexibility in shutting down all of my VMs, so I powered everything down. I had to think about how to do that well, though, as my nested environments use storage from a DellEMC Unity VSA, and my DNS server is a virtual machine. As is vCenter Server, and NSX Manager for the physical infrastructure.  I temporarily dumped all of the NICs for the VMs on some distributed port groups to get everything off the logical switches.  After juggling DNS, vCenter, and NSX Manager around a bit, I unprepared my management cluster, then unprepared the compute cluster, and finally the old T5610 that I’ve kept around for some extra resources.  I then followed these instructions from vswitchzero.com to make sure everything was gone, including the vSphere Web Client plugin.  There may still be some vSphere Client plugin baggage hanging around, but I didn’t worry too much about that.

I was about to cross the point of no return.  I took a few minutes to breathe, and to review Migrating Network Interfaces from a VSS Switch to an N-VDS Switch in the NSX-T documentation, only to realize that I needed the titular vSphere Standard Switch, and not the vSphere Distributed Switch I had my entire environment running on.  So I jumped in, and built out a vSphere Standard Switch infrastructure to support my lab environment.  I’d like to say that I was awesome and whipped out a quick PowerShell or Python script to knock that out for me, but I’ll admit that I did it the old-fashioned, brute-force way of clicking my way, host by host, through the vSphere Client.  Good thing I only have 5 hosts and 4 networks that live outside of NSX.  Once everything was migrated to a single standard switch (with a single uplink attached), I tore down the vDS in my lab, deleted my old NSX Manager, and started deploying NSX-T 2.2.
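For the record, the script I didn’t write would have looked something like this pyVmomi sketch. The vCenter address, credentials, uplink vmnic, and port group names are all assumptions; swap in your own before trusting it.

# Sketch of the script I didn't write: build a Standard Switch and port groups
# on every host with pyVmomi. Host inventory, the uplink vmnic, and the port
# group list are assumptions for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()    # lab certs again
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

portgroups = [("Management", 10), ("vMotion", 20), ("vSAN", 30), ("VM-Network", 40)]

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    netsys = host.configManager.networkSystem
    # vSwitch1 with a single free uplink, mirroring what I did by hand
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"]))
    netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)
    for name, vlan in portgroups:
        pg_spec = vim.host.PortGroup.Specification(
            name=name, vlanId=vlan, vswitchName="vSwitch1",
            policy=vim.host.NetworkPolicy())
        netsys.AddPortGroup(portgrp=pg_spec)
view.Destroy()
Disconnect(si)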

I’d like to say the story is over there, but it’s not.  I prepared all of my hosts as Transport Nodes (can you see where this is going yet?), then migrated all of my management VMkernel ports over to a VLAN Logical Switch.  No sweat, until I went ahead and migrated the remaining uplinks to the N-VDS.  That may seem innocent enough, but I hadn’t moved my NSX Manager over to the N-VDS.  So there I am, with NSX Manager hanging off what is now an internal virtual switch, just chilling out with no one but the Controllers to talk to.  Realizing my conundrum, I tried to move the virtual machine over to the logical switch hosting my management network, only to realize that it doesn’t show up in the list of networks to which I can connect the vNIC.  So I scrambled to move the NSX Manager to a host outside my management cluster (good thing I attacked that cluster first, I suppose), and proceeded to reverse the migration process.  At this point, the two hosts in my management cluster are happily chugging along with their two uplinks attached to a vSphere Standard Switch.

Which is actually better, because my management cluster also hosts my edges, and those need to connect to both my transport network and a VLAN network, both of which are configured on my vSwitch.

Looking back at the trouble I’d gotten myself into, it makes a ton of sense to not allow NSX Manager on an N-VDS, because who wants to be in circular dependency hell?  Some things you just have to learn the hard way.

With all that said, everything else has gone smoothly.  Mostly.  I still need to do some digging to figure out why MAC Learning isn’t doing me any good for my nested ESXi hosts on my logical switches. I’m sure I’m missing a quick API-only configuration switch or something. But routing works in and out of the nested environment (I can at least reach my nested vCenter and the other virtual machines that aren’t ESXi hosts).

So, smooth migration?  Not exactly.

Lessons learned? Lots.  For example, planning is crucial.  I know this.  Hell, I preach this in my classes.  But, as my RN wife keeps telling me about her time as an ER nurse, “bad things only happen to other people”.  I still tell her to be careful, because they do happen to other people, until you _are_ other people.  In my lab, I was “other people”.  Fortunately, it’s a lab.  The most I would have been out is about 2 days of manually rebuilding stuff from vCenter up if things had gone any more sideways.  But what if this had been a production environment?  That would have been way too big a risk to take without writing up a migration plan and a test plan, and testing in a lab first.  I just jumped in, because I’m an instructor, and I get to live in my technical ivory tower devoid of maintenance windows and serious consequences.

Also learned – leave your management cluster alone.  All too often, we forget that management is actually a pretty important workload category.

All in all, it’s done.  I’m glad I did it, if nothing else for the experience of doing it.  And I got to tear down an entire nested NSX-T environment, so I freed up a ton of resources that I don’t have (my hosts live in a perpetual state of red memory alerts).  NSX-T is a big workload, even in a small lab environment.

And, my favorite part of the whole migration?  I can now manage my entire lab from my Linux box.  No more Flash for me. Well, mostly.  The vSphere Client in 6.7 does enough that I don’t have to fire up the Flash-based Web Client on the Mac very often at all.  I haven’t in at least 3 weeks.  And I’m ok with that. Maybe when the next version of vSphere drops, I’ll be able to uninstall Flash Player from my Mac.

Tooling and Operations

NSX-T comes with some pretty nice tooling to support the abstractions of software-defined networking.

First up is the Port Connection tool. I have to ask – where was this in NSX-V? This is a spectacular tool that lets you choose a source and destination virtual machine or logical port, and get a topology diagram. What transport node (TN) is the source on? How about the destination? What is the network layout between point A and point B? Even better? Everything is hyperlinked, so you can click on a logical port in the diagram and get the name, UUID, and status of the thing. Oh, did I mention that it also shows the connection state between TEPs on different transport nodes? Yeah, it’s pretty cool.

Then there’s the time-tested Traceflow. Excellent, as always, but better than ever. In NSX-V, for example, if you define a source VM and want to trace to an external endpoint, you just can’t. NSX-T will allow it, though naturally it can only trace as far as the Edge uplink. If you’re tracing from one VM to another inside the NSX domain, you get the trace results, as well as a Port Connection diagram (for unicast flows). This is the first place I go when someone tells me that two VMs can’t communicate with one another.
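If the UI isn’t handy, a Traceflow can also be kicked off through the API. Something roughly like the sketch below; the logical port UUID, MACs, and IPs are placeholders, and the payload shape is from memory, so check the API guide before leaning on it.

# Sketch: start a Traceflow via the API, then pull the observations.
# Source logical port UUID, MACs, and IPs are placeholders; payload shape is approximate.
import time
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

payload = {
    "lport_id": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",   # source VM's logical port
    "packet": {
        "resource_type": "FieldsPacketData",
        "eth_header": {"src_mac": "00:50:56:aa:bb:01", "dst_mac": "00:50:56:aa:bb:02"},
        "ip_header": {"src_ip": "172.16.10.11", "dst_ip": "172.16.20.11"},
        "transport_type": "UNICAST",
    },
}

resp = requests.post(f"{NSX}/api/v1/traceflows", json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
tf_id = resp.json()["id"]

time.sleep(5)   # give the trace a few seconds to finish
obs = requests.get(f"{NSX}/api/v1/traceflows/{tf_id}/observations",
                   auth=AUTH, verify=False).json()
for o in obs.get("results", []):
    print(o.get("resource_type"), o.get("transport_node_name"), o.get("component_name"))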

Port mirroring is available, and you can create Local, Remote, Remote L3 and Logical SPAN sessions.

IPFIX is still around, and allows for multiple IPFIX collectors, and different Switch or Firewall profiles to forward data out to your preferred collector. The cool thing about this is that you could, for example, define different Observation Domain IDs for different tenants to keep everyone’s flow data separate.

Like I said at the beginning of this series, we’re not looking at anything comprehensive here, nor was this intended to be a deep technical look at NSX-T. Shameless plug – you should come to one of our NSX-T: Install, Configure, Manage classes for that.

~$ history
Introduction: From NSX-V to NSX-T. An Adventure
NSX-T: The Manager of All Things NSX
The Hall of the Mountain King. or “What Loot do We Find in nsxcli?”
Three Controllers to Rule Them All (that just doesn’t have the same ring to it, does it?)
Beyond Centralization: The Local Control Plane
Transport Zones, Logical Switches, and Overlays! Oh, My!
Which Way Do We Go? Let’s ask the Logical Router!
If You’re Not Living on the Edge, You’re Taking Up Too Much Room
Welcome to the Edge, I’m at Your Service

Welcome to the Edge, I’m at Your Service

I’ve mentioned, more than once by now, that the Edge is where the services routers exist. They do their thing, whether tied into a Tier-0 or Tier-1 router.

So what services do we have? I mentioned them in the last post, so let’s run through them again pretty quickly:

NAT
How much can we actually do with NAT? I mean, Source NAT and Destination NAT are pretty straightforward functions, I think. Both have been supported in NSX for, well, quite a while now.
In NSX-T, stateful NAT is supported if the Edge is configured for Active-Standby HA.

We also support reflexive, or stateless NAT. This is useful when I’m running the Edge in Active-Active HA.
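For reference, creating a NAT rule through the API looks roughly like this Python sketch. The router UUID and networks are placeholders.

# Sketch: add a Source NAT rule to a logical router via the NSX-T 2.x API.
# Router UUID and networks are placeholders.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!VMware1!")
ROUTER_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # Tier-0 or Tier-1 router UUID

snat_rule = {
    "action": "SNAT",                        # "REFLEXIVE" for an Active-Active Edge
    "match_source_network": "172.16.10.0/24",
    "translated_network": "192.168.100.10",
    "enabled": True,
}

resp = requests.post(f"{NSX}/api/v1/logical-routers/{ROUTER_ID}/nat/rules",
                     json=snat_rule, auth=AUTH, verify=False)
resp.raise_for_status()
print("NAT rule id:", resp.json()["id"])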

Firewall
The Edge also has a firewall, much like we’re used to in NSX-V. This is a pretty straightforward L3 firewall, and is used for the same things we use the ESG firewall for in NSX-V – controlling north/south traffic at the software-defined network perimeter.
But we get to add a new twist – the Edge firewall is available on both Tier 0 and Tier 1 routers, so tenants can have a little more control over what they allow into the tenant perimeter.

VPN Services
We also have L2 and L3 VPNs available in the Edge, but they’re completely built and managed through the APIs at the moment. Sorry folks, no real VPN love in this series, just an honorable mention. No API-fu with this software-defined joker yet.

Load Balancer
Just remember, if you’re using a VM form factor Edge, it has to be Medium or Large to configure Load Balancing. Learn from my fail, where I deployed a Small Edge, and wondered for an hour or so why I couldn’t attach the LB to the Edge.

Ultimately, we recommend Large if you’re going to use Load Balancing. Medium supports the LB function, but really only for proof of concept use cases.

The Load Balancer in NSX-T is still ultimately a function of the Edge. No real news here. The configuration is pretty straightforward, where a Load Balancer instance is created, at least one Virtual Server is attached to the Load Balancer, and the LB is attached to a Tier 1 Logical Router.
Server Pools need to be created – the Load Balancing Algorithm is defined here, with Round Robin and Least Connections settings (along with weighted versions for those), as well as a simple IP Hash algorithm. You decide whether you’re going to do TCP multiplexing.

Then you decide whether you’re doing a Transparent load balancer, or statically or dynamically mapping the NAT. Transparent load balancing is only supported if the LB is inline with the servers in the pool.
Then you statically or dynamically choose pool members. Dynamic pools use NSGroups, where static is just that – predefined and static.

Finally, you decide what health monitors you want for pool members – these can be active or passive (or both, if you really want). NSX-T ships with default Active Health Monitors for HTTP, HTTPS, ICMP, and TCP. You can create more if you want, and decide how you want them to check for health.

When you create Virtual Servers, you get to decide whether you’re load balancing at L4 or L7, TCP or UDP, what Application Profile you’re going to use, its IP address, associated Server Pools, and what kind of persistence you want.

Then you just toss all that together with the LB itself, and off you go! The biggest differences here (that I see) are that the objects we need to create for a load balancer are somewhat consolidated, and that we need to make sure our Edges are large enough to support the Load Balancer. And Application Rules are gone, replaced by other rewrite and redirection policies called LB Rules, managed in the Virtual Server.
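If you’d rather build that object chain through the API, it goes something like the sketch below. All UUIDs and IPs are placeholders, and the field names reflect my reading of the NSX-T 2.x load balancer API, so verify against the API guide before relying on it.

# Sketch: pool -> virtual server -> LB service attached to a Tier-1 router.
# UUIDs, IPs, and names are placeholders.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

def post(path, body):
    r = requests.post(f"{NSX}{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

# 1. Server pool: algorithm and members (an active monitor could be added here too)
pool = post("/api/v1/loadbalancer/pools", {
    "display_name": "web-pool",
    "algorithm": "ROUND_ROBIN",
    "members": [
        {"ip_address": "172.16.10.11", "port": "80", "display_name": "web01"},
        {"ip_address": "172.16.10.12", "port": "80", "display_name": "web02"},
    ],
})

# 2. Virtual server: VIP, port, application profile, and the pool behind it
vs = post("/api/v1/loadbalancer/virtual-servers", {
    "display_name": "web-vip",
    "ip_address": "192.168.100.80",
    "port": "80",
    "ip_protocol": "TCP",
    "application_profile_id": "<fast-tcp-profile-uuid>",   # placeholder
    "pool_id": pool["id"],
})

# 3. LB service: ties the virtual server to a Tier-1 logical router
post("/api/v1/loadbalancer/services", {
    "display_name": "tenant-a-lb",
    "size": "SMALL",
    "enabled": True,
    "virtual_server_ids": [vs["id"]],
    "attachment": {"target_type": "LogicalRouter",
                   "target_id": "<tier1-router-uuid>"},     # placeholder
})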

L2 Bridging
We can’t forget the venerable bridging capabilities. They haven’t been mentioned anywhere else yet. In NSX-T 2.1, we had to define a Bridge Cluster and populate it with ESXi Transport Nodes to get L2 bridging. That’s still around, but in NSX-T 2.2, a far better option is available (relatively speaking, of course – it’s 2018, bridging should be a niche use case these days, but it’s here if you need it).

Today, we can create Bridge Profiles and attach them to the Edge, which does a number of things for us:
* we can use Edges rather than ESXi hosts for bridging – that sounds less expensive already!
* Edges use DPDK for forwarding – so now bridging is (potentially) faster
* Edges have a firewall, which means we can limit traffic going to or from the bridged network

I think we have a winner here, for those rare occasions that L2 adjacency is necessary, especially for things like migrations from physical networks to logical.

So, there we have it, a quick rundown of things that need Services Routers on the Edge.

 

~$ history
Introduction: From NSX-V to NSX-T. An Adventure
NSX-T: The Manager of All Things NSX
The Hall of the Mountain King. or “What Loot do We Find in nsxcli?”
Three Controllers to Rule Them All (that just doesn’t have the same ring to it, does it?)
Beyond Centralization: The Local Control Plane
Transport Zones, Logical Switches, and Overlays! Oh, My!
Which Way Do We Go? Let’s ask the Logical Router!
If You’re Not Living on the Edge, You’re Taking Up Too Much Room

If You’re Not Living on the Edge, You’re Taking Up Too Much Room

We talked about the Edge last time.  But we really didn’t get into the differences that we might expect.  I just said “we’ll get to that later”.  I suppose it’s later by now.

The Edge in NSX-T is little more than a container.  No, not “container” in a Docker kind of sense, but in a “pool of resources for network services” kind of sense.

See, the Edge is still a virtual machine, except when it isn’t.  In NSX-T, we have our choice of form factors – 3 sizes of virtual machine, and then bare metal.  Need a really, really big Edge?  Get out that server, we’re making a big network device!

So what are the form factors for an Edge these days?

  • Small – 2 vCPU, 4 GB RAM, 120 GB disk
  • Medium – 4 vCPU, 8 GB RAM, 120 GB disk
  • Large – 8 vCPU, 16 GB RAM, 120 GB disk
  • Bare Metal – 8 CPU, 32 GB RAM, 200 GB disk (minimums, naturally)

The virtual machine Edges come with 4 vmxnet3 vNICs installed.  Bare metal, well, we’ve got to watch out.  Chances are, your Intel X520, X540, X550, or X710s will work.  Check the docs for specifics – it’s all spelled out in the System Requirements section of the Installation Guide.

If you give a mouse an Edge, he’ll probably realize that it’s a single point of failure, and he’ll ask for another.

We can’t use Edges right after they’ve been deployed.  They’re really just Fabric Nodes at that point, joined to the Management Plane and just taking up resources.  They need to be promoted to Transport Nodes before they’ll be useful.

Because, while an Edge will still act as the North/South perimeter of our Software-Defined Network, it’s more than that.  An Edge is actually an active participant in the network – NSX host switches and TEPs will be installed on the Edge as part of this process.  And it runs distributed routing processes so that traffic coming through the Edge destined to a workload on a logical switch has an efficient routing path to get there. But TEPs and distributed routers are not all that are deployed to the Edge.

If you recall from the logical routing post, the services router (SR) was mentioned.  The SR always lives on an Edge, whether it belongs to a Tier 0 or a Tier 1 router.  You can certainly influence _which_ Edge an SR is running on, but it’ll always be on one.

Let’s recap the kinds of things that might be deployed as an SR:

  • NAT
  • BGP (Tier 0 only)
  • Firewall
  • Load Balancer

That’s an awful lot of stuff in just 4 bullet points.  We’ll get into the specifics of those later.  These services are generally stateful, and generally need to be centralized.  So we did.

Because some things are centralized, there should probably be some measures taken for high availability (thus my nod to Laura Numeroff and Felicia Bond a couple of paragraphs ago, in case you missed the reference and thought I was simply going mad).  For HA of SRs, we need an Edge Cluster.

An Edge Cluster is simply a grouping of 1 or more identical form factor Edge transport nodes (maximum of 8) put together as a larger logical construct.  A container for your network services containers, if you will.  It’s pretty straightforward to create an Edge Cluster, simply add one in the Edge Clusters tab of Fabric > Nodes, add the Edge Transport Nodes you want participating, define the cluster profile, and off you go.

There’s not much to an Edge Cluster, really.  And not much to the cluster profile, either.  The profile simply defines the BFD probe interval, how many hops are allowed, and how many probes have to be lost before we declare an Edge officially dead.
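Here’s roughly what that looks like through the API, if clicking isn’t your thing. Transport node UUIDs are placeholders, and the field names are from memory, so double-check the API guide.

# Sketch: a BFD cluster profile, then an Edge Cluster that uses it.
# Transport node UUIDs are placeholders.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

profile = requests.post(f"{NSX}/api/v1/cluster-profiles", auth=AUTH, verify=False, json={
    "resource_type": "EdgeHighAvailabilityProfile",
    "display_name": "edge-ha-profile",
    "bfd_probe_interval": 1000,        # milliseconds between BFD probes
    "bfd_allowed_hops": 255,
    "bfd_declare_dead_multiple": 3,    # missed probes before an Edge is declared dead
})
profile.raise_for_status()

cluster = requests.post(f"{NSX}/api/v1/edge-clusters", auth=AUTH, verify=False, json={
    "display_name": "edge-cluster-01",
    "members": [
        {"transport_node_id": "<edge-tn-1-uuid>"},   # placeholder UUIDs
        {"transport_node_id": "<edge-tn-2-uuid>"},
    ],
    "cluster_profile_bindings": [
        {"resource_type": "EdgeHighAvailabilityProfile",
         "profile_id": profile.json()["id"]},
    ],
})
cluster.raise_for_status()
print("Edge cluster id:", cluster.json()["id"])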

Most services operate statefully, and support only Active/Standby failover.  

We can configure routers as Active/Active or Active/Standby, and that will define what you can do with that router.  It’s worth noting that, if you configure a router as Active/Active, it is essentially a stateless device.  NAT is only available as Reflexive (or Stateless) NAT. The Edge Firewall is also stateless.

In Active/Standby, however, you can do stateful NAT and firewalling.  You can attach a Load Balancer.  But you have to choose your failover mode. If you ask me, you’re really choosing your fallback mode, but that’s not how it’s called out in the UI.  You have two failover mode choices:  Preemptive and Non-Preemptive.  What does that even mean?

It’s actually pretty simple.  In preemptive mode, you define a preferred member of your Edge cluster. This is the Edge we want to use.  If it fails, we’ve got others, so we’re not out for long.  What preemptive means, however, is that when our preferred Edge returns to service, NSX will preempt the service to move it back to the preferred node.  An automatic fallback, if you will.

In a non-preemptive configuration, we do not define a preferred node, so NSX makes no move to fail a service back to a preferred location.  No automatic fallback of the service.
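For completeness, here’s roughly where those knobs live when a logical router is created through the API. The Edge cluster UUID is a placeholder, and the field names are from memory, so verify before you trust it.

# Sketch: a Tier-1 router configured Active/Standby with preemptive failover.
# Edge cluster UUID is a placeholder.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

tier1 = {
    "display_name": "tenant-a-t1",
    "router_type": "TIER1",
    "high_availability_mode": "ACTIVE_STANDBY",   # needed for stateful services
    "failover_mode": "PREEMPTIVE",                # or "NON_PREEMPTIVE"
    "edge_cluster_id": "<edge-cluster-uuid>",     # placeholder
}

resp = requests.post(f"{NSX}/api/v1/logical-routers", json=tier1,
                     auth=AUTH, verify=False)
resp.raise_for_status()
print("Tier-1 router id:", resp.json()["id"])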

Is there more to talk about? Of course there is.  We’ve still got services to talk about, and tooling, and all kinds of good stuff.  Stay tuned!

 

~$ history
Introduction: From NSX-V to NSX-T. An Adventure
NSX-T: The Manager of All Things NSX
The Hall of the Mountain King. or “What Loot do We Find in nsxcli?”
Three Controllers to Rule Them All (that just doesn’t have the same ring to it, does it?)
Beyond Centralization: The Local Control Plane
Transport Zones, Logical Switches, and Overlays! Oh, My!
Which Way Do We Go? Let’s ask the Logical Router!

Which Way Do We Go? Let’s ask the Logical Router!

Moving around the data plane, we need to think about how, exactly, we’re going to get packets off of one logical switch and onto another.  Or out to the physical infrastructure, even.  

Logical routing in NSX-T is so very similar to what we have in NSX-V, but it’s entirely new at the same time.  

We still have the concept of a distributed router, embedded in the kernel to make routing decisions.  Pretty similar so far.  

Logical routers have interfaces, called Downlinks,  connected to the logical switches to enable routing between L2 domains. Still pretty much the same as we’re used to.

Logical routers no longer have a DLR Control VM. Well, there’s a bit of a departure from NSX-V.

Let’s reel things back into similarities.  We can still maintain two tiers of routing to keep tenants separated.  But we don’t refer to the tiers as the Distributed Logical Router and Edge Services Gateway any longer.  Now, it’s Tier-1 and Tier-0 routing, respectively.  This is where we have to start unlearning NSX-V things and relearning NSX-T things.

The NSX-T routers are all distributed, meaning the Tier-0 and Tier-1 routers are programmed on all the transport nodes throughout the transport zone.

So think about this logical topology:

[Figure: Sample 2-Tier Routing Topology]

Everything in the diagram, save for the physical router, is distributed across the transport zone.  

Except when they’re not.  Without diving too deep here, each of these routers – Tier 0, Tenant A Tier 1, and Tenant B Tier 1 – is actually composed of two objects: the distributed router (DR) and the services router (SR).

So let’s talk about these for a minute.  The distributed router component is the part that lives on each transport node.  This is the part that makes the routing decisions.  If I had two workloads: tenantA-Web and tenantB-Web, and those workloads were instantiated on the same hypervisor, traffic between them would not have to leave the host.  If tenantA-Web sent a ping to tenantB-Web, the traffic would go VM -> Tenant A Tier 1 -> Tier 0 -> Tenant B Tier 1 -> VM.

Perhaps breaking the environment up into tenants was a poor choice, as I likely wouldn’t allow traffic between them like that.  But that’s where other capabilities come in – the distributed firewall, for example.  We’ll talk about more of that kind of stuff as we continue through this series.

Anyway, we have our DR, but what’s this services router thing?  Simply put, it’s the component that we use for centralized or non-distributed services, such as dynamic routing, NAT, Firewalling, Load Balancing, L2 Bridging, etc.  

I know, you’re wondering, at this point, just what we’re thinking with a whole new routing structure when it sounds a lot like we’ve taken the model from NSX-V, and just distributed the ESG routing. Which is sort of true.  But it’s not.  Because the Tier 1 routers can run services just as well as the Tier 0 routers, which means everyone gets a services router component, assuming you’ve turned up one of these services.

That’s right, I can do NAT at Tier 0 or Tier 1, I can firewall at either tier, and on and on and on.  

But we’re not getting into the specifics of all of that yet.  

So where do these services routers live?  Since they’re centralized, we need someplace to put them.  How about on the Edge? We still have Edges in NSX-T, though they’re definitely no longer “Edge Services Gateways” – just Edges.  We’ll talk more about the specifics of them in the next installment.  For right now, just remember that all of the stateful or centralized services will live on an Edge node.

Another thing of note is that I don’t need to deploy a two-tier routing topology.  I can attach a Tier 0 logical router directly to a logical switch.  If you don’t need a complex topology, you don’t have to build one.  Just cut Tier 1 completely out of the picture, and you’ll be fine.
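If you’re curious what that direct attachment looks like via the API, it’s essentially a logical port on the switch plus a downlink port on the Tier 0 pointing at it. The sketch below uses placeholder UUIDs and addressing, with field names from memory.

# Sketch: attach a Tier-0 router straight to a logical switch.
# Switch and router UUIDs, and the subnet, are placeholders.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

# 1. A logical port on the existing logical switch
lport = requests.post(f"{NSX}/api/v1/logical-ports", auth=AUTH, verify=False, json={
    "display_name": "web-ls-to-t0",
    "logical_switch_id": "<logical-switch-uuid>",   # placeholder
    "admin_state": "UP",
})
lport.raise_for_status()

# 2. A downlink port on the Tier-0, linked to that logical port
dlp = requests.post(f"{NSX}/api/v1/logical-router-ports", auth=AUTH, verify=False, json={
    "resource_type": "LogicalRouterDownLinkPort",
    "display_name": "t0-downlink-web",
    "logical_router_id": "<tier0-router-uuid>",     # placeholder
    "linked_logical_switch_port_id": {"target_type": "LogicalPort",
                                      "target_id": lport.json()["id"]},
    "subnets": [{"ip_addresses": ["172.16.10.1"], "prefix_length": 24}],
})
dlp.raise_for_status()
print("Downlink port id:", dlp.json()["id"])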

This is a lot of information.  It may not seem like it, but it really is.  We’ll dive into the Edge next in an effort to complete the picture here.  

 

~$ history
Introduction: From NSX-V to NSX-T. An Adventure
NSX-T: The Manager of All Things NSX
The Hall of the Mountain King. or “What Loot do We Find in nsxcli?”
Three Controllers to Rule Them All (that just doesn’t have the same ring to it, does it?)
Beyond Centralization: The Local Control Plane
Transport Zones, Logical Switches, and Overlays! Oh, My!

Transport Zones, Logical Switches, and Overlays! Oh, My!

Logical Switching

Everyone loves Logical Switching, right?! The ability to spin up a Layer 2 network whenever you need one is pretty darned cool. Arguably, VXLAN is groovy, too, taking an L2 frame from a VM and wrapping it up in VXLAN goodness to shoot over the underlay network. So VMware left that alone, right?

Wrong. VXLAN is yesterday’s overlay protocol. The future is here with GENEVE.

GENEVE, you say? What in the world is GENEVE? GENEVE stands for GEneric NEtwork Virtualization Encapsulation, and is still being standardized. Wanna know more? Check it out here.

Let’s summarize the draft, shall we? Every other network virtualization overlay out there (VXLAN, NVGRE, etc.) is fixed. They all do one thing. Not to disparage any of the others – they do what they do very well. But the data center is not a fixed thing. It needs flexibility. It changes, it grows, it has to support new use cases. And that’s why GENEVE was born. See, GENEVE supports all the same stuff as everything else for network virtualization – taking that important L2 frame and wrapping it up in a new set of headers to send across the underlay “backplane” to the destination TEP (Tunnel End Point) so that delivery to the destination VM can occur. What makes GENEVE powerful, though, is extensibility through a proposed set of TLV options that can be set. Did I mention that the Options field in the GENEVE header is variable-length? So all manner of things could possibly be done.

From a day-to-day perspective, the difference between GENEVE and VXLAN is negligible. GENEVE uses UDP/6081, where we’re used to VXLAN using UDP/4789. The Tunnel Endpoints are referred to as “TEPs” or “Tunnel Endpoints” rather than “VTEPs” or “VXLAN Tunnel Endpoints”. Wireshark, as a packet analyzer example, already recognizes and understands GENEVE. So you just keep doing things the way you’ve been doing them. Except that now your logical networks can span both ESXi and KVM hypervisors. See, we’re growing.

In addition to the change in encapsulation protocol, NSX-T no longer has a dependency on the vSphere Distributed Switch. This does mean a couple of things, though.

First, we no longer have to worry about those crazy long vxw-dvs-83-virtualwire-7-sid-10007-transitNetwork kinda port group names. Logical Networks show up as simply the network name, though they do have a pretty new icon signifying that they’re opaque objects to vCenter (meaning that vCenter really can’t do anything with them, but it knows they exist).

Second, KVM virtual machine attachment to a logical switch isn’t quite as easy as in vSphere. For those, we have to identify the UUID of the virtual NIC (hint: look at the VM’s XML file – “virsh dumpxml <VM domain>”, and look for “interfaceId”), and then create the Logical Port by attaching the VIF (Virtual Interface) in NSX Manager > Switching > Ports > Add, where we specify the VIF UUID we discovered. Not difficult, just a little tedious, and probably worth scripting if you’re adding a bunch of KVM virtual machines to a Logical Switch.
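Here’s roughly what that script might look like, for the curious. The logical switch UUID and the KVM domain names are placeholders.

# Sketch: read each KVM domain's interfaceid with virsh, then create a logical
# port with a VIF attachment. Switch UUID and domain names are placeholders.
import subprocess
import xml.etree.ElementTree as ET
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!VMware1!")
LOGICAL_SWITCH_ID = "<overlay-ls-uuid>"     # placeholder

def vif_ids(domain):
    """Return the interfaceid of every vNIC on a KVM domain via virsh dumpxml."""
    xml = subprocess.check_output(["virsh", "dumpxml", domain]).decode()
    root = ET.fromstring(xml)
    # the interfaceid lives on <interface><virtualport><parameters interfaceid="..."/>
    return [p.get("interfaceid")
            for p in root.findall(".//interface/virtualport/parameters")
            if p.get("interfaceid")]

for vm in ["kvm-web01", "kvm-web02"]:       # hypothetical domain names
    for vif in vif_ids(vm):
        r = requests.post(f"{NSX}/api/v1/logical-ports", auth=AUTH, verify=False, json={
            "display_name": f"{vm}-vif",
            "logical_switch_id": LOGICAL_SWITCH_ID,
            "admin_state": "UP",
            "attachment": {"attachment_type": "VIF", "id": vif},
        })
        r.raise_for_status()
        print(vm, "->", r.json()["id"])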

Finally, don’t chew up all of your physical uplinks – NSX needs a couple of them for the NSX vSwitch that’s installed when you promote the host to a Transport Node.  I have a story I’ll tell about that later in this series.

If you recall from NSX-V, a logical switch exists within the scope of a transport zone.  NSX-T has transport zones as well, and they do precisely the same thing – define a compute scope for our logical networks.  But they’re not exactly the same all around.

In NSX-T we have two different types of transport zone.  One for overlay networks, and another for VLAN-backed networks. In other words, I have the option to build VLAN-backed logical switches.  That sounds a bit crazy, if you ask me.  Why in the world would we want to do that?! Well, the big reason is northbound connectivity to physical routers from the Edge.  I know, we haven’t talked about the Edge yet (that’s coming really soon, now that we’re talking about things in the data plane), so I’ll keep this brief.  The Edge will be configured with one or more VLAN transport zones to connect to upstream VLANs.  I’ll use this analogy again, but it’s essentially like creating VLAN-backed port groups on an NSX vSwitch.

So there you have it, a quick summary of what’s different in Logical Switching.