Welcome to the Edge, I’m at Your Service

I’ve mentioned, more than once by now, that the Edge is where the services routers exist. They do their thing, whether tied into a Tier-0 or Tier-1 router.

So what services do we have? I mentioned them in the last post, so let’s run through them again pretty quickly:

NAT
How much can we actually do with NAT? I mean, Source NAT and Destination NAT are pretty straightforward functions, I think. Both have been supported in NSX for, well, quite a while now.
In NSX-T, stateful NAT is supported if the Edge is configured for Active-Standby HA.

We also support reflexive, or stateless NAT. This is useful when I’m running the Edge in Active-Active HA.
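To put that in more concrete terms, here’s a minimal sketch of what adding a source NAT rule to a Tier-1 router could look like against the NSX-T Manager REST API. The endpoint path and field names are my best recollection of the 2.x API, and the manager address, credentials, router UUID, and networks are all placeholders, so check the API guide before you trust any of it.

```python
# Hypothetical sketch: add a SNAT rule to a Tier-1 router via the NSX-T Manager API.
# Endpoint and field names are approximations of the 2.x API -- verify against the API guide.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"          # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials
TIER1_ROUTER_ID = "11111111-2222-3333-4444-555555555555"  # placeholder router UUID

snat_rule = {
    "action": "SNAT",
    "match_source_network": "172.16.10.0/24",   # internal segment behind the Tier-1
    "translated_network": "192.168.100.10",     # outward-facing address
    "enabled": True,
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/logical-routers/{TIER1_ROUTER_ID}/nat/rules",
    json=snat_rule,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
print("Created NAT rule:", resp.json().get("id"))
```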

Firewall
The Edge also has a firewall, much like we’re used to in NSX-V. This is a pretty straightforward L3 firewall, and is used for the same things we use the ESG firewall for in NSX-V – controlling north/south traffic at the software-defined network perimeter.
But we get to add a new twist – the Edge firewall is available on both Tier 0 and Tier 1 routers, so tenants can have a little more control over what they allow into the tenant perimeter.
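For the API-curious, here’s a rough sketch of what scoping a firewall section to a Tier-1 router might look like. I’m assuming here that the same /api/v1/firewall/sections endpoint used for distributed firewall sections takes an applied_tos target of a LogicalRouter for the Edge firewall case, so treat the payload as an illustration to verify, not a recipe.

```python
# Rough sketch (assumption): an edge firewall section applied to a Tier-1 router.
# The endpoint and "applied_tos" usage mirror the DFW section API -- verify before use.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")        # placeholder
TIER1_ROUTER_ID = "11111111-2222-3333-4444-555555555555"  # placeholder router UUID

section = {
    "display_name": "tenant-a-perimeter",
    "section_type": "LAYER3",
    "stateful": True,
    "applied_tos": [
        {"target_id": TIER1_ROUTER_ID, "target_type": "LogicalRouter"}
    ],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/firewall/sections",
    json=section,
    auth=AUTH,
    verify=False,  # lab-only
)
resp.raise_for_status()
print("Created edge firewall section:", resp.json().get("id"))
```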

VPN Services
We also have L2 and L3 VPNs available in the Edge, but they’re completely built and managed through the APIs at the moment. Sorry folks, no real VPN love in this series, just an honorable mention. No API-fu with this software-defined joker yet.

Load Balancer
Just remember, if you’re using a VM form factor Edge, it has to be Medium or Large to configure Load Balancing. Learn from my fail, where I deployed a Small Edge, and wondered for an hour or so why I couldn’t attach the LB to the Edge.

Ultimately, we recommend Large if you’re going to use Load Balancing. Medium supports the LB function, but really only for proof of concept use cases.

The Load Balancer in NSX-T is still ultimately a function of the Edge. No real news here. The configuration is pretty straightforward, where a Load Balancer instance is created, at least one Virtual Server is attached to the Load Balancer, and the LB is attached to a Tier 1 Logical Router.
Server Pools need to be created – the Load Balancing Algorithm is defined here, with Round Robin and Least Connections settings (along with weighted versions for those), as well as a simple IP Hash algorithm. You decide whether you’re going to do TCP multiplexing.

Then you decide whether you’re doing a Transparent load balancer, or statically or dynamically mapping the NAT. Transparent load balancing is only supported if the LB is inline with the servers in the pool.
Then you statically or dynamically choose pool members. Dynamic pools use NSGroups, where static is just that – predefined and static.

Finally, you decide what health monitors you want for pool members – these can be active or passive (or both, if you really want). NSX-T ships with default Active Health Monitors for HTTP, HTTPS, ICMP, and TCP. You can create more if you want, and decide how you want them to check for health.
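To make the pool and monitor pieces a little more concrete, here’s an illustrative sketch of creating a round-robin Server Pool with one of the built-in active HTTP monitors attached. The /api/v1/loadbalancer/... paths and field names are approximations of the 2.x Manager API, and the member addresses and monitor ID are placeholders.

```python
# Illustrative sketch: create a round-robin server pool with an active HTTP monitor.
# Paths and field names approximate the NSX-T 2.x load balancer API -- verify first.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")        # placeholder
HTTP_MONITOR_ID = "default-http-lb-monitor" # placeholder ID of a built-in active monitor

pool = {
    "display_name": "web-pool",
    "algorithm": "ROUND_ROBIN",             # or LEAST_CONNECTION, IP_HASH, weighted variants
    "members": [
        {"ip_address": "172.16.10.11", "port": "80", "weight": 1},
        {"ip_address": "172.16.10.12", "port": "80", "weight": 1},
    ],
    "active_monitor_ids": [HTTP_MONITOR_ID],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/loadbalancer/pools",
    json=pool, auth=AUTH, verify=False,     # lab-only: skip certificate validation
)
resp.raise_for_status()
print("Created pool:", resp.json().get("id"))
```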

When you create Virtual Servers, you get to decide whether you’re load balancing at L4 or L7, TCP or UDP, which Application Profile you’re going to use, the Virtual Server’s IP address, its associated Server Pools, and what kind of persistence you want.

Then you just toss all that together with the LB itself, and off you go! The biggest differences here (that I see) are that the objects we need to create for a load balancer are somewhat consolidated, and that we need to make sure we have large enough Edges to support the Load Balancer. And Application Rules are gone, replaced with rewrite and redirection policies called LB Rules, managed in the Virtual Server.
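And here’s roughly what “tossing it all together” might look like in API terms: a Virtual Server referencing the pool, and a Load Balancer service that carries the Virtual Server and attaches to a Tier-1 Logical Router. As before, the endpoints and field names are my approximation of the 2.x API, and every ID is a placeholder.

```python
# Illustrative sketch: virtual server + load balancer service attached to a Tier-1 router.
# Endpoints and field names approximate the NSX-T 2.x API -- verify against the API guide.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"       # placeholder
AUTH = ("admin", "VMware1!VMware1!")            # placeholder
POOL_ID = "pool-uuid-goes-here"                 # placeholder: pool created earlier
APP_PROFILE_ID = "default-http-lb-app-profile"  # placeholder application profile
TIER1_ROUTER_ID = "tier1-uuid-goes-here"        # placeholder Tier-1 logical router

vs = {
    "display_name": "web-vip",
    "ip_protocol": "TCP",
    "ip_address": "192.168.100.50",   # the VIP
    "port": "80",                     # may be "ports": ["80"] depending on version
    "application_profile_id": APP_PROFILE_ID,
    "pool_id": POOL_ID,
}
vs_resp = requests.post(f"{NSX_MANAGER}/api/v1/loadbalancer/virtual-servers",
                        json=vs, auth=AUTH, verify=False)
vs_resp.raise_for_status()

lb_service = {
    "display_name": "tenant-a-lb",
    "size": "SMALL",                  # LB instance size -- the Edge hosting it still has to be Medium or Large
    "virtual_server_ids": [vs_resp.json()["id"]],
    "attachment": {"target_id": TIER1_ROUTER_ID, "target_type": "LogicalRouter"},
}
resp = requests.post(f"{NSX_MANAGER}/api/v1/loadbalancer/services",
                     json=lb_service, auth=AUTH, verify=False)
resp.raise_for_status()
print("Created load balancer:", resp.json().get("id"))
```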

L2 Bridging
We can’t forget the venerable bridging capabilities. They haven’t been mentioned anywhere else yet. In NSX-T 2.1, we had to define a Bridge Cluster and populate it with ESXi Transport Nodes to get L2 bridging. That’s still around, but in NSX-T 2.2, a far better option is available (relatively speaking, of course – it’s 2018, bridging should be a niche use case these days, but it’s here if you need it).

Today, we can create Bridge Profiles and attach them to the Edge, which does a number of things for us:
* We can use Edges rather than ESXi hosts for bridging – that sounds less expensive already!
* Edges use DPDK for forwarding – so now bridging is (potentially) faster
* Edges have a firewall, which means we can limit traffic going to or from the bridged network

I think we have a winner here, for those rare occasions that L2 adjacency is necessary, especially for things like migrations from physical networks to logical.

So, there we have it, a quick rundown of things that need Services Routers on the Edge.

 

~$ history
Introduction: From NSX-V to NSX-T. An Adventure
NSX-T: The Manager of All Things NSX
The Hall of the Mountain King. or “What Loot do We Find in nsxcli?”
Three Controllers to Rule Them All (that just doesn’t have the same ring to it, does it?)
Beyond Centralization: The Local Control Plane
Transport Zones, Logical Switches, and Overlays! Oh, My!
Which Way Do We Go? Let’s ask the Logical Router!
If You’re Not Living on the Edge, You’re Taking Up Too Much Room

If You’re Not Living on the Edge, You’re Taking Up Too Much Room

We talked about the Edge last time.  But we really didn’t get into the differences that we might expect.  I just said “we’ll get to that later”.  I suppose it’s later by now.

The Edge in NSX-T is little more than a container.  No, not “container” in a Docker kind of sense, but in a “pool of resources for network services” kind of sense.

See, the Edge is still a virtual machine, except when it isn’t.  In NSX-T, we have our choice of form factors – 3 sizes of virtual machine, and then bare metal.  Need a really, really big Edge?  Get out that server, we’re making a big network device!

So what are the form factors for an Edge these days?

  • Small – 2 vCPU, 4 GB RAM, 120 GB disk
  • Medium – 4 vCPU, 8 GB RAM, 120 GB disk
  • Large – 8 vCPU, 16 GB RAM, 120 GB disk
  • Bare Metal – 8 CPU, 32 GB RAM, 200 GB disk (minimums, naturally)

The virtual machine Edges come with 4 vmxnet3 vNICs installed.  Bare metal, well, we’ve got to watch out.  Chances are, your Intel X520, X540, X550, or X710s will work.  Check the docs for specifics – it’s all spelled out there in the System Requirements section of the Installation Guide.

If you give a mouse an Edge, he’ll probably realize that it’s a single point of failure, and he’ll ask for another.

We can’t use Edges right after they’ve been deployed.  They’re really just Fabric Nodes at that point, joined to the Management Plane and just taking up resources.  They need to be promoted to Transport Nodes before they’ll be useful.

Because, while an Edge will still act as the North/South perimeter of our Software-Defined Network, it’s more than that.  An Edge is actually an active participant in the network – NSX host switches and TEPs will be installed on the Edge as part of this process.  And it runs distributed routing processes so that traffic coming through the Edge destined to a workload on a logical switch has an efficient routing path to get there. But TEPs and distributed routers are not all that are deployed to the Edge.
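As a quick aside before we get to what else lands on the Edge: once you’ve promoted an Edge (through the UI or the transport-node APIs), a short script like this sketch can confirm that it actually made it to Transport Node status. I’m assuming the GET /api/v1/transport-nodes and .../state calls behave the way I remember from 2.x, and the manager address and credentials are placeholders.

```python
# Sketch: list transport nodes and check the realization state of each one.
# Assumes GET /api/v1/transport-nodes and .../state exist as in NSX-T 2.x.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")        # placeholder

nodes = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json().get("results", [])

for node in nodes:
    state = requests.get(
        f"{NSX_MANAGER}/api/v1/transport-nodes/{node['id']}/state",
        auth=AUTH, verify=False,            # lab-only: skip certificate validation
    ).json()
    print(node.get("display_name"), "->", state.get("state"))
```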

If you recall, from the logical routing post, the services router (SR) was mentioned.  The SR always lives on an Edge, whether the SR belongs to a Tier 0 or a Tier 1 router.  You can certainly influence _which_ Edge an SR runs on, but it’ll always be on one.

Let’s recap the kinds of things that might be deployed as a SR:

  • NAT
  • BGP (Tier 0 only)
  • Firewall
  • Load Balancer

That’s an awful lot of stuff in just 4 bullet points.  We’ll get into the specifics of those later.  These services are generally stateful, and generally need to be centralized.  So we centralized them.

Because some things are centralized, there should probably be some measures taken for high availability (thus my nod to Laura Numeroff and Felicia Bond a couple of paragraphs ago, in case you missed the reference and thought I was simply going mad).  For HA of SRs, we need an Edge Cluster.

An Edge Cluster is simply a grouping of 1 or more identical form factor Edge transport nodes (maximum of 8) put together as a larger logical construct.  A container for your network services containers, if you will.  It’s pretty straightforward to create an Edge Cluster: simply add one in the Edge Clusters tab of Fabric > Nodes, add the Edge Transport Nodes you want participating, define the cluster profile, and off you go.

There’s not much to an Edge Cluster, really.  And not much to the cluster profile, either.  The profile simply defines the BFD probe interval, how many hops are allowed, and how many probes have to be lost before we declare an Edge officially dead.
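If you’d rather script it than click through Fabric > Nodes, here’s a rough sketch of creating that cluster profile and an Edge Cluster that uses it. The resource types and BFD field names are my recollection of the 2.x API (and the values are just examples), so verify before leaning on them.

```python
# Rough sketch: an Edge cluster profile (BFD tuning) and an Edge cluster referencing it.
# Resource types and field names approximate the NSX-T 2.x API -- verify before relying on them.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")        # placeholder
EDGE_TN_IDS = ["edge-tn-1-uuid", "edge-tn-2-uuid"]  # placeholder Edge transport node IDs

profile = {
    "resource_type": "EdgeHighAvailabilityProfile",
    "display_name": "edge-ha-profile",
    "bfd_probe_interval": 1000,          # example: milliseconds between BFD probes
    "bfd_allowed_hops": 1,
    "bfd_declare_dead_multiple": 3,      # missed probes before an Edge is declared dead
}
profile_resp = requests.post(f"{NSX_MANAGER}/api/v1/cluster-profiles",
                             json=profile, auth=AUTH, verify=False)  # lab-only
profile_resp.raise_for_status()

cluster = {
    "display_name": "edge-cluster-01",
    "members": [{"transport_node_id": tn} for tn in EDGE_TN_IDS],
    "cluster_profile_bindings": [
        {"profile_id": profile_resp.json()["id"],
         "resource_type": "EdgeHighAvailabilityProfile"}
    ],
}
resp = requests.post(f"{NSX_MANAGER}/api/v1/edge-clusters",
                     json=cluster, auth=AUTH, verify=False)
resp.raise_for_status()
print("Created Edge cluster:", resp.json().get("id"))
```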

Most services operate statefully, and support only Active/Standby failover.  

We can configure routers as Active/Active or Active/Standby, and that will define what you can do with that router.  It’s worth noting that, if you configure a router as Active/Active, it is essentially a stateless device.  NAT is only available as Reflexive (or Stateless) NAT. The Edge Firewall is also stateless.

In Active/Standby, however, you can do stateful NAT and firewalling.  You can attach a Load Balancer.  But you have to choose your failover mode. If you ask me, you’re really choosing your fallback mode, but that’s not how it’s called out in the UI.  You have two failover mode choices:  Preemptive and Non-Preemptive.  What does that even mean?

It’s actually pretty simple.  In preemptive mode, you define a preferred member of your Edge cluster. This is the Edge we want to use.  If it fails, we’ve got others, so we’re not out for long.  What preemptive means, however, is that when our preferred Edge returns to service, NSX will preempt the service to move it back to the preferred node.  An automatic fallback, if you will.

In a non-preemptive configuration, we do not define a preferred node, so there’s nothing driving NSX to move a service back to a preferred location.  No automatic fallback of the service.
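Pulling those knobs together, a Tier-1 router pinned to an Edge cluster in Active/Standby with preemptive failover might be created with something like the sketch below. The field names approximate the 2.x Manager API and the IDs are placeholders.

```python
# Rough sketch: a Tier-1 logical router pinned to an Edge cluster in Active/Standby with
# preemptive failover. Field names approximate the NSX-T 2.x API -- verify before use.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")        # placeholder
EDGE_CLUSTER_ID = "edge-cluster-uuid"       # placeholder

tier1 = {
    "display_name": "tenant-a-tier1",
    "router_type": "TIER1",
    "high_availability_mode": "ACTIVE_STANDBY",   # ACTIVE_ACTIVE would make it effectively stateless
    "failover_mode": "PREEMPTIVE",                # or NON_PREEMPTIVE for no automatic fallback
    "edge_cluster_id": EDGE_CLUSTER_ID,
    "preferred_edge_cluster_member_index": [0],   # the "preferred" Edge for preemption
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/logical-routers",
                     json=tier1, auth=AUTH, verify=False)  # lab-only: skip cert checks
resp.raise_for_status()
print("Created Tier-1 router:", resp.json().get("id"))
```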

Is there more to talk about? Of course there is.  We’ve still got services to talk about, and tooling, and all kinds of good stuff.  Stay tuned!

 

~$ history
Introduction: From NSX-V to NSX-T. An Adventure
NSX-T: The Manager of All Things NSX
The Hall of the Mountain King. or “What Loot do We Find in nsxcli?”
Three Controllers to Rule Them All (that just doesn’t have the same ring to it, does it?)
Beyond Centralization: The Local Control Plane
Transport Zones, Logical Switches, and Overlays! Oh, My!
Which Way Do We Go? Let’s ask the Logical Router!

Which Way Do We Go? Let’s ask the Logical Router!

Moving around the data plane, we need to think about how, exactly, we’re going to get packets off of one logical switch and onto another.  Or out to the physical infrastructure, even.  

Logical routing in NSX-T is so very similar to what we have in NSX-V, but it’s entirely new at the same time.  

We still have the concept of a distributed router, embedded in the kernel to make routing decisions.  Pretty similar so far.  

Logical routers have interfaces, called Downlinks,  connected to the logical switches to enable routing between L2 domains. Still pretty much the same as we’re used to.

Logical routers no longer have a DLR Control VM. Well, there’s a bit of a departure from NSX-V.

Let’s reel things back into similarities.  We can still maintain two tiers of routing to keep tenants separated.  But we don’t refer to the tiers as the Distributed Logical Router and Edge Services Gateway any longer.  Now, it’s Tier-1 and Tier-0 routing, respectively.  This is where we have to start unlearning NSX-V things and relearning NSX-T things.

The NSX-T routers are all distributed, meaning the Tier-0 and Tier-1 routers are programmed on all the transport nodes throughout the transport zone.

So think about this logical topology:

[Figure: Sample 2-Tier Routing Topology]

Everything in the diagram, save for the physical router, is distributed across the transport zone.  

Except when they’re not.  Without diving too deep here, each of these routers, Tier 0, Tenant A Tier 1, and Tenant B Tier 1, is actually composed of two components: the distributed router (DR) and the services router (SR).

So let’s talk about these for a minute.  The distributed router component is the part that lives on each transport node.  This is the part that makes the routing decisions.  If I had two workloads: tenantA-Web and tenantB-Web, and those workloads were instantiated on the same hypervisor, traffic between them would not have to leave the host.  If tenantA-Web sent a ping to tenantB-Web, the traffic would go VM -> Tenant A Tier 1 -> Tier 0 -> Tenant B Tier 1 -> VM.

Perhaps breaking the environment up into tenants was a poor choice, as I likely wouldn’t allow traffic between them like that.  But that’s where other capabilities come in – the distributed firewall, for example.  We’ll talk about more of that kind of stuff as we continue through this series.

Anyway, we have our DR, but what’s this services router thing?  Simply put, it’s the component that we use for centralized or non-distributed services, such as dynamic routing, NAT, Firewalling, Load Balancing, L2 Bridging, etc.  

I know, you’re wondering, at this point, just what we’re thinking with a whole new routing structure when it sounds a lot like we’ve taken the model from NSX-V, and just distributed the ESG routing. Which is sort of true.  But it’s not.  Because the Tier 1 routers can run services just as well as the Tier 0 routers, which means everyone gets a services router component, assuming you’ve turned up one of these services.

That’s right, I can do NAT at Tier 0 or Tier 1, I can firewall at either tier, and on and on and on.  

But we’re not getting into the specifics of all of that yet.  

So where do these services routers live?  Since they’re centralized, we need someplace to put them.  How about on the Edge? We still have Edges in NSX-T, though they’re definitely no longer “Edge Services Gateways” – just Edges.  We’ll talk more about the specifics of them in the next installment.  For right now, just remember that all of the stateful or centralized services will live on an Edge node.

Another thing of note is that I don’t need to deploy a two-tier routing topology.  I can attach a Tier 0 logical router directly to a logical switch.  If you don’t need a complex topology, you don’t have to build one.  Just cut Tier 1 completely out of the picture, and you’ll be fine.
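For completeness, here’s a rough sketch of the plumbing between tiers: a pair of router link ports joining a Tier-1 to a Tier-0, plus a downlink from the Tier-1 to a logical switch port. If you’re going single-tier, you’d just create the downlink on the Tier-0 instead. The resource types and payload shapes are approximations of the 2.x API, and every ID is a placeholder.

```python
# Rough sketch: wiring a two-tier topology -- a router link port pair between Tier-0 and
# Tier-1, plus a downlink from the Tier-1 to a logical switch port. Resource types and
# field shapes approximate the NSX-T 2.x API -- verify against the API guide.
import requests

NSX = "https://nsx-mgr.lab.local"       # placeholder
AUTH = ("admin", "VMware1!VMware1!")    # placeholder
T0_ID, T1_ID = "tier0-uuid", "tier1-uuid"        # placeholder router IDs
LS_PORT_ID = "logical-switch-port-uuid"          # placeholder switch port for the downlink

def create_port(body):
    """POST a logical router port and return its ID."""
    r = requests.post(f"{NSX}/api/v1/logical-router-ports",
                      json=body, auth=AUTH, verify=False)  # lab-only
    r.raise_for_status()
    return r.json()["id"]

# Link port on the Tier-0 side, then a Tier-1 link port pointing back at it.
t0_link = create_port({"resource_type": "LogicalRouterLinkPortOnTIER0",
                       "logical_router_id": T0_ID,
                       "display_name": "to-tenant-a-t1"})
create_port({"resource_type": "LogicalRouterLinkPortOnTIER1",
             "logical_router_id": T1_ID,
             "display_name": "to-tier0",
             "linked_logical_router_port_id": {"target_id": t0_link}})

# Downlink from the Tier-1 to a logical switch (the tenant's gateway interface).
create_port({"resource_type": "LogicalRouterDownLinkPort",
             "logical_router_id": T1_ID,
             "linked_logical_switch_port_id": {"target_id": LS_PORT_ID},
             "subnets": [{"ip_addresses": ["172.16.10.1"], "prefix_length": 24}]})
```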

This is a lot of information.  It may not seem like it, but it really is.  We’ll dive into the Edge next in an effort to complete the picture here.  

 

~$ history
Introduction: From NSX-V to NSX-T. An Adventure
NSX-T: The Manager of All Things NSX
The Hall of the Mountain King. or “What Loot do We Find in nsxcli?”
Three Controllers to Rule Them All (that just doesn’t have the same ring to it, does it?)
Beyond Centralization: The Local Control Plane
Transport Zones, Logical Switches, and Overlays! Oh, My!

Transport Zones, Logical Switches, and Overlays! Oh, My!

Logical Switching

Everyone loves Logical Switching, right?! The ability to spin up a Layer 2 network whenever you need one is pretty darned cool. Arguably, VXLAN is groovy, too, taking an L2 frame from a VM and wrapping it up in VXLAN goodness to shoot over the underlay network. So VMware left that alone, right?

Wrong. VXLAN is yesterday’s overlay protocol. The future is here with GENEVE.

GENEVE, you say? What in the world is GENEVE? GENEVE stands for GEneric NEtwork Virtualization Encapsulation, and is still being standardized. Wanna know more? Check it out here.

Let’s summarize the draft, shall we? Every other network virtualization overlay out there (VXLAN, NVGRE, etc.) is fixed. They all do one thing. Not to disparage any of the others – they do what they do very well. But the data center is not a fixed thing. It needs flexibility. It changes, it grows, it has to support new use cases. And that’s why GENEVE was born. See, GENEVE supports all the same stuff as everything else for network virtualization – taking that important L2 frame and wrapping it up in a new set of headers to send across the underlay “backplane” to the destination TEP (Tunnel End Point) so that delivery to the destination VM can occur. What makes GENEVE powerful, though, is extensibility through a proposed set of TLV options that can be set. Did I mention that the Options field in the GENEVE header is variable-length? So all manner of things could possibly be done.

From a day-to-day perspective, the difference between GENEVE and VXLAN is negligible. GENEVE uses UDP/6081, where we’re used to VXLAN using UDP/4789. The Tunnel Endpoints are referred to as “TEPs” or “Tunnel Endpoints” rather than “VTEPs” or “VXLAN Tunnel Endpoints”. Wireshark, as a packet analyzer example, already recognizes and understands GENEVE. So you just keep doing things the way you’ve been doing them. Except that now your logical networks can span both ESXi and KVM hypervisors. See, we’re growing.
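If you want to see just how simple the fixed part of the GENEVE header is, here’s a quick Python sketch, based on the header layout in the draft, that pulls the version, option length, protocol type, and VNI out of the first eight bytes of a GENEVE packet (the bit after the UDP/6081 header).

```python
# Minimal sketch: parse the fixed 8-byte GENEVE header from the payload of a UDP/6081
# datagram, just to show how the VNI and variable option length are carried.
import struct

def parse_geneve_header(payload: bytes):
    """Return version, option length (bytes), protocol type, and VNI from a GENEVE packet."""
    ver_optlen, flags, proto = struct.unpack("!BBH", payload[:4])
    version = ver_optlen >> 6                  # top 2 bits
    opt_len = (ver_optlen & 0x3F) * 4          # remaining 6 bits, in 4-byte multiples
    vni = int.from_bytes(payload[4:7], "big")  # 24-bit virtual network identifier
    # payload[7] is reserved; the variable-length TLV options follow the fixed header.
    return {"version": version, "opt_len": opt_len, "protocol": hex(proto), "vni": vni}

# Example: a hand-built header with VNI 5001, no options, carrying Ethernet (0x6558).
sample = bytes([0x00, 0x00, 0x65, 0x58]) + (5001).to_bytes(3, "big") + b"\x00"
print(parse_geneve_header(sample))
# -> {'version': 0, 'opt_len': 0, 'protocol': '0x6558', 'vni': 5001}
```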

In addition to the change in encapsulation protocol, NSX-T no longer has a dependency on the vSphere Distributed Switch. This does mean a few things, though.

First, we no longer have to worry about those crazy long vxw-dvs-83-virtualwire-7-sid-10007-transitNetwork kinda port group names. Logical Networks show up as simply the network name, though they do have a pretty new icon signifying that they’re opaque objects to vCenter (meaning that vCenter really can’t do anything with them, but it knows they exist).

Second, KVM virtual machine attachment to a logical switch isn’t quite as easy as in vSphere. For those, we have to identify the UUID of the virtual NIC (hint: look at the VM’s XML file – “virsh dumpxml <VM domain>”, and look for “interfaceId”), and then create the Logical Port by attaching the VIF (Virtual Interface) in NSX Manager > Switching > Ports > Add, where we specify the VIF UUID we discovered. Not difficult, just a little tedious, and probably worth scripting if you’re adding a bunch of KVM virtual machines to a Logical Switch.
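Since scripting was just suggested, here’s a rough sketch of what that attachment could look like via the Manager API: create a Logical Port on the switch with a VIF attachment, using the interfaceId you pulled from virsh. The endpoint and payload shape are my recollection of the 2.x API, and the switch and VIF IDs are placeholders.

```python
# Rough sketch: attach a KVM VM's virtual interface (VIF) to a logical switch by creating
# a logical port with a VIF attachment. Payload shape approximates the NSX-T 2.x API.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")        # placeholder
LOGICAL_SWITCH_ID = "ls-uuid-goes-here"     # placeholder logical switch
VIF_UUID = "interfaceid-from-virsh-dumpxml" # the interfaceId found in the VM's XML

port = {
    "display_name": "kvm-web01-eth0",
    "logical_switch_id": LOGICAL_SWITCH_ID,
    "admin_state": "UP",
    "attachment": {"attachment_type": "VIF", "id": VIF_UUID},
}

resp = requests.post(f"{NSX_MANAGER}/api/v1/logical-ports",
                     json=port, auth=AUTH, verify=False)  # lab-only: skip cert checks
resp.raise_for_status()
print("Attached VIF as logical port:", resp.json().get("id"))
```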

Finally, don’t chew up all of your physical uplinks – NSX needs a couple of them for the NSX vSwitch that’s installed when you promote the host to a Transport Node.  I have a story I’ll tell about that later in this series.

If you recall from NSX-V, a logical switch exists in the concept of a transport zone.  NSX-T has transport zones as well, and they do precisely the same thing – define a compute scope for our logical networks.  But they’re not exactly the same all around.

In NSX-T we have two different types of transport zone.  One for overlay networks, and another for VLAN-backed networks. In other words, I have the option to build VLAN-backed logical switches.  That sounds a bit crazy, if you ask me.  Why in the world would we want to do that?! Well, the big reason is northbound connectivity to physical routers from the Edge.  I know, we haven’t talked about the Edge yet (that’s coming really soon, now that we’re talking about things in the data plane), so I’ll keep this brief.  The Edge will be configured with one or more VLAN transport zones to connect to upstream VLANs.  I’ll use this analogy again, but it’s essentially like creating VLAN-backed port groups on an NSX vSwitch.

So there you have it, a quick summary of what’s different in Logical Switching.