I’ve had a few questions about the network in my lab, since I’m teaching almost nothing but NSX these days. So let’s talk about it for a bit.
My network is purposefully simple. And I’ve just rebuilt pretty much everything, so it seems like a good time to document it.
At the edge of my network is a Ubiquiti Networks EdgeRouter Lite (ERL). It handles all of my routing inside the network, as well as routing to the outside world. It’s a three-interface device – one to the outside world (cable modem), one to my default VLAN and home network, and the third carved into a bunch of sub-interfaces for my lab VLANs.
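For anyone curious what that sub-interface setup looks like, here’s a rough sketch in EdgeOS syntax – the interface name, VLAN IDs, and subnets below are made up for illustration, not my actual addressing:

```
# EdgeOS (EdgeRouter Lite) -- hypothetical VLAN IDs and subnets
set interfaces ethernet eth2 description "Lab trunk"
set interfaces ethernet eth2 vif 20 description "Production Management"
set interfaces ethernet eth2 vif 20 address 172.16.20.1/24
set interfaces ethernet eth2 vif 30 description "Production NSX Transport"
set interfaces ethernet eth2 vif 30 address 172.16.30.1/24
commit ; save
```

Each `vif` is an 802.1Q sub-interface, so the router is the default gateway for every lab VLAN over a single trunked port.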
The two internal-facing interfaces are attached to a Cisco SG300-20 that I could also use for routing, but I chose to let the router deal with that. This is where I have several VLANs set up for my different environments, and that’s all I’ve done with the Cisco switch – no IGMP Snooping, no routing, just VLANs:
- Local Management – this is where all of my common stuff lives – the vCenter for my physical hosts, vROps, Log Insight, etc
- Production Management – this is where my GA-versioned vESXi hosts live, along with their relevant supporting pieces – vCenter, NSX Manager, etc
- Production NSX Control – I set this up simply to have a dedicated network for my NSX 6.1 Controllers. These could just as easily have gone into my Production Management VLAN
- Production NSX Transport – this is here to simulate a dedicated VXLAN transport network. Currently, this is superfluous, as NSX 6.0/6.1 VTEPs don’t deal well with VLAN tagging in a nested environment. Not sure what that’s all about, <sarcasm>I must be running in an unsupported config</sarcasm>
- Production Management Branch – this network provides a simulation of a remote site
- Production NSX Transport Branch – again, simulation of a remote site, but much like the Production NSX Transport, this one’s completely superfluous at the moment.
I’ve also got a matching set of VLANs for my non-GA environment, so that I can have both a stable and an unstable environment and maintain some level of isolation.
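Since the switch is only doing VLANs, the SG300 side of this stays short. A sketch in its IOS-like CLI – the VLAN IDs and port below are placeholders, not my real config:

```
! Cisco SG300 -- hypothetical VLAN IDs and uplink port
configure
vlan database
vlan 20,30,40,50
exit
interface gi1
 description "Trunk to EdgeRouter Lite"
 switchport mode trunk
 switchport trunk allowed vlan add 20,30,40,50
exit
```

No IGMP snooping, no routing – the router does all of the inter-VLAN work.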
Since my lab is completely nested, I also have VSAN and vMotion VLANs configured on my distributed switches, but they don’t map to anything in the physical network.
On the NSX side of things, well, I’m rebuilding that right now. My thought process, since this is a lab, is to attach my outside-facing Edge VMs to the relevant Management network, depending on where I need the Edge. This sort of flies in the face of having a dedicated Edge cluster, but hey, this is a lab 🙂
Inside the Edge, my DLR(s?) will attach to a common Transit network, as will the inside interfaces of the Edges. I’ll set up some OSPF areas so that the EdgeRouter Lite can advertise some networks into the Edge. The DLR will also advertise its routes up to the Edge, which will in turn advertise back to the ERL. This should be a pretty simple OSPF config. I could eliminate the need for OSPF between the Edge and the ERL simply by configuring a default route, but what fun is that?
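The NSX Edge and DLR sides of that OSPF setup get configured in the NSX UI, but the ERL side is just a few lines of EdgeOS. A sketch, with a made-up router ID and transit subnet:

```
# EdgeOS OSPF sketch -- router-id and transit network are hypothetical
set protocols ospf parameters router-id 10.0.0.1
set protocols ospf redistribute connected metric-type 2
set protocols ospf area 0.0.0.0 network 172.16.254.0/24
commit ; save
```

With `redistribute connected`, the ERL advertises the lab VLANs into OSPF, and it learns the Logical Switch networks that the DLR advertises up through the Edge.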
Then my workloads will attach to whatever Logical Switches I want them attached. The sky’s the limit inside the SDN.
For simplicity’s sake at this point, each network segment (VLAN or VXLAN) will have its own /24, though many of them could make do with a /28 or /29 pretty easily. But I’m not strapped for IP addresses, thanks to our friend RFC 1918, so I’m not going to make things any more complicated than I need to.
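If you want to see how little the smaller prefixes actually buy you, the math is quick to check with Python’s standard `ipaddress` module (the subnets here are just illustrative RFC 1918 space):

```python
# Compare usable host counts for a /24 versus the smaller /28 and /29
import ipaddress

for prefix in (24, 28, 29):
    net = ipaddress.ip_network(f"192.168.0.0/{prefix}")
    usable = net.num_addresses - 2  # subtract network and broadcast addresses
    print(f"/{prefix}: {usable} usable hosts")
# /24: 254 usable hosts
# /28: 14 usable hosts
# /29: 6 usable hosts
```

A /29 leaves room for a gateway and a handful of VMs, which is plenty for most of these segments – but with 10.0.0.0/8 to play with, there’s no pressure to bother.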
Everything works pretty well. Sure, I run into some goofy behavior once in a while (see the VTEP VLAN tagging thing above), but this environment is entirely unsupported. Honestly, it’s a miracle that any of this works at all, and is a galvanizing testament to what VMware software is actually capable of doing.
Someday, maybe I’ll draw this all up. But today is not that day.