VCIX-NV

VCIX-NV Objective 2.2 – Configure and Manage Layer 2 Bridging

Under today's objective we have the following topics:

  • Add Layer 2 Bridging
  • Connect Layer 2 Bridging to the appropriate distributed virtual port group

So first, let's do a little background on what we are doing and, more importantly, why. Layer 2 bridging, in this case, is an ability we have in NSX to take a workload that currently exists only in our NSX world and "bridge" it with the outside world. We could use this to reach a physical server being used as a proxy or gateway, for example. The bridge connects a logical switch inside NSX directly to a VLAN in the physical world; it is a Layer 2 connection, not a routed hop. I am going to borrow the picture from the NSX guide to try to simplify it a bit more. (credit to VMware for the pic)

In the above picture, you have the NSX VXLAN 5001 that is running our workload inside ESXi. We have a need to communicate with a physical workload, labeled as such. To do that, we have an NSX Edge logical router with L2 bridging enabled on it. The Bridging tab itself will allow us to choose a logical switch and a distributed port group to connect together. Here are the steps:

  1. If you don’t already have one, you will need to deploy a new Logical Router. To do that, you will need to go to the NSX Edges subsection of the Networking and Security screen of NSX.
  2. Click on the green + icon in the middle pane.
  3. The first information you will need to fill out is the Install Type and Name. The rest of the options we won't go over in this walkthrough.
  4. We will need to select Logical Router as the Install Type and then type in a name.
  5. On the next screen, we will need to input a password for the Edge device.
  6. On the Configure Deployment Screen, we will need to add an actual appliance here by clicking on the green + icon.
  7. This pops up a screen for us to select where we wish to place the device's compute and storage resources.
  8. On the Configure Interfaces screen, I’ve chosen to connect it to my management network. You don’t really need to configure an interface as the actual bridging work will be done by a different mechanism.
  9. You can click past the Default Gateway screen.
  10. Click Finish on the Ready to Complete screen and away you go.

Now the actual bridging mechanism is found by going into the Edge itself:

  1. Double click on the Edge device you are going to use for bridging.
  2. Click on Manage, then on the Bridging tab in the center pane.
  3. To add a bridge, click on the green + icon.
  4. Give the Bridge a name, select the Logical Switch you are bridging, and the VLAN Port Group you will be bridging to. (Just as a side note, none of the normal dvPort Groups will show up unless you have a VLAN assigned to them; something I discovered while writing this.)
  5. Once you click OK, you will exit back out to the Bridging screen, and you will now need to publish your changes to make it work.
  6. Once published, you will have a Bridging ID.
  7. You can have more than one bridge using the same Edge device, but they won't be able to bridge the same networks. In other words, you can't use bridges to connect the same VXLAN to two different VLAN Port Groups. (If you prefer scripting, see the API sketch after this list.)
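If you would rather script the bridge than click through the UI, the same configuration can be pushed through the NSX REST API. This is a minimal sketch, assuming the NSX 6.x API paths and placeholder object IDs from my lab: edge-5 for the logical router, virtualwire-1 for the logical switch, and dvportgroup-20 for the VLAN-backed port group.

# Configure an L2 bridge on a logical router through the NSX Manager API.
# All object IDs are placeholders; look up your own in the API or UI first.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
  -X PUT https://nsxmgr.lab.local/api/4.0/edges/edge-5/bridging/config \
  -d '<bridges>
        <enabled>true</enabled>
        <bridge>
          <name>Bridge-01</name>
          <virtualWire>virtualwire-1</virtualWire>
          <dvportGroup>dvportgroup-20</dvportGroup>
        </bridge>
      </bridges>'

A GET against the same URL should return the running bridge configuration, including the Bridging ID mentioned above.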

And that covers this objective. Stay tuned for the next objective!

Mike

VCIX-NV Objective 2.1 – Create and Manage Logical Switches

Recovering from dual hernia surgery and changing job roles… it's me and I'm back. Moving back into the Blueprint, we are working on Objective 2.1 – Create and Manage Logical Switches. We will be covering the following points in this blog post.

  • Create and Delete Logical Switches
  • Assign and configure IP addresses
  • Connect a Logical Switch to an NSX edge
  • Deploy services on a Logical Switch
  • Connect/Disconnect virtual machines to/from a Logical Switch
  • Test Logical Switch connectivity

First, it would probably be appropriate to make sure that we know what a logical switch can do. Just like its physical counterpart, an NSX logical switch creates a broadcast domain, a segment. This keeps broadcasts on one switch from spilling over to another, saving network bandwidth. You could argue this bandwidth is even more precious than regular physical bandwidth, because carrying it consumes not only the physical links but also processing on the hosts for VXLAN encapsulation, whereas normal traffic is handled by the ASICs on the physical network switches.

A logical switch is mapped to a unique VXLAN segment, which encapsulates the traffic and carries it over the physical network. The NSX Controllers are the central point from which all the logical switches are managed.

In order to add a logical switch, you must obviously have all the needed components set up and installed (NSX Manager, Controllers, etc.). I am guessing you have already done that. (If you'd rather use the API than the UI, there's a sketch after the steps.)

  1. In the vSphere Web Client, navigate to Home > Networking & Security > Logical Switches.
  2. If your environment has more than one NSX Manager, you will need to select the one you wish to create the switch on, and if you are creating a Universal Logical Switch, you will need to select the primary NSX Manager.
  3. Click on the green ‘+’ symbol.
  4. Give it a name and optional description
  5. Select the transport zone where you wish this logical switch to reside. If you select a Universal Transport Zone, it will create a Universal Logical Switch.
  6. You can click Enable IP Discovery if you wish to enable ARP suppression. This setting, enabled by default, minimizes ARP flooding on the segment.
  7. You can click Enable MAC learning if you have VMs that have multiple MAC addresses or Virtual NICs that are trunking VLANs.
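As promised, switch creation scripts nicely too. This is just a sketch against the NSX 6.x REST API, with vdnscope-1 standing in for your transport zone ID and the name and description made up:

# Create a logical switch (a "virtual wire") inside transport zone vdnscope-1.
# Control plane mode is inherited from the transport zone unless overridden.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
  -X POST https://nsxmgr.lab.local/api/2.0/vdn/scopes/vdnscope-1/virtualwires \
  -d '<virtualWireCreateSpec>
        <name>LS-Web</name>
        <description>Web tier segment</description>
        <tenantId>virtual wire tenant</tenantId>
      </virtualWireCreateSpec>'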

The next point, assign and configure IP addresses, is a bit confusing. There is no IP address you can "assign" to the logical switch itself; the switch has no interface of its own. What I am guessing they meant here is that you should be familiar with connecting an Edge gateway interface to a switch and with adding a VM to the switch. Both of these would, in a roundabout way, assign an IP address or subnet to a logical switch. That's the only thing I can think of, anyway.

The next bullet point is, connecting a logical switch to an NSX Edge. This is done quickly and easily.

  1. While you are in the Logical Switches section (Home > Networking & Security > Logical Switches), you would then click on the switch you want to add the Edge device to.
  2. Next, click the Connect an Edge icon.
  3. Select the Edge device that you wish to connect to the switch.
  4. Select the interface that you want to use.
  5. Type a name for the interface.
  6. Select whether the link will be Internal or Uplink.
  7. Select the connectivity status (connected or not).
  8. If the NSX Edge you are connecting has Manual HA Configuration selected, you will need to input both management IP addresses in CIDR format.
  9. Optionally, edit the MTU.
  10. Click Next and then Finish. (An API sketch of the same hookup follows this list.)
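The same hookup can be scripted as well. A hedged sketch for an Edge Services Gateway, assuming the 6.x vnics endpoint (logical router interfaces use a slightly different one) and placeholder IDs; pointing portgroupId at a virtualwire ID is what attaches the vNIC to the logical switch:

# Configure vNIC 1 on Edge edge-3 and attach it to logical switch virtualwire-1.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
  -X PUT https://nsxmgr.lab.local/api/4.0/edges/edge-3/vnics/1 \
  -d '<vnic>
        <index>1</index>
        <name>LS-Web-Internal</name>
        <type>internal</type>
        <portgroupId>virtualwire-1</portgroupId>
        <addressGroups>
          <addressGroup>
            <primaryAddress>172.16.10.1</primaryAddress>
            <subnetMask>255.255.255.0</subnetMask>
          </addressGroup>
        </addressGroups>
        <isConnected>true</isConnected>
      </vnic>'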

The next bullet point covers deploying services on a logical switch. This is accomplished easily by:

  1. Click on Networking & Security and then click on Logical Switches.
  2. Select the logical switch you wish to deploy services on.
  3. Click on the Add Service Profile Icon.
  4. Select the service and service profile that you wish to apply.

There is an important caveat here: the icon will not show up unless you have already installed the third-party virtual appliance in your environment. Otherwise your installation will look like mine and not have that icon.

The next bullet point, Connecting and Disconnecting VMs from a Logical Switch is also simply done.

  1. While in the Logical Switch section (kind of a theme here huh?), right click on the switch you wish to add the VM to.
  2. You have the option to Add or Remove VMs from that switch, as shown here in the pic.

The final point, testing connectivity, can be done numerous ways. The simplest would be to test a ping from one VM to another, which works on pretty much any VM with an OS on it. You can even test connectivity between switches, provided there is some sort of routing set up between them. If you only had one VM on that segment (switch) but you had an Edge on it as well, you could ping the Edge interface from the VM. There are many ways to test connectivity; a couple of quick examples are sketched below.
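Here is the sort of thing I'd run, both from inside a guest and from an ESXi host. The vmkping test sends a large, don't-fragment ping over the VXLAN TCP/IP stack to a remote VTEP, which proves the physical transport network really carries the bigger VXLAN frames. The IPs are placeholders from my lab:

# From a guest OS: basic reachability between two VMs on the same logical switch.
ping -c 4 172.16.10.20

# From the ESXi shell: ping a remote VTEP over the VXLAN netstack with
# don't-fragment set and a 1572-byte payload (~1600 bytes on the wire).
vmkping ++netstack=vxlan -d -s 1572 192.168.50.12

And with that, this post draws to a close. I will be back soon with the next objective, 2.2 – Configure and Manage Layer 2 Bridging.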

VCIX-NV Objective 1.3 Configure and Manage Transport Zones

Moving on to Objective 1.3, we will be covering the following topics:

  • Create Transport Zones according to a deployment plan
  • Configure the control plane mode for a Transport Zone
  • Add clusters to Transport Zones
  • Remove clusters from Transport Zones

So, beginning with the first point: Create Transport Zones according to a deployment plan. What is a transport zone? Simply put, a transport zone is a virtual fence around the clusters that can talk to each other over NSX. If you want a cluster to be able to talk to other clusters on NSX, they must be in the same transport zone. It is important to note that every VM in a cluster that is part of a transport zone will have access to that transport zone. Another thing to be careful of: while a transport zone can span multiple VDSs, you should be sure that all the clusters on a given VDS are included in the transport zone. Otherwise you can end up with machines that can't talk to each other because of the misalignment.

Shown in the above example, you can see that even though the DVS Compute_DVS spans two clusters, since you add to a transport zone by cluster, it is possible to have only half of the clusters that make up that DVS in the transport zone. This leaves the hosts in Cluster A unable to talk to anyone on the NSX networks.

On to the next point: Configure the control plane mode for a Transport Zone. You can choose between three different control plane modes:

  • Multicast
  • Unicast
  • Hybrid

These modes control how BUM (Broadcast, Unknown unicast, Multicast) traffic is replicated across the transport network.

Multicast replication mode depends on the underlying network being a full multicast implementation. The VTEPs on each host join a multicast group, so when BUM traffic is sent they will receive it. The advantage is that BUM traffic is only distributed to hosts that participate, potentially cutting the traffic down. The downside is that IGMP, PIM, and Layer 3 multicast routing are required at the hardware layer, adding complexity to the design.

Unicast replication mode is everything multicast is not. When a BUM packet is sent out, it is replicated to every other host on the local VTEP subnet. For each remote VTEP subnet, the source host picks one host and designates it the Unicast Tunnel End Point (UTEP); it forwards one copy of the frame to that UTEP, which in turn forwards it to all other hosts on its subnet. The advantage is not caring about the underlying hardware at all, which is great from a decoupling standpoint; the downside is that it uses a lot more bandwidth.

Hybrid replication mode is exactly that: a hybrid, a good mix of the above. Instead of needing everything multicast mode requires, only IGMP is used. Unicast is used between VTEP subnets to avoid the need for PIM and Layer 3 multicast routing, while locally within a subnet IGMP is used, which cuts the bandwidth down quite a bit. With hybrid mode, instead of a UTEP being used between segments, the designated host is called an MTEP, or Multicast Tunnel End Point.

Unicast is most commonly used in smaller environments, and Hybrid in larger ones.

As far as adding and removing clusters from transport zones, you can add clusters at two different times: when you initially create the transport zone, or afterwards. If you do it afterwards, you will need to be in the Installation submenu of the navigation menu on the left side of the screen. Then click on the Transport Zones tab and click on the transport zone you wish to expand. Then click on the Add Cluster icon, which looks like three little computers with a + symbol on the left side, and select the clusters you wish to add. To remove a cluster, you need to be in the same place, but click on the Remove Clusters icon instead. If you want to script it, a rough sketch of the API call follows.
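This is a sketch, not gospel, assuming the NSX 6.x API and a placeholder cluster MoRef (domain-c7). It creates a unicast transport zone containing one cluster, which also ties back to the control plane modes above (the API values are UNICAST_MODE, HYBRID_MODE, and MULTICAST_MODE):

# Create a transport zone ("scope") with unicast replication and one member cluster.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
  -X POST https://nsxmgr.lab.local/api/2.0/vdn/scopes \
  -d '<vdnScope>
        <name>TZ-Lab</name>
        <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
        <clusters>
          <cluster>
            <cluster>
              <objectId>domain-c7</objectId>
            </cluster>
          </cluster>
        </clusters>
      </vdnScope>'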

That’s the end of section 1. Next up. Section 2. Create and Manage VMware NSX Virtual Networks.

Objective 1.2 – Prepare Host Clusters for Network Virtualization

As mentioned above the next objective is preparing your environment for network virtualization. We will cover the following topics specified in the blueprint.

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a cluster for NSX
    • Add / Remove Hosts from cluster
  • Configure appropriate teaming parameters for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

Kicking off with preparing distributed switching for NSX… First, we need to cover a little about distributed switches. A lot of people, myself included, just use standard switches due to their simplicity. Like an unmanaged hardware switch, there isn't much that can go wrong with one: it either works or it doesn't. There are a number of things you are missing out on, however, by not using distributed switches.

Distributed Switches can:

  • Shape inbound traffic
  • Be managed through a central location (vCenter)
  • Support PVLANs (yeah, I don't know anybody using these)
  • Support NetFlow
  • Support port mirroring
  • Support LLDP (Link Layer Discovery Protocol)
  • Support SR-IOV and 40Gb NICs
  • Support additional types of link aggregation
  • Provide port security
  • Provide NIOC (v6.x)
  • Provide multicast snooping (v6.x)
  • Provide multiple TCP/IP stacks for vMotion (v6.x)

These are the main improvements. You can see a more detailed list here – https://kb.vmware.com/s/article/1010555

The main takeaway though, if you didn’t already know, is that NSX won’t be able to do its job without distributed switches.

To prepare for NSX, you will need to make sure that all the distributed switches are created and the hosts are joined to them. Setups will differ from environment to environment, and you can join hosts to multiple distributed switches if need be. Most sample setups will have you separate your compute and management hosts and keep them on separate switches. There are advantages to doing it this way, but it can add complexity; just make sure, if you go this route, you know the reasons why and it makes sense for you. The other main thing to realize is that a minimum MTU of 1600 bytes is required. This is due to the additional overhead that VXLAN encapsulation adds to every frame.
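A quick way to confirm a host actually picked up the larger MTU is from the ESXi shell; this just lists the distributed switches the host participates in, with the grep there only to trim the output:

# Show each distributed switch this host is a member of, including its MTU.
esxcli network vswitch dvs vmware list | grep -E 'Name|MTU'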

For the purposes of the test, I am going to assume they will want you to know about the MTU, and how to add and remove hosts, VMkernel ports, and VMs from a distributed switch. This IS something you should probably already know if you have gone through VCP-level studies. If you don't, feel free to reach out to me and we'll talk, or reference one of the VMware books, Hands-on Labs, or other materials that can assist.

Next objective is preparing the cluster/s for NSX.

What are we doing when we prepare the cluster? The VMware installation bundles (VIBs) are loaded onto the hosts and installed. The number of VIBs installed depends on the version of NSX and ESXi. If you do need to look for them, they will be named as follows, in one of these groupings depending on version:

  • esx-vxlan, esx-dvfilter-switch-security, esx-vsip
  • esx-vxlan, esx-vsip
  • esx-nsxv

The control and management planes are also built.

When we click on the Host Preparation tab in Installation, we are presented with our clusters. Select the desired cluster, then click on Actions and Install; this will kick off the installation (an API sketch follows below). Note: if you are using stateless mode (non-persistent state across reboots), you will need to manually add the VIBs to the image.
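If you'd rather kick this off via the API, cluster preparation goes through the network fabric endpoint. Again a sketch, assuming the 6.x path and a placeholder cluster MoRef:

# Prepare cluster domain-c7 for NSX (installs the VIBs on its hosts).
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
  -X POST https://nsxmgr.lab.local/api/2.0/nwfabric/configure \
  -d '<nwFabricFeatureConfig>
        <resourceConfig>
          <resourceId>domain-c7</resourceId>
        </resourceConfig>
      </nwFabricFeatureConfig>'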

A few other housekeeping things: I'd imagine you already have things like DNS sorted, but if you didn't before, make sure the little stuff is in order. If you don't, weird issues can pop up at the worst time.

To check the VIBs installed on your ESXi hosts, open SSH on them and type in the following (note the lowercase e; the shell is case sensitive):

esxcli software vib list | grep esx

This will, regardless of version, give you all the installed VIBs with esx in the name.

In order to add a new host to an already prepared cluster, do the following:

  1. Add the server as a regular host
  2. Add the host to the distributed switch that the other hosts are part of and that is used for NSX
  3. Place the host into maintenance mode
  4. Add the host to the cluster
  5. Remove the host from maintenance mode

When the host is added to the cluster, it will automatically be installed with the necessary NSX VIBs, and DRS will balance machines over to the new host.

To remove a host from a prepared cluster:

  1. Place the host in maintenance mode
  2. Remove host from the cluster
  3. Make sure the VIBs are removed, and then repurpose the host however you want.

Configure appropriate teaming policy for a given implementation is next. I am going to lift some information from a Livefire class I just went through. First, when NSX is deployed to the cluster, a VXLAN port group is created automatically. The teaming option on this should be the same across all ESXi hosts and across all clusters using that VDS. You can see the port group created for the VTEPs in my environment.

You choose the teaming option when you configure VXLAN in the Host Preparation tab. The teaming mode determines the number of VTEPs you can use:

  • Route based on originating port = Multi-VTEP = Both uplinks active
  • Route based on MAC hash = Multi-VTEP = Both uplinks active
  • LACP = Single VTEP = Flow based
  • Route based on IP hash = Single VTEP = Flow based
  • Explicit failover = Single VTEP = One active

It is recommended you use Route based on originating port. The reasoning is that you avoid a single point of failure: a single VTEP failing would essentially cripple the host and the VMs residing on it until failover occurred or it was brought back online.

Configure VXLAN Transport parameters according to a deployment plan is last in this objective. This most likely covers configuring VXLAN on the Host Preparation tab and then configuring a Segment ID range on the Logical Network Preparation tab.

When you prepare VXLAN on the Host Preparation tab, this involves setting the VDS you are going to use, a VLAN ID (even if default), an MTU size, and a NIC teaming policy. One interesting thing: if your VDS is set to a lower MTU, changing it here will also change the VDS to match the VXLAN MTU. The number of VTEPs is not editable in the UI here. You can have the VTEPs assigned IPs from an IP pool that can be set up during this step. You can go back later to add or change parameters of the IP pool, or even add more IP pools, by going to the NSX Manager, managing it, and then going to Grouping Objects.

When everything is configured it will look similar to this:

Clicking Next takes you to the Segment ID screen. You can create one pool here; if you need to create more than one segment ID pool, you will need to do it via the API (a sketch follows below). Remember, the segment ID range essentially determines the number of logical switches you can create. While the VXLAN spec technically allows more than 16 million segment IDs, you are limited to 10,000 dvPortGroups in vCenter, and a much smaller subset is usually used. Here is mine; since it's a home lab, I'm not likely to be butting up against that 10k limit any time soon.
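Since the UI only lets you create the one pool, here is roughly what that API call looks like; the range and hostname are placeholders, and as usual treat this as a sketch against the 6.x API:

# Create a segment ID pool; repeat the POST to add additional pools.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
  -X POST https://nsxmgr.lab.local/api/2.0/vdn/config/segments \
  -d '<segmentRange>
        <name>Lab-Segments</name>
        <begin>5000</begin>
        <end>5999</end>
      </segmentRange>'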

And that’s the end of 1.2 Objective. Next up is the exciting world of Transport Zones in 1.3.

VCIX-NV Objective 1.1

So I started this journey a while ago; I let things get in the way, and here we are, trying to get back on track once again. This cert has eluded me longer than it should have.

I am going to try to do a little bit of mixed media in this Blog series, just to try to mix it up, but also to see if it helps me a little bit more. Hopefully these will help other people but most of all myself. Starting at the beginning, this is for Objective 1.1 which covers the following:

  • Deploy the NSX Manager virtual appliance
  • Integrate the NSX Manager with vCenter Server
    • Configure Single Sign On
    • Specify a Syslog Server
  • Implement and configure NSX Controllers
  • Exclude virtual machines from firewall protection

Starting with the first piece: deploying the NSX Manager OVA. The first thing you will need to check is the availability of resources for the manager. The manager requires 4 vCPUs, 16 GB of RAM, and 60 GB of disk space. This holds true all the way up to environments of around 256 hosts; at 256 or more hosts or hypervisors, it is recommended to increase to 8 vCPUs and 24 GB of RAM.

The rest of the OVA installation is run of the mill, the same as every other OVA deployment. Once done with that, you will need to connect the NSX Manager to a vCenter. The NSX Manager has a 1:1 relationship with vCenter, so you will only need to do this once, most of the time.

You will need to log in using admin and the password you set during setup. Once the site opens, click on the Manage vCenter Registration button to continue the installation.

Once the Registration page pulls up, you will need to enter your vCenter information to properly register it.

As you can see, I've already connected it to my vCenter. Once this is done, it should inject the Networking and Security plugin so that you will be able to manage NSX. You will want to make sure both show a Connected status. You can log into the vSphere Web Client and go to Administration and then Client Plug-Ins to see it there.

The next step is to set up a syslog server. This is easy since it is right in the UI. If you are still logged in from the vCenter registration, you want to click on Manage Appliance Settings and then General on the left side, and you will see the syslog settings below. If you'd rather script it, there's an API sketch after this paragraph.
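For completeness, here is roughly what that looks like against the manager's appliance-management API. I'm recalling the endpoint from the NSX API guide, so treat the path and JSON field names as assumptions to verify against your version:

# Point the NSX Manager at a syslog server; protocol can be UDP or TCP.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/json' \
  -X PUT https://nsxmgr.lab.local/api/1.0/appliance-management/system/syslogserver \
  -d '{"syslogServer": "loginsight.lab.local", "port": "514", "protocol": "UDP"}'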

I have set mine up to point at the Log Insight server in my environment. 514 is the standard syslog port, and it can run over UDP or TCP, IPv4 or IPv6. Once that is taken care of, the next piece is installing the Controllers. This is taken care of in the Web Client: click on Networking and Security under Home, and when it opens, click on Installation on the left side.

In the center pane at the top you will see NSX Managers, and under that, NSX Controller nodes. I have already installed two in my environment. To add another, you will need to click on the green + icon.

When you click on the green + the following will popup.

You will need to fill out all the information that has asterisks in front of it. Once you click OK, it will start to deploy; it will take a few minutes to finish. You will want to make sure you have enough resources before you start: each controller wants 4 vCPUs, 4 GB of RAM, and 28 GB of disk space. One cool thing to notice: once the controllers are done deploying, they each have a little box on the side letting you know the other ones are online. Just one of the things I think is really cool about NSX, how easy they make it to keep tabs on things.
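Controllers can also be deployed by API if you're building the environment hands-off. A hedged sketch; every ID in the spec below (IP pool, resource pool, datastore, port group) is a placeholder for your own inventory MoRefs, and the exact field set may vary slightly by NSX version:

# Deploy an additional NSX Controller; the password must meet the complexity rules.
curl -k -u 'admin:VMware1!' -H 'Content-Type: application/xml' \
  -X POST https://nsxmgr.lab.local/api/2.0/vdn/controller \
  -d '<controllerSpec>
        <name>controller-3</name>
        <ipPoolId>ipaddresspool-1</ipPoolId>
        <resourcePoolId>domain-c7</resourcePoolId>
        <datastoreId>datastore-10</datastoreId>
        <networkId>dvportgroup-15</networkId>
        <password>VMware1!VMware1!</password>
      </controllerSpec>'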

The last part we need to address now is excluding virtual machines from the firewall on each host. To do this you will need to click on the NSX Manager in the navigation pane, all the way at the bottom.

Once you click on that you will then need to click on the NSX manager instance.

Then in the middle, click on Manage. Then click on Exclusion List.

To add a virtual machine to the list, click on the green + icon. Then click on the virtual machine and move it from the left pane to the right. I would show that… but I have no virtual machines in my environment yet. The same thing can also be done through the API, sketched below.
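As a parting sketch, the exclusion list is also exposed through the API; the VM is referenced by its vCenter MoRef (vm-42 below is a placeholder), and I'm assuming the 2.1 API path here:

# Add a VM to the distributed firewall exclusion list by its MoRef ID.
curl -k -u 'admin:VMware1!' \
  -X PUT https://nsxmgr.lab.local/api/2.1/app/excludelist/vm-42

# List what is currently excluded.
curl -k -u 'admin:VMware1!' https://nsxmgr.lab.local/api/2.1/app/excludelist

And that is the end of the first objective. Stay tuned for the next.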