Objective 1.2 – Prepare Host Clusters for Network Virtualization

As mentioned above, the next objective is preparing your environment for network virtualization. We will cover the following topics from the blueprint.

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a cluster for NSX
    • Add / Remove Hosts from cluster
  • Configure appropriate teaming parameters for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

Kicking off with preparing the distributed switching for NSX… First, we need to cover a little about distributed switches. A lot of people, myself included, just use standard switches due to their simplicity. Like an unmanaged hardware switch, there isn’t much that can go wrong with one. It either works or it doesn’t. There are a number of things you are missing out on, however, by not using distributed switches.

Distributed Switches can:

  • Shape inbound traffic
  • Be managed through a central location (vCenter)
  • Support PVLANs (yeah, I don’t know anybody using these)
  • Support NetFlow
  • Support port mirroring
  • Support LLDP (Link Layer Discovery Protocol)
  • Support SR-IOV and 40 GbE NICs
  • Support additional types of link aggregation
  • Provide port security
  • Provide NIOC (v6.x)
  • Provide multicast snooping (v6.x)
  • Provide multiple TCP/IP stacks for vMotion (v6.x)

These are the main improvements. You can see a more detailed list here – https://kb.vmware.com/s/article/1010555

The main takeaway though, if you didn’t already know, is that NSX won’t be able to do its job without distributed switches.

To prepare for NSX you will need to make sure that all the distributed switches are created and that hosts are joined to them. Setups will vary depending on the environment, and you can join hosts to multiple distributed switches if need be. Most sample designs will have you separate your compute and management hosts onto separate switches. There are advantages to doing it this way, but it can add complexity; if you go this route, make sure you know the reasons why and that it makes sense for you. The other main thing to realize is that a minimum MTU of 1600 bytes is required. This is due to the additional overhead that VXLAN encapsulation adds to each frame.
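
If you want to sanity-check this from a host’s point of view, you can list the distributed switches it is joined to from the ESXi shell; the output includes the configured MTU. This is just a quick check, not an official procedure:

# Show the distributed switches this host is joined to (includes the MTU value)
esxcli network vswitch dvs vmware list

# Or pull out just the MTU lines if the full output is too noisy
esxcli network vswitch dvs vmware list | grep MTU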

For the purposes of the test I am going to assume that they will want you to know about the MTU, and how to add and remove hosts/vmkernel ports/VMs from a distributed switch. This IS something you should probably already know if you have gone through VCP level studies. If you don’t, feel free to reach out to me and we’ll talk, or reference one of the VMware books, Hands-on Labs, or other materials that can assist.

The next objective is preparing the cluster(s) for NSX.

What are we doing when we prepare the cluster? The VMware installation bundles (VIBs) are loaded onto the hosts and installed. The number of VIBs installed depends on the version of NSX and ESXi in use. If you do need to look for them, they will be named as follows, in the following groups:

esx-vxlan, esx-dvfilter-switch-security, esx-vsip
esx-vxlan, esx-vsip
esx-nsxv

The control and management planes are also built.
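
If you want to confirm that the control- and management-plane agents actually came up on a prepared host, the user world agents can be checked from the ESXi shell. These are the service names I have in my notes for NSX-v prepared hosts, so treat them as a starting point rather than gospel:

# Control plane agent (netcpa) status
/etc/init.d/netcpad status

# Message bus / firewall agent (vsfwd) status
/etc/init.d/vShield-Stateful-Firewall status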

When we click on the Host Preparation tab under Installation, we are presented with the clusters. Select the desired cluster, then click Actions and Install. This will kick off the installation. Note: if you are using stateless mode (non-persistent state across reboots), you will need to manually add the VIBs to the image.

A few other housekeeping things: I’d imagine you already have things like DNS sorted, but if not, make sure the little stuff is taken care of before you start. If you don’t, weird issues can pop up at the worst time.

To check which NSX VIBs are installed on your ESXi hosts, SSH to them and run the following:

esxcli software vib list | grep esx

This will, regardless of version, give you all the installed VIBs with esx in the name.

In order to add a new host to an already prepared cluster, do the following:

  1. Add the server as a regular host
  2. Add the host to the distributed switch that the other hosts are part of and that is used for NSX
  3. Place the host into maintenance mode
  4. Add the host to the cluster
  5. Remove the host from maintenance mode

When the host is added to the cluster, the necessary NSX VIBs will automatically be installed on it. DRS will also balance machines over to the new host.
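
If you prefer to handle the maintenance mode steps (3 and 5 above) from the command line instead of the vSphere Client, esxcli can do it. Keep in mind that, unlike entering maintenance mode through vCenter, this won’t trigger DRS to evacuate running VMs, so it’s really only useful on a freshly added, empty host:

# Put the host into maintenance mode (does not evacuate VMs the way vCenter/DRS does)
esxcli system maintenanceMode set --enable true

# Confirm the current state
esxcli system maintenanceMode get

# Take it back out once it has joined the prepared cluster
esxcli system maintenanceMode set --enable false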

To remove a host from a prepared cluster:

  1. Place the host in maintenance mode
  2. Remove host from the cluster
  3. Make sure the VIBs are removed (a reboot is required to complete the removal), and then place the host wherever you want it.
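
If that cleanup doesn’t finish on its own, or you just want to confirm the host is clean, you can check for and remove any leftover NSX VIBs by hand from the ESXi shell. Use whichever VIB names from the groups listed earlier your NSX/ESXi combination actually shows:

# See whether any NSX VIBs are still present
esxcli software vib list | grep esx

# Remove leftovers by name (adjust to the VIB names your version uses)
esxcli software vib remove -n esx-vxlan
esxcli software vib remove -n esx-vsip

# A reboot is needed to complete the removal
reboot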

Configure appropriate teaming policy for a given implementation is next. I am going to lift some information from a Livefire class I just went through for this. First, when NSX is deployed to the cluster, a VXLAN port group is created automatically. The teaming option on this should be the same across all ESXi hosts and across all clusters using that VDS. You can see the port group that is created for the VTEPs in my environment.

You choose the teaming option when you configure VXLAN in the Host Preparation tab. The teaming mode determines the number of VTEPs you can use.

  • Route based on originating port = Multi-VTEP = Both uplinks active
  • Route based on MAC hash = Multi-VTEP = Both uplinks active
  • LACP = Single VTEP = Flow based
  • Route based on IP hash = Single VTEP = Flow based
  • Explicit failover = Single VTEP = One active

It is recommended you use route based on originating port. The reasoning is that you get multiple VTEPs with both uplinks active, so you don’t have a single point of failure; a single-VTEP design would essentially cripple the host and the VMs residing on it until failover occurred or the link was brought back online.
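
Once VXLAN is configured with one of the multi-VTEP teaming options, you should see more than one VTEP vmkernel interface per host. A quick way to look from the ESXi shell, assuming the VTEPs live on the dedicated vxlan TCP/IP stack (the default for NSX-v):

# List only the vmkernel interfaces on the vxlan netstack (the VTEPs)
esxcli network ip interface list --netstack=vxlan

# Show the IPv4 settings for all vmkernel interfaces, VTEPs included
esxcli network ip interface ipv4 get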

Configure VXLAN Transport parameters according to a deployment plan is last in this objective. This most likely covers configuring VXLAN on the Host Preparation page and then configuring a Segment ID range on the Logical Network Preparation tab.

When you prepare VXLAN on the Host Preparation tab, this involves setting the VDS you are going to use, a VLAN ID (even if it’s the default), an MTU size, and a NIC teaming policy. One interesting thing: if your VDS is set to a lower MTU, configuring a larger value here will also change the VDS to match the VXLAN MTU. The number of VTEPs is not editable in the UI here. You can have the VTEPs assigned their IPs from an IP Pool, which can be set up during this step. You can go back later to add or change parameters of the IP Pool, or even add more IP Pools, by going to the NSX Manager, managing it, and then going to Grouping Objects.
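
A quick way to validate that the 1600-byte MTU actually works end to end is to ping between VTEPs with large, non-fragmentable packets from the ESXi shell. The destination address below is just a placeholder for another host’s VTEP IP in your environment:

# 1572-byte payload + 8-byte ICMP header + 20-byte IP header = 1600 bytes on the wire
# -d sets the don't-fragment bit; ++netstack=vxlan sources the ping from the VTEP vmkernel interface
# Replace 192.168.250.52 with the VTEP IP of another prepared host
vmkping ++netstack=vxlan -d -s 1572 192.168.250.52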

When everything is configured it will look similar to this:

Clicking the Next button takes you to the Segment ID page. You can create one range here; if you need to create more than one Segment ID pool, you will need to do it via the API. Remember, Segment IDs essentially determine the number of Logical Switches you can create. While the 24-bit VNI space technically allows more than 16 million, you are limited to 10,000 dvPortGroups in vCenter, so a much smaller subset is usually used. Here is mine. Since it’s a home lab, I’m not likely to be butting up against that 10k limit any time soon.
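
For reference, creating an additional Segment ID pool via the API looks roughly like the call below. This is from memory of the NSX-v REST API, so treat the endpoint and XML body as an approximation and double-check them against the API guide for your release; the manager address, credentials, and range values are all placeholders:

# Hypothetical example - verify the endpoint and body against the NSX API guide for your version
curl -k -u admin:VMware1! -X POST \
  -H "Content-Type: application/xml" \
  -d '<segmentRange><name>Second-Pool</name><begin>6000</begin><end>6999</end></segmentRange>' \
  https://nsxmgr-01a.corp.local/api/2.0/vdn/config/segments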

And that’s the end of Objective 1.2. Next up is the exciting world of Transport Zones in 1.3.