November 2017

Objective 1.2 – Prepare Host Clusters for Network Virtualization

As mentioned above, the next objective is preparing your environment for network virtualization. We will cover the following topics specified in the blueprint:

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a cluster for NSX
    • Add / Remove Hosts from cluster
  • Configure appropriate teaming parameters for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

Kicking off with preparing distributed switching for NSX… First, we need to cover a little about distributed switches. A lot of people, myself included, just use standard switches because of their simplicity. Like an unmanaged hardware switch, there isn’t much that can go wrong with one: it either works or it doesn’t. There are a number of things you miss out on, however, by not using distributed switches.

Distributed switches can:

  • Shape inbound traffic
  • Be managed through a central location (vCenter)
  • Support PVLANs (yeah, I don’t know anybody using these)
  • Provide NetFlow
  • Provide port mirroring
  • Support LLDP (Link Layer Discovery Protocol)
  • Support SR-IOV and 40Gb NICs
  • Offer additional types of link aggregation
  • Provide port security
  • Provide NIOC (v6.x)
  • Support multicast snooping (v6.x)
  • Support multiple TCP/IP stacks for vMotion (v6.x)

These are the main improvements; you can see a more detailed list here – https://kb.vmware.com/s/article/1010555

The main takeaway, though, if you didn’t already know, is that NSX won’t be able to do its job without distributed switches.

To prepare for NSX you will need to make sure that all the distributed switches are created and that hosts are joined to them. Setups vary from environment to environment, and you can join hosts to multiple distributed switches if need be. Most sample setups will have you separate your compute and management hosts and keep them on separate switches. There are advantages to doing it this way, but it can add complexity, so if you go that route make sure you know the reasons why and that it makes sense for you. The other main thing to realize is that a minimum MTU of 1600 bytes is required, due to the roughly 50 bytes of additional overhead that VXLAN encapsulation adds to each frame. A quick way to validate this end to end is shown below.
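Once hosts are prepared and have VTEP interfaces, a VTEP-to-VTEP vmkping with the don’t-fragment bit set will confirm the larger MTU is actually passing. This is just a sketch: the 1572-byte payload assumes a 1600-byte MTU (1600 minus the IP and ICMP headers), vxlan is the default NSX netstack name, and the remote VTEP IP is a placeholder for one from your own environment.

vmkping ++netstack=vxlan -d -s 1572 <remote-vtep-ip>

If that ping fails but a standard-size one succeeds, the MTU is not set consistently somewhere along the path.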

For the purposes of the test, I am going to assume they will want you to know about the MTU and how to add and remove hosts/VMkernel ports/VMs from a distributed switch. This is something you should probably already know if you have gone through VCP-level studies. If you don’t, feel free to reach out to me and we’ll talk, or reference one of the VMware books, Hands-on Labs, or other materials that can assist.

The next objective is preparing the cluster(s) for NSX.

What are we doing when we prepare the cluster? The VMware Installation Bundles (VIBs) are pushed to the hosts and installed. The number of VIBs depends on the versions of NSX and ESXi installed. If you do need to look for them, these are the names they go by, in the following groups:

  • esx-vxlan, esx-dvfilter-switch-security, esx-vsip
  • esx-vxlan, esx-vsip
  • esx-nsxv

The control plane and management plane components on the hosts are also built out during this process.

When we click on the Host Preparation tab under Installation, we are presented with the clusters. Select the desired cluster, then click Actions and Install to kick off the installation. Note: if you are using stateless mode (non-persistent state across reboots), you will need to manually add the VIBs to the host image.

A few other housekeeping items: I’d imagine you already have things like DNS sorted, but if you didn’t before, make sure the little stuff is squared away. If you don’t, weird issues can pop up at the worst time.

To check which NSX VIBs are installed on your ESXi hosts, SSH to them and run the following:

esxcli software vib list | grep esx

This will, regardless of version, give you all the installed VIBs with esx in the name.
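If you want more detail on a specific VIB (version, install date, and so on), esxcli can show that too. The VIB name below is just an example; substitute whichever of the names above applies to your version:

esxcli software vib get --vibname=esx-nsxv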

In order to add a new host to an already prepared cluster, do the following:

  1. Add the server to vCenter as a standalone host (outside the cluster for now)
  2. Add the host to the distributed switch that the other hosts are part of and that is used for NSX
  3. Place the host into maintenance mode
  4. Add the host to the cluster
  5. Remove the host from maintenance mode

When the host is added to the cluster, the necessary NSX VIBs will automatically be installed on it. DRS will also balance virtual machines over to the new host.

To remove a host from a prepared cluster:

  1. Place the host in maintenance mode
  2. Remove host from the cluster
  3. Make sure the VIBs are removed (see the check below), and then place the host wherever you want it.
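The same list command from earlier shows whether anything was left behind, and if the NSX VIBs do linger on a host that has left the cluster, they can be removed by hand. The VIB name is version dependent (see the groups above) and a reboot may be needed afterwards, so treat this as a sketch:

esxcli software vib list | grep esx
esxcli software vib remove --vibname=esx-nsxv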

Configure appropriate teaming policy for a given implementation is next. I am going to lift some information from a Livefire class I just went through for this. First, when NSX is deployed to the cluster, a VXLAN port group is created automatically. The teaming option on it should be the same across all ESXi hosts and across all clusters using that VDS. You can see the port group that is created for the VTEPs in my environment.

You choose the teaming option when you configure VXLAN in the Host Preparation tab. The teaming mode determines the number of VTEPs you can use.

  • Route based on originating port = Multi-VTEP = both uplinks active
  • Route based on MAC hash = Multi-VTEP = both uplinks active
  • LACP = Single VTEP = flow based
  • Route based on IP hash = Single VTEP = flow based
  • Explicit failover = Single VTEP = one uplink active

It is recommended you use route based on originating port (source port). The reasoning is to avoid a single point of failure: with a single VTEP, losing it would essentially cripple VXLAN traffic for the host and the VMs residing on it until failover occurred or it was brought back online.

Configure VXLAN Transport parameters according to a deployment plan is last in this objective. This most likely covers configuring VXLAN on the Host Preparation page and then configuring a Segment ID range on the Logical Network Preparation tab.

Preparing VXLAN on the Host Preparation tab involves setting the VDS you are going to use, a VLAN ID (even if it’s the default), an MTU size, and a NIC teaming policy. One interesting thing: if your VDS is set to a lower MTU, changing the value here will also update the VDS to match the VXLAN MTU. The number of VTEPs is not editable in the UI here. You can have the VTEPs assigned IPs from an IP pool, which can be set up during this step. You can go back later to add or change IP pool parameters, or even add IP pools, by going to the NSX Manager, managing it, and then going to Grouping Objects. A quick way to check from a host that the VTEPs got their addresses is shown below.
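This assumes the default NSX netstack name of vxlan; the first command lists the VTEP VMkernel interfaces on that netstack and the second shows their IPv4 settings, which should line up with the IP pool you configured.

esxcli network ip interface list --netstack=vxlan
esxcli network ip interface ipv4 get --netstack=vxlan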

When everything is configured it will look similar to this:

Going to the next button takes you to the Segment ID. You can create one pool here; if you need to create more than one segment ID pool, you will need to do it via the API (sketched below). Remember, segment IDs essentially determine the number of logical switches you can create. While you can technically define more than 16 million, you are limited to 10,000 dvPortGroups in vCenter, so a much smaller subset is usually used. Here is mine. Since it’s a home lab, I’m not likely to be butting up against that 10k limit any time soon.
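For reference, adding another segment ID pool through the API is a single POST to NSX Manager. The endpoint and XML below are from memory of the NSX-v API guide, and the pool name and range are made-up lab values, so double-check the schema against the API guide for your version:

curl -k -u admin:<password> -X POST \
  -H "Content-Type: application/xml" \
  -d '<segmentRange><name>lab-segments-2</name><begin>6000</begin><end>6999</end></segmentRange>' \
  https://<nsx-manager>/api/2.0/vdn/config/segments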

And that’s the end of 1.2 Objective. Next up is the exciting world of Transport Zones in 1.3.

VCIX-NV Objective 1.1

So I started this journey a while ago, let things get in the way, and here we are, trying to get back on track once again. This cert has eluded me longer than it should have.

I am going to try to do a little bit of mixed media in this blog series, partly to mix it up and partly to see if it helps me retain things a little better. Hopefully these posts will help other people, but most of all myself. Starting at the beginning, this is Objective 1.1, which covers the following:

  • Deploy the NSX Manager virtual appliance
  • Integrate the NSX Manager with vCenter Server
    • Configure Single Sign-On
    • Specify a Syslog Server
  • Implement and configure NSX Controllers
  • Exclude virtual machines from firewall protection

Starting with the first piece, deploying the NSX Manager OVA. The first thing you will need to check is the availability of resources for the manager. The manager requires 4 vCPUs, 16 GB of RAM, and 60 GB of disk space. This holds true for environments of up to 256 hosts. When the environment reaches 256 or more hosts or hypervisors, it is recommended to increase the appliance to 8 vCPUs and 24 GB of RAM.

The rest of the OVA installation is run of the mill, the same as every other OVA deployment. Once done with that, you will need to connect the NSX Manager to a vCenter. The NSX Manager has a 1:1 relationship with vCenter, so you will only need to do this once, most of the time.

You will need to log on using admin and the password you set during setup. Once the site opens, click on the Manage vCenter Registration button to continue the installation.

Once the Registration page pulls up, you will need to enter your vCenter information to properly register it.

As you can see, I’ve already connected it to my vCenter. Once this is done, it should inject the Networking & Security plugin so that you will be able to manage NSX. You will want to make sure both show a Connected status. You can log into the vSphere Web Client and go to Administration and then Client Plugins to see it there.

The next step is to set up a syslog server. This is easy since it is right in the UI. If you are still logged in from the vCenter registration, click on Manage Appliance Settings and then General on the left side, and you will see the below:

I have set mine up to point at the Log Insight server in my environment. 514 is the standard syslog port, and it can run over UDP or TCP, for either IPv4 or IPv6. Once that is taken care of, the next piece is installing the controllers. This is done in the web client: click on Networking and Security under Home, and when it opens, click on Installation on the left side.
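As a side note, the syslog setting can also be pushed through the NSX Manager appliance-management API instead of the UI. The path and element names below are from memory and the server address is a placeholder, so verify them against the API guide for your NSX version before relying on this:

curl -k -u admin:<password> -X PUT \
  -H "Content-Type: application/xml" \
  -d '<syslogserver><syslogServer>LOG-INSIGHT-IP</syslogServer><port>514</port><protocol>UDP</protocol></syslogserver>' \
  https://<nsx-manager>/api/1.0/appliance-management/system/syslogserver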

In the center pane, at the top you will see NSX Managers, and under that, NSX Controller nodes. I have already installed two in my environment. To add another, you will need to click on the green + icon.

When you click on the green + the following will popup.

You will need to fill out all the information that has asterisks in front of it. Once you click OK, it will start to deploy and will take a few minutes to finish. You will want to make sure you have enough resources before you start: each controller wants 4 vCPUs, 4 GB of RAM, and 28 GB of disk space. One cool thing to notice is that once the controllers are done deploying, they each have a little box on the side letting you know the other ones are online. Just one of the things I think is really cool about NSX – how easy they make it to keep tabs on things.
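If you want to double-check from the controller side, you can open a console or SSH session to a controller node and run the standard controller CLI commands below to verify cluster health and membership. Each node should report that it has joined the cluster and is connected to the cluster majority.

show control-cluster status
show control-cluster startup-nodes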

The last part we need to address is excluding virtual machines from the firewall on each host. To do this you will need to click on NSX Managers in the navigation pane, all the way at the bottom.

Once you click on that you will then need to click on the NSX manager instance.

Then in the middle, click on Manage. Then click on Exclusion List.

To add a virtual machine to the list, click on the green + icon, then click on the virtual machine and move it from the left pane to the right. I would show that… but I have no virtual machines in my environment yet. And that is the end of the first objective. Stay tuned for the next.