May, 2015

Objective 3.2: Configure Software Defined Storage

Back again!! This time we are going to go over the relatively new VSAN. VMware Virtual SAN originally came out in 5.5 U1, but it has been radically overhauled for 6.0 (a massive jump from VSAN 1.0 straight to 6). So what are we going to go over and need to know for the exam?

  • Configure/Manage VMware Virtual SAN
  • Create/Modify VMware Virtual Volumes (VVOLs)
  • Configure Storage Policies
  • Enable/Disable Virtual SAN Fault Domains

Now, this is not going to be an exhaustive guide to VSAN and its use, abilities, and administration. Cormac Hogan and Rawlinson Rivera already do that so well that there is no point. I have Cormac's blog linked to the right; he has more info than you can probably process. So we will concern ourselves with a high-level overview of the product and the objectives.

Here comes my 50-mile-high overview. VSAN is Software Defined Storage. What does this mean? While you still have physical drives and controllers, you pool them together and create logical containers (virtual disks) through software and the VMkernel. You can set up VSAN as a hybrid or an all-flash cluster. In the hybrid approach, magnetic media provides the storage capacity and flash provides the cache. In an all-flash cluster, flash devices are used for both jobs.

When you set up the cluster, you can do it on a new cluster or you can add the feature to an existing cluster. When you do, VSAN takes the disks and aggregates them into a single datastore available to all hosts in the VSAN cluster. You can later expand this by adding more disks, or additional hosts with disks, to the cluster. The cluster will run much better if all the hosts in it are as similar to each other as possible, just like your regular cluster. You can have machines that are just compute resources, with no local disk groups, and still be able to use the VSAN datastore.

In order for a host to contribute its disks, it has to have at least one SSD and at least one capacity disk (a spindle in a hybrid configuration). Those disks form what is known as a disk group. You can have more than one disk group per host, but each one needs at least that combination.

Virtual SAN manages data in the form of flexible data containers called objects; VSAN is known as object-based storage. An object is a logical volume that has its data and metadata distributed across the cluster. There are the following types of objects:

  • VM Home Namespace = this is where the configuration files are stored, such as the .vmx, log files, vmdk descriptor files, snapshot delta descriptor files, and so on.
  • VMDK = the .vmdk object stores the contents of the virtual machine's hard disk
  • VM Swap Object = this is created when the VM is powered on, just like normal
  • Snapshot Delta VMDKs = created when snapshots are taken of the VM; each delta is an object
  • Memory Object = created when the option to include memory is selected while taking the snapshot

Along with the objects, you have metadata that VSAN uses called a witness. This is a component that serves as a tiebreaker when a decision needs to be made regarding the availability of the surviving datastore components after a potential failure. There may be more than one witness, depending on your policy for the VM. Fortunately, this doesn't take up much space – approximately 2MB per witness on the old VSAN 1.0 and 4MB on version 2.0/6.0.

Part of the larger overall picture is being able to apply policies granularly. You are able to specify on a per-VM basis how many copies of something you want, versus a RAID 1 setup where you have a blanket copy of everything regardless of its importance. SPBM (Storage Policy Based Management) allows you to define performance and availability in the form of a policy. VSAN ensures that you have a policy for every VM, whether it is the default or a specific one for that VM. For best results you should create and use your own, even if the requirements are the same as the default.

So for those of us who used and read about VSAN 1.0, how does the new version differ? Quite a lot. This part is going to be lifted from Cormac's site (just the highlights):

  1. Scalability – Because vSphere 6.0 can now support 64 hosts in a cluster, so can VSAN
  2. Scalability – Now supports 62TB VMDK
  3. New on-disk format (v2) – This allows a lot more components per host to be supported. It leverages VirstoFS
  4. Support for All-Flash configuration
  5. Performance Improvement using the new Disk File System
  6. Availability improvements – You can separate racks of machines into Fault Domains
  7. New Re-Balance mechanism – rebalances components across disks, disk groups, and hosts
  8. Allowed to create your own Default VM Storage Policy
  9. Disk Evacuation granularity – You can evacuate a single disk now instead of a whole disk group
  10. Witnesses are now smarter – They can exercise more than a single vote instead of needing multiple witnesses
  11. Ability to light LEDs on disks for identification
  12. Ability to mark disks as SSD via UI
  13. VSAN supports being deployed on Routed networks
  14. Support of external disk enclosures.

As you can see this is a huge list of improvements. Now that we have a small background and explanation of the feature, let’s dig into the bullet points.

Configure/Manage VMware Virtual SAN

So first, as mentioned before, there are a few requirements that need to be met in order for you to be able to create and configure VSAN.

  • Cache = You need one SAS or SATA SSD or PCIe Flash Device that is at least 10% of the total storage capacity. They can’t be formatted with VMFS or any other file system
  • Virtual Machine Data Storage = For Hybrid group configurations, make sure you have at least one NL-SAS, SAS, or SATA magnetic drive (sorry PATA owners). For All Flash disk groups, make sure you have at least one SAS, SATA, or PCIe Flash Device
  • Storage Controller = One SAS or SATA Host Bus Adapter that is configured in pass-through or RAID 0 mode.
  • Memory = this depends on the number of disk groups and devices that are managed by the hypervisor. Each host should contain a minimum of 32GB of RAM to accommodate the maximum of 5 disk groups and 7 capacity devices per group
  • CPU = VSAN doesn’t take more than about 10% CPU overhead
  • If booting from SD, or USB device, the device needs to be at least 4GB
  • Hosts = You must have a minimum of 3 hosts for a cluster
  • Network = 1Gb networking for hybrid solutions, 10Gb for all-flash solutions. Multicast must be enabled on the switches – only IPv4 is supported at this time
  • Valid License for VSAN

Now that we have all those pesky requirements out of the way, let's get started on actually creating the VSAN cluster. The first thing we will need to do is create a VMkernel port for it. There is a new traffic option as of 5.5 U1, which is… VSAN. You can see it here:

After you are done, it will show up as enabled; you can check by looking here:
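If you would rather do the same thing from the command line, something like this works on a standard switch. The portgroup name, vmk number, and IP addressing below are just examples from a lab, so adjust to taste:

    # Create a VMkernel port on an existing standard switch portgroup (names are examples)
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VSAN-PG
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
    # Tag the new VMkernel port for Virtual SAN traffic
    esxcli vsan network ipv4 add --interface-name=vmk2
    # Verify which VMkernel ports are carrying VSAN traffic
    esxcli vsan network list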

Once that is done, you will need to enable the cluster for VSAN as well. This is done under the cluster settings, or when you create the cluster to begin with.

You have the option to automatically add the disks to the VSAN cluster; if you leave it in manual mode, you will need to add the disks yourself, and new devices are not added when they are installed. After you create it you can check its status on the Summary page.

You can also check on the individual disks and their health, and configure disk groups and Fault Domains, under Manage > Settings > Virtual SAN.
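For a quick sanity check from the host's shell, these two commands show cluster membership and the disks the host has claimed (just a sketch of what I normally run, output omitted):

    # Show this host's Virtual SAN cluster membership and state
    esxcli vsan cluster get
    # List the disks this host has contributed to VSAN (cache and capacity)
    esxcli vsan storage list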

Here is a shot from my EVO:RAIL with VSAN almost fully configured

The errors are because I don’t have the VLANs fully configured for them to communicate yet. There is a lot more we could work on with VSAN but I don’t have the blog space nor the time. So moving on….

Create/Modify VMware Virtual Volumes (VVOLs)

First, a quick primer on VVols. What are these things called Virtual Volumes? Why do we want them when LUNs have served us well for so long? If you remember, one of the cool advantages of VSAN is the ability to assign policies on a per-VM basis. But VSAN is limited to only certain capabilities. What if we want more? In come VVols. Using VVols and a storage array that supports them, you can apply any capabilities that array has on a per-VM basis. Stealing from a VMware blog for the definition, “VVols offer per-VM management of storage that helps deliver a software defined datacenter”. So what does this all mean? In the past you have had SANs with certain capabilities, such as deduplication or a specific RAID type. You would need a really good naming system or a DB somewhere to track which LUN was which. Now, however, we can just set a specific set of rules for the VM in a policy, and vSphere will find the storage matching that set of rules for us. Pretty nifty huh?

So how do you create and modify these things now? The easiest way is to create a new datastore just like you would a regular VMFS or NFS.

  1. Select vCenter Inventory Lists > Datastores
  2. Click on the create new datastore icon
  3. Click on Placement for the datastore
  4. Click on VVol as the type

  5. Now put in the name you wish to give it, and also choose the Storage Container that is going to back it. (kind of like a LUN – you would have needed to add a Protocol Endpoint and Storage Container before getting to this point)
  6. Select the Hosts that are going to have access to it
  7. Finish

Kind of working this backwards but how do you configure them? You can do the following 4 things:

  1. Register the Storage Provider for VVols = using VASA, you configure communication between the array and vSphere. Without this communication nothing will work with VVols.
  2. Create a Virtual Datastore = this is what lets you actually consume a storage container as a VVol datastore
  3. Review and Manage Protocol Endpoints = these are the logical proxies used to communicate between the virtual volumes and the virtual disks they encapsulate. Protocol endpoints are exported, along with their associated storage containers, by the VASA provider.
  4. (Optional) If your host uses iSCSI-based transport to communicate with the protocol endpoints representing a storage array, you can modify the default multipathing policy associated with them (there is a quick CLI check after this list)
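Once the VASA provider is registered, you can verify what the host actually sees from the shell. I believe these are the esxcli namespaces for VVols in 6.0, but treat the exact sub-commands as a sketch rather than gospel:

    # List the VASA providers the host knows about
    esxcli storage vvol vasaprovider list
    # List the protocol endpoints the array is presenting
    esxcli storage vvol protocolendpoint list
    # List the storage containers available to back a VVol datastore
    esxcli storage vvol storagecontainer list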

Configure Storage Policies

At the heart of all these changes is the storage policy. The storage policy is what enables all this wonderful magic to happen behind the scenes with you, the administrator, blissfully unaware. Let's go ahead and define it as VMware would like it defined: “A vSphere storage profile defines storage requirements for virtual machines and storage capabilities of storage providers. You use storage policies to manage the association between virtual machines and datastores.”

Where is it found? On the home page in your web client under… Policies and Profiles. Anticlimactic, I know. Here is a picture of what you see when you click on it.

This gives you a list of all the profiles and policies associated with your environment. We are currently interested only in the Storage Policies, so let's click on that. Depending on what products you have set up, yours might look a little different.

You can have Storage policies based off one of the following:

  • Rules based on Storage-Specific Data Services = these are based on data services that entities such as VSAN and VVols can provide, for example deduplication
  • Rules based on Tags = these are tags you, as an administrator, associate with specific datastores. You can apply more than one per datastore

Now we dig in. The first thing we need to do is make sure that storage policies are enabled for the resources we want to apply them to. We do that by clicking on the Enable button underneath storage policies.

When enabled, the next screen will look like this (with your own resource names in there, of course):

We can go ahead and create a storage policy now and be able to apply it to our resources. When you click on Create New VM Storage Policy, you will be presented with this screen:

Go ahead and give it a name and optionally a description. On the next screen we will define the rules that are based on our capabilities

In this one I am creating one for a Thick provisioned LUN


Unfortunately none of my datastores are compatible. You can also configure based off of tags you associate on your datastores.
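One side note: separate from the SPBM policies we just built in the web client, each host also carries a default VSAN policy it falls back on, and you can inspect it from the shell. A minimal sketch:

    # Show the default VSAN policy values the host applies per object class
    # (cluster, vdisk, vmnamespace, vmswap)
    esxcli vsan policy getdefault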

Enable/Disable Virtual SAN Fault Domains

This is going to be a quick one, as I am a bit tired of this post already. In order to work with Fault Domains you will need to go to the VSAN cluster and then click on Manage and Settings. On the left-hand side you will see Fault Domains; click on it. You now have the ability to segregate hosts into specific fault domains. Click on the add (+) icon to create a fault domain and then add the hosts you want to it. You will end up with a screen like this:
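If you ever want to script this instead of clicking through the web client, I believe 6.0 also added an esxcli namespace for fault domains. A rough sketch (the fault domain name is made up):

    # Show which fault domain this host currently belongs to
    esxcli vsan faultdomain get
    # Assign this host to a named fault domain
    esxcli vsan faultdomain set --fdname=Rack-A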

Onwards and Upwards to the next post!!


Objective 3.1: Manage vSphere Storage Virtualization

Wait, wait, wait… Where did objective 2.2 go? I know… I didn't include it, since I have already covered all the questions it asks in previous objectives. So moving on to storage.

So we are going to cover the following objective points.

  • Identify storage adapters and devices
  • Identify storage naming conventions
  • Identify hardware/dependent hardware/software iSCSI initiator requirements
  • Compare and contrast array thin provisioning and virtual disk thin provisioning
  • Describe zoning and LUN masking practices
  • Scan/Rescan storage
  • Configure FC/iSCSI LUNs as ESXi boot devices
  • Create an NFS share for use with vSphere
  • Enable/Configure/Disable vCenter Server storage filters
  • Configure/Edit hardware/dependent hardware initiators
  • Enable/Disable software iSCSI initiator
  • Configure/Edit software iSCSI initiator settings
  • Configure iSCSI port binding
  • Enable/Configure/Disable iSCSI CHAP
  • Determine use case for hardware/dependent hardware/software iSCSI initiator
  • Determine use case for and configure array thin provisioning

So let’s get started

Identify storage adapters and devices

Identifying storage adapters is easy. You have your own list to refer to: the Storage Adapters view. To navigate to it, do the following:

  1. Browse to the Host in the navigation pane
  2. Click on the Manage tab and click Storage
  3. Click Storage Adapters
    This is what you will see

As you can see, identification is relatively easy. Each adapter is assigned a vmhbaXX address. They are grouped under larger categories, and you are also given a description of the hardware, e.g. Broadcom iSCSI Adapter. You can find out a number of details about a device by looking down below and going through the tabs. As you can see, one of the tabs lists the devices that sit under that particular controller. Which brings us to our next item, storage devices.

Storage Devices is one more selection down from Storage Adapters, so naturally you navigate to it the same way and then just click on Storage Devices instead of Storage Adapters. Now that we are seeing those, it's time to move on to naming conventions to understand why they are named the way they are.
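The same information is available from the shell if the web client is not handy. A few commands worth knowing (output not shown here):

    # List all storage adapters (vmhba names, drivers, and link state)
    esxcli storage core adapter list
    # List all storage devices with their identifiers, size, and type
    esxcli storage core device list
    # List the paths from each adapter to each device
    esxcli storage core path list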

Identify Storage Naming Conventions

You are going to have multiple names for each type of storage and device. Depending on the type of storage, ESXi will use a different convention or method to name each device. The first type is SCSI inquiry identifiers. The host uses a SCSI INQUIRY command to query the device and uses the response to generate a unique name. These identifiers are unique, persistent, and will have one of the following formats:

  • naa.number = naa stands for Network Address Authority; it is followed by a string of hex digits that identify the vendor, device, and LUN
  • t10.number = T10 is the technical committee responsible for SCSI storage interface standards (and plenty of other disk standards as well)
  • eui.number = stands for Extended Unique Identifier

You also have the path-based identifier. This is created if the device doesn't provide the information needed to create the identifiers above. It will look like the following: mpx.vmhbaXX:C0:T0:L0 – this can be used just the same as the identifiers above. The C is for channel, the T is for target, and the L is for the LUN. It should be noted that this identifier type is neither unique nor persistent and could change with every reboot.

There is also a legacy identifier that is created. This is in the format vml.number

You can see these identifiers on the pages mentioned above and also at the command line by typing the following.

  • “esxcli storage core device list”

Identify hardware/dependent hardware/software iSCSI initiator requirements

You can use three types of iSCSI initiators with VMware: independent hardware, dependent hardware, and software initiators. The differences are as follows:

  • Software iSCSI initiator = this is code built into the VMkernel. It allows your host to connect to iSCSI targets without special hardware; you can use a standard NIC for this. It requires a VMkernel adapter and is able to use all CHAP levels
  • Dependent hardware iSCSI initiator = this device still depends on VMware for networking, iSCSI configuration, and management interfaces. This is basically iSCSI offloading; an example is the Broadcom 5709 NIC. It requires a VMkernel adapter (it will show up as both a NIC and a storage adapter) and is able to use all CHAP levels
  • Independent hardware iSCSI initiator = this device implements its own networking, iSCSI configuration, and management interfaces. An example is the QLogic QLA4052 adapter. It does not require a VMkernel adapter (it will show up as a storage adapter) and only supports the “Use unidirectional CHAP” and “Use unidirectional CHAP unless prohibited by target” CHAP levels

Compare and contrast array thin provisioning and virtual disk thin provisioning

You have two types of thin provisioning. The biggest difference between them is where the provisioning happens. The array can thinly provision the LUN. In this case it presents the total logical size to the ESXi host, which may be more than the real physical capacity. If so, there is really no way for your ESXi host to know that you are running out of space, which obviously can be a problem. Because of this, Storage APIs – Array Integration was created. Using this feature with a SAN that supports it, your hosts are aware of the underlying storage and are able to tell how your LUNs are configured. The requirements are simply ESXi 5.0 or later and a SAN that supports the Storage APIs for VMware.

Virtual disk thin provisioning is the same concept, but done for the virtual hard disk of the virtual machine. You are creating a disk and telling the VM it has more space than it might actually have. Because of this, you will need to monitor the status of that disk and its datastore in case the VM operating system starts trying to use that space.

Describe zoning and LUN masking practices

Zoning is a Fibre Channel concept meant to restrict which servers can see which storage arrays. Zoning defines which HBAs, or cards in the server, can connect to which storage processors on the SAN. LUN masking, on the other hand, only allows certain hosts to see certain LUNs.

With ESXi hosts you want to use single initiator zoning or single initiator-single target zoning. The latter is preferred. This can help prevent misconfigurations and access problems.

Rescan Storage

This one is pretty simple. In order to pick up new storage or see changes to existing storage, you may want to rescan your storage. You can do two different operations from the GUI client: 1) scan for new storage devices, and 2) scan for new VMFS volumes. The first will take longer than the second. You can also rescan storage at the command line with the following commands:

  • esxcli storage core adapter rescan --all = this will rescan all adapters for new storage devices
  • vmkfstools -V = this will scan for new VMFS volumes

I have included a picture of the web client with the rescan button circled in red.

Configure FC/iSCSI LUNs as ESXi boot devices

ESXi supports booting from Fibre Channel or FCoE LUNs as well as iSCSI. First we will go over Fibre Channel.

Why would you want to do this in the first place? There are a number of reasons. Among them, you remove the need to have storage inside the servers, which makes them cheaper and less prone to failure, since hard drives are the most likely component to fail. It is also easier to replace the servers: one server could die, you drop a new one in its place, change the zoning, and away you go. You also access the boot volume through multiple paths, whereas if it were local you would generally have one cable to go through, and if that fails, you have no backup.

You do have to be aware of the requirements, though. The biggest one is to have a separate boot LUN for each server; you can't share one across all the servers. You also can't multipath to an active-passive array. OK, so how do you do it? On an FC setup:

  1. Configure the array zoning and also create the LUNs and assign them to the proper servers
  2. Then, using the FC card's BIOS, you will need to point the card at the boot target and LUN
  3. Boot to install media and install to the LUN

On iSCSI, the setup is a little different depending on what kind of initiator you are using. If you are using an independent hardware iSCSI initiator, you will need to go into the card's BIOS to configure booting from the SAN. Otherwise, with a software or dependent initiator, you will need to use a network adapter that supports iBFT. Good recommendations from VMware include:

  1. Follow Storage Vendor Recommendations (yes I got a sensible chuckle out of that too)
  2. Use Static IPs to reduce the chances of DHCP conflicts
  3. Use different LUNs for VMFS and boot partitions
  4. Configure proper ACLs. Make sure the only machine able to see the boot LUN is that machine
  5. Configure a diagnostic partition – with independent you can set this up on the boot LUN. If iBFT, you cannot

Create an NFS share for use with vSphere

Back in 5.5 and before you were restricted to using NFS v3. Starting with vSphere 6, you can now use NFS 4.1. VMware has some recommendations for you about this as well:

  1. Make sure the NFS servers you use are listed in the HCL
  2. Follow recommendations of your storage vendor
  3. You can export a share as v3 or v4.1, but you can't present the same share as both
  4. Ensure it’s exported using NFS over TCP/IP
  5. Ensure you have root access to the volume
  6. If you are exporting a read-only share, make sure it is consistent. Export it as RO and make sure when you add it to the ESXi host, you add it as Read Only.

To create a share do the following (there is an esxcli alternative after these steps):

  1. On each host that is going to access the storage, you will need to create a VMkernel Network port for NFS traffic
  2. If you are going to use Kerberos authentication, make sure your host is setup for it
  3. In the Web Client navigator, select vCenter Inventory Lists and then Datastores
  4. Click the Create a New Datastore icon
  5. Select Placement for the datastore
  6. Type the datastore name
  7. Select NFS as the datastore type
  8. Specify an NFS version (3 or 4.1)
  9. Type the server name or IP address and mount point folder name (or multiple IP’s if v4.1)
  10. Select Mount NFS read only – if you are exporting it that way
  11. You can select which hosts that will mount it
  12. Click Finish
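And here is roughly the esxcli version of the same mount, if you are doing it per host at the command line. The server name, export path, and datastore names below are made-up examples:

    # Mount an NFS v3 export as a datastore
    esxcli storage nfs add --host=nas01.lab.local --share=/vol/ds01 --volume-name=NFS-DS01
    # Mount an NFS 4.1 export instead (vSphere 6.0 and later)
    esxcli storage nfs41 add --hosts=192.168.10.20 --share=/vol/ds02 --volume-name=NFS41-DS02
    # Confirm the mounts
    esxcli storage nfs list
    esxcli storage nfs41 list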




Enable/Configure/Disable vCenter Server storage filters

When you look at your storage, and when you add more or do other similar operations, VMware by default applies a set of storage filters. Why? Well, there are four filters. The filters and the explanations for them are as follows:

  1. config.vpxd.filter.vmfsFilter = this filters out LUNs that are already used by a VMFS datastore on any host managed by a vCenter server. This is to keep you from reformatting them by mistake
  2. config.vpxd.filter.rdmFilter = this filters out LUNs already referenced by a RDM on any host managed by a vCenter server. Again this is protection so that you don’t reformat by mistake
  3. config.vpxd.filter.SameHostAndTransportsFilter = this filters out LUNs that are ineligible for use due to storage type or host incompatibility. For instance, you can't add a Fibre Channel extent to a datastore built on iSCSI LUNs
  4. config.vpxd.filter.hostRescanFilter = this prompts you to rescan anytime you perform datastore management operations. This tries to make sure you maintain a consistent view of your storage

And in order to turn them off you will need to do it on the vCenter (makes sense huh?) You will need to navigate to the vCenter object and then to the Manage tab.

You will then need to add these settings, since they are not in there for you to change willy-nilly by default. So click on Edit, then type in the appropriate filter name and type in false for the value. Like so:

Configure/Edit hardware/dependent hardware initiators

A dependent hardware iSCSI adapter still uses VMware networking and the iSCSI configuration and management interfaces provided by VMware. This device presents two components to VMware: a NIC and an iSCSI engine. The iSCSI engine shows up under Storage Adapters (vmhba). In order for it to work, though, you still need to create a VMkernel port for it and bind it to the physical network port. Here is a picture of how it looks underneath the storage adapters for a host.

There are a few things to be aware of while using a dependent initiator.

  1. When you are using the TCP offload engine, you may see little or no activity on the NIC associated with the adapter. This is because the host passes all the iSCSI traffic to the engine, which bypasses the regular network stack
  2. If using the TCP offload engine, it has to reassemble the packets in hardware and there is a finite amount of buffer space. You should enable flow control in order to be able to better manage the traffic (pause frames anyone?)
  3. Dependent adapters will support IPv4 and IPv6

To set up and configure them you will need to do the following (there is a CLI sketch after these steps as well):

  1. You can change the alias or the IQN if you want by going to the host, then Manage > Storage > Storage Adapters, highlighting the adapter, and clicking Edit
  2. I am assuming you have already created a VMKernel port by this point. The next thing to do would be to bind the card to the VMKernel
  3. You do this by clicking on the ISCSI adapter in the list and then clicking on Network Port Binding below
  4. Now click on the add icon to associate the NIC and it will give you this window
  5. Click on the VMkernel you created and click Ok
  6. Go back to the Targets section now and add your ISCSI target. Then rescan and voila.
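Here is the rough CLI equivalent of steps 2 through 6. The adapter name, VMkernel port, and target address are placeholders from a lab, so substitute your own:

    # Bind the VMkernel port to the dependent iSCSI adapter (network port binding)
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    # Confirm the binding
    esxcli iscsi networkportal list --adapter=vmhba33
    # Add a dynamic (send targets) discovery address
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.20.50:3260
    # Rescan the adapter so the new LUNs show up
    esxcli storage core adapter rescan --adapter=vmhba33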

Enable/Disable software iSCSI initiator

There are a lot of reasons you might want to use the software iSCSI initiator instead of a hardware one. For instance, you might want to maintain a simpler configuration where the NIC doesn't matter, or you just might not have dependent cards available. Either way, you can use the software initiator to move your bits. “But wait,” you say, “software will be much slower than hardware!” You would perhaps have been correct with older revisions of ESX. However, the software initiator is so fast at this point that there is not much difference between them. By default, though, the software initiator is not enabled; you will need to add it manually. To do this, go to the same place we were before, under Storage Adapters. While there, click on the add icon (+) and click Add Software iSCSI Adapter. Once you do that, it will show up in the adapter list and allow you to add targets and bind NICs to it just like the hardware iSCSI will. To disable it, just click on it and then, under Properties down below, click Disable.
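For reference, the same enable/disable can be done from the shell (the vmhba number the software adapter ends up with will vary by host):

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # Check whether it is enabled
    esxcli iscsi software get
    # See the adapter it created (it shows up as a new vmhba)
    esxcli iscsi adapter list
    # Disable it again
    esxcli iscsi software set --enabled=false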

Configure/Edit software iSCSI initiator settings
Configure iSCSI port binding
Enable/Configure/Disable iSCSI CHAP

I'm going to cover all of these at the same time since they flow together pretty well. To configure your iSCSI initiator settings, you navigate to the iSCSI adapter. Once there, all your options are down at the bottom under Adapter Details. If you want to edit one, you click on the tab that has the setting and click Edit. Your Network Port Binding is under there and is configured the same way we did before. The Targets tab is there as well. Finally, CHAP is something we haven't talked about yet, but it is there underneath the Properties tab, under Authentication. The type of adapter you have determines the type of CHAP available to you. Click on Edit under the Authentication section and you will get this window:

As you probably noticed, this is done at a per-storage-adapter level, so you can change it for different initiators. Keep in mind the CHAP credentials are sent in plain text, so the security is not really that great; it is better used to mask LUNs off from certain hosts.
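If you prefer to set CHAP from the shell, the esxcli iscsi auth namespace handles it. Treat the exact option names below as a sketch from memory; the adapter, user name, and secret are obviously placeholders:

    # Set unidirectional CHAP on a software or dependent adapter
    esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=iscsi-user --secret=SuperSecret1
    # Check what is currently configured
    esxcli iscsi adapter auth chap get --adapter=vmhba33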

We have already pretty much covered why we would use certain initiators over others, and also thin provisioning, so I will leave those be and sign off this post (which keeps getting more and more loquacious).

Objective 2.2 – Configure Network I/O Control

After the last objective this one should be a piece of cake. There are only 4 bullet points that we need to cover for this objective. They are:

  • Identify Network I/O Control Requirements
  • Identify Network I/O Control Capabilities
  • Enable / Disable Network I/O Control
  • Monitor Network I/O Control

So first off, what is Network I/O Control? NIOC is a tool that allows you to reserve and divide up bandwidth in a manner you see fit for the VMs you deem important. You can choose to reserve a certain amount of bandwidth, or a larger percentage of the network resources, for an important VM for when there is contention. You can only do this on a vDS. With vSphere 6, we get a new version of NIOC: NIOC v3. This new version of Network I/O Control allows us to reserve a specific amount of bandwidth for an individual VM. It also still uses the old standbys of reservations, limits, and shares. This works in conjunction with DRS, admission control, and HA to be sure that wherever the VM is moved to, it is able to maintain those characteristics. So let's get in a little deeper.

vSphere 6.0 is able to use both NIOC version 2 and version 3 at the same time (on separate distributed switches). One of the big differences is that in v2 you set up bandwidth for the VM at the physical adapter level. Version 3, on the other hand, lets you set bandwidth allocation at the level of the entire distributed switch. Version 2 is compatible with all versions from 5.1 to 6.0; version 3 is only compatible with vSphere 6.0. You can upgrade a distributed switch to version 6.0 without upgrading NIOC to v3.

Identify Network I/O Control Requirements

As mentioned before, you need at least vSphere 5.1 for NIOC v2 and vSphere 6.0 for NIOC v3. You also need a distributed switch. The rest of the requirements are what you would expect: you need a vCenter Server in order to manage the whole thing, and, rather importantly, you need to have a plan. You should know what you want to do with your traffic before you rush headlong into it, so you don't end up “redesigning” it 10 times.

Identify Network I/O Control Capabilities

Using NIOC you can control and shape traffic using shares, reservations, and limits. You can also allocate based on certain types of traffic. Using the built-in traffic types, you can adjust network bandwidth and priorities. The types of traffic are as follows:

  • Management
  • Fault Tolerance
  • iSCSI
  • NFS
  • Virtual SAN
  • vMotion
  • vSphere Replication
  • vSphere Data Protection Backup
  • Virtual Machine

So we keep mentioning shares, reservations, and limits. Let’s go and define these now so we know how to apply them.

  • Shares = this is a number from 1-100 reflecting the priority of a system traffic type against the other types active on the same physical adapter. For example, say you have three types of traffic: iSCSI, FT, and replication. You assign iSCSI and FT 100 shares each and replication 50 shares. If the link is saturated, it will give 40% of the link to iSCSI, 40% to FT, and 20% to replication (each type's shares divided by the 250 total).
  • Reservation = this is the guaranteed bandwidth on a single physical adapter, measured in Mbps. The total cannot exceed 75% of the bandwidth of the physical adapter with the smallest capacity. For example, if you have 2x 10Gbps NICs and 1x 1Gbps NIC, the maximum amount of bandwidth you can reserve is 750Mbps. If the traffic type doesn't use all of its reserved bandwidth, the host will free it up for other things to use – this does not, however, count toward allowing new VMs to be placed on that host. It is kept available in case the system actually does need it for the reserved type.
  • Limit = this is the maximum amount of bandwidth in Mbps or Gbps, that a system traffic type can consume on a single physical adapter.

So what has changed? The following functionality has been removed if you upgrade from 2 to 3:

  • All user defined network resource pools including associations between them and existing port groups.
  • Existing Associations between ports and user defined network resource pools. Version 3 doesn’t support overriding the resource allocation at a port level
  • CoS tagging of the traffic that is associated with a network resource pool. NIOC v3 doesn’t support marking traffic with CoS tags. In v2 you could apply a QoS tag (which would apply a CoS tag) signifying that one type of traffic is more important than the others. If you keep your NIOC v2 on the distributed switch you can still apply this.

Also be aware that changing a distributed switch from NIOCv2 to NIOCv3 is disruptive. Your ports will go down.

Another new thing in NIOCv3 is being able to configure bandwidth for individual virtual machines. You can apply this using a Network resource pool and allocation on the physical adapter that carries the traffic for the virtual machine.

Bandwidth reservation integrates tightly with admission control. A physical adapter must be able to supply the minimum bandwidth to the VM's network adapters, and the reservation for the new VM must be less than the free quota in the pool. If these conditions aren't met, the VM won't power on. Likewise, DRS will not move a VM unless the destination can satisfy the above, and DRS will migrate a VM in order to satisfy bandwidth reservations in certain situations.

Enable / Disable Network I/O Control

This is simple enough to do:

  1. Navigate to the distributed switch via Networking
  2. Right Click on the distributed switch and click on Edit Settings
  3. From the Network I/O Control drop down menu, select Enable
  4. Click OK.

To disable do the above but click, wait for it……Disable.

Monitor Network I/O Control

There are many ways to monitor your networking. You can go as deep as getting packet captures, or you can go as light as just checking out the performance graphs in the web client. This is all up to you of course. I will list a few ways and what to expect from them here.

  1. Packet Capture – VMware includes a packet capture tool, pktcap-uw. You can use this to output .pcap and .pcapng files and then use a tool like Wireshark to analyze the data (there is a quick example after this list)
  2. NetFlow – You can configure a distributed switch to send reports to a NetFlow collector. Version 5.1 and later support IPFIX (NetFlow version 10)
  3. Port Mirroring – This allows you to take one port and send all the data that flows across it to another. This also requires a distributed switch
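Here is a minimal pktcap-uw example, assuming vmnic0 is the uplink you care about; the output files can be copied off the host and opened in Wireshark:

    # Capture traffic on a physical uplink to a pcap file
    pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap
    # Or capture what a specific VMkernel port sees
    pktcap-uw --vmk vmk0 -o /tmp/vmk0.pcap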

 
