Objective 3.2: Configure Software Defined Storage

Back again! This time we are going to go over the relatively new VSAN. VMware Virtual SAN originally came out in 5.5 U1, but it has been radically overhauled for 6.0 (a massive jump from VSAN 1.0 straight to 6.0). So what are we going to go over and have to know for the exam?

  • Configure/Manage VMware Virtual SAN
  • Create/Modify VMware Virtual Volumes (VVOLs)
  • Configure Storage Policies
  • Enable/Disable Virtual SAN Fault Domains

Now this is not going to be an exhaustive guide to VSAN and its use, abilities, and administration. Cormac Hogan and Rawlinson Rivera already do that so well that there is no point. I have Cormac's blog linked to the right; he has more info than you can probably process. So we will concern ourselves with a high-level overview of the product and the objectives.

Here comes my 50,000-foot overview. VSAN is Software Defined Storage. What does this mean? While you still have physical drives and controllers, you are pooling them together and creating logical containers (virtual disks) through software and the VMkernel. You can set up VSAN as a hybrid or an all-flash cluster. In the hybrid approach, magnetic media is used as the capacity tier and flash is the cache. In an all-flash cluster, flash devices are used for both jobs.

When you set up the cluster, you can do it on a new cluster or add the feature to an existing cluster. When you do, VSAN takes the local disks and aggregates them into a single datastore available to all hosts in the VSAN cluster. You can later expand this by adding more disks, or additional hosts with disks, to the cluster. The cluster will run much better if all the hosts in it are configured as similarly as possible, just like your regular cluster. You can have hosts that are just compute resources, with no local disk groups contributed, and they can still use the VSAN datastore.

In order for a host to contribute its disks, it has to have at least one SSD and one spinning disk. Those disks form what is known as a disk group. You can have more than one disk group per host, but each one needs at least the above combination.

Virtual SAN manages data in the form of flexible data containers called objects, which is why VSAN is known as object-based storage. An object is a logical volume that has its data and metadata distributed across the cluster. There are the following types of objects:

  • VM Home Namespace = this is where the configuration files are stored, such as the .vmx, log files, VMDK descriptor files, snapshot delta descriptor files, and so on
  • VMDK = the .vmdk object stores the contents of the virtual machine's hard disk
  • VM Swap Object = this is created when the VM is powered on, just like normal
  • Snapshot Delta VMDKs = these are created when snapshots are taken of the VM; each delta is an object
  • Memory Object = these are created when the memory option is selected while taking a snapshot of the VM

Along with the objects, you have metadata that VSAN uses called a witness. This is a component that serves as a tiebreaker when a decision needs to be made regarding the availability of the surviving datastore components after a potential failure. There may be more than one witness, depending on your policy for the VM. Fortunately, a witness doesn't take up much space: approximately 2MB on the old VSAN 1.0 and 4MB for version 2.0/6.0.

Part of the larger overall picture is being able to apply policies granularly. You are able to specify, on a per-VM basis, how many copies of something you want, versus a RAID 1 setup where you have a blanket copy of everything regardless of its importance. SPBM (Storage Policy Based Management) allows you to define performance and availability in the form of such a policy. VSAN ensures that every VM has a policy, whether it is the default or one specific to that VM. For best results you should create and use your own, even if the requirements are the same as the default.
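
Incidentally, you can peek at the host-side defaults VSAN falls back on from the ESXi shell. This is just a read-only check; the exact output varies by build:

  esxcli vsan policy getdefault     # shows the default policy per object class (vdisk, vmnamespace, vmswap, etc.)

There is a matching setdefault command as well, but the web client is the saner place to build real policies.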

So for those of us who used and read about VSAN 1.0, how does the new version differ? Quite a lot. This part is going to be lifted from Cormac's site (just the highlights):

  1. Scalability – Because vSphere 6.0 can now support 64 hosts in a cluster, so can VSAN
  2. Scalability – Now supports 62TB VMDK
  3. New on-disk Format (v2) – This allows a lot more components per host to be supported. It leverages VirstoFS
  4. Support for All-Flash configuration
  5. Performance Improvement using the new Disk File System
  6. Availability improvements – You can separate racks of machines into Fault Domains
  7. New Re-Balance mechanism – rebalances components across disks, disk groups, and hosts
  8. Allowed to create your own Default VM Storage Policy
  9. Disk Evacuation granularity – You can evacuate a single disk now instead of a whole disk group
  10. Witnesses are now smarter – They can exercise more than a single vote instead of needing multiple witnesses
  11. Ability to light LEDs on disks for identification
  12. Ability to mark disks as SSD via UI
  13. VSAN supports being deployed on Routed networks
  14. Support of external disk enclosures.

As you can see this is a huge list of improvements. Now that we have a small background and explanation of the feature, let’s dig into the bullet points.

Configure/Manage VMware Virtual SAN

So first, as mentioned before, there are a few requirements that need to be met in order for you to be able to create and configure VSAN.

  • Cache = You need one SAS or SATA SSD or PCIe Flash Device that is at least 10% of the total storage capacity. They can’t be formatted with VMFS or any other file system
  • Virtual Machine Data Storage = For Hybrid group configurations, make sure you have at least one NL-SAS, SAS, or SATA magnetic drive (sorry PATA owners). For All Flash disk groups, make sure you have at least one SAS, SATA, or PCIe Flash Device
  • Storage Controller = One SAS or SATA Host Bus Adapter that is configured in pass-through or RAID 0 mode.
  • Memory = this depends on the number of disk groups and devices that are managed by the hypervisor. Each host should contain a minimum of 32GB of RAM to accommodate for the maximum 5 disk groups and max 7 capacity devices per group
  • CPU = VSAN doesn’t take more than about 10% CPU overhead
  • If booting from an SD or USB device, the device needs to be at least 4GB
  • Hosts = You must have a minimum of 3 hosts for a cluster
  • Network = 1Gb NICs for hybrid configurations, 10Gb for all-flash. Multicast must be enabled on the physical switches, and only IPv4 is supported at this time
  • Valid License for VSAN

Now that we have all those pesky requirements out of the way, let's get started on actually creating the VSAN cluster. The first thing we will need to do is create a VMkernel port for it. There is a new traffic type option as of 5.5 U1, which is Virtual SAN traffic. You can see it here:
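
If you would rather do this part from the ESXi shell, something along these lines works on 5.5/6.0 hosts. The port group name, vmk number, and IP are just examples, and this assumes a standard switch port group called VSAN already exists:

  esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VSAN
  esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
  esxcli vsan network ipv4 add -i vmk2     # tags the VMkernel port for Virtual SAN traffic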

After you are done, it will show up as enabled; you can check by looking here:

Now that that is done, you will need to enable VSAN on the cluster as well. This is done under the cluster settings, or when you create the cluster to begin with.

You have the option to automatically add the disks to the VSAN cluster; if you leave it in manual mode you will need to add the disks yourself, and new devices are not claimed when they are installed. After you create it, you can check its status on the Summary page.

You can also check on the individual disks and their health, and configure disk groups and Fault Domains, under Manage > Settings > Virtual SAN.
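
From the shell, a couple of read-only commands are handy for sanity checking; both exist on 5.5/6.0 hosts:

  esxcli vsan cluster get      # cluster membership, sub-cluster UUID, and this host's role (master/backup/agent)
  esxcli vsan storage list     # local devices claimed by VSAN and the disk group each belongs to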

Here is a shot from my EVO:RAIL with VSAN almost fully configured

The errors are because I don't have the VLANs fully configured for them to communicate yet. There is a lot more we could cover with VSAN, but I have neither the blog space nor the time. So moving on…

Create/Modify VMware Virtual Volumes (VVOLs)

First, a quick primer on VVols. What are these things called Virtual Volumes? Why do we want them when LUNs have served us well for so long? If you remember, one of the cool advantages of VSAN is the ability to assign policies on a per-VM basis. But VSAN is limited to only certain capabilities. What if we want more? Enter VVols. Using VVols and a storage array that supports them, you can apply any capability that array has on a per-VM basis. Stealing from a VMware blog for the definition, "VVols offer per-VM management of storage that helps deliver a software defined datacenter". So what does this all mean? In the past you have had SANs with certain capabilities, such as deduplication or a specific RAID type, and you needed a really good naming system or a DB somewhere to track which LUN was which. Now we have the ability to simply set a specific set of rules for the VM in a policy, and vSphere will find the storage matching that set of rules for us. Pretty nifty, huh?

So how do you create and modify these things now? The easiest way is to create a new datastore just like you would a regular VMFS or NFS.

  1. Select vCenter Inventory Lists > Datastores
  2. Click on the create new datastore icon
  3. Click on Placement for the datastore
  4. Click on VVol as the type

  5. Now put in the name you wish to give it, and also choose the Storage Container that is going to back it. (kind of like a LUN – you would have needed to add a Protocol Endpoint and Storage Container before getting to this point)
  6. Select the Hosts that are going to have access to it
  7. Finish

Kind of working this backwards, but how do you configure them? You can do the following four things:

  1. Register the Storage Provider for VVols = using VASA, you configure communication between the array and vSphere. Without this communication nothing in VVols will work (a quick way to verify it from the host follows this list)
  2. Create a Virtual Datastore = this gives you the VVol datastore that VMs are provisioned to
  3. Review and Manage Protocol Endpoints = a protocol endpoint is a logical I/O proxy the host uses to communicate with virtual volumes and the virtual disk files they encapsulate. Protocol endpoints are exported, along with their associated storage containers, by the VASA provider
  4. (Optional) If your host uses iSCSI-based transport to communicate with protocol endpoints representing a storage array, you can modify the default multipathing policy associated with them
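
Once the VASA provider is registered and the array side is set up, you can confirm the host actually sees everything from the shell. These esxcli namespaces are present on 6.0 hosts; the output obviously depends on your array:

  esxcli storage vvol vasaprovider list       # registered VASA provider(s) and their state
  esxcli storage vvol protocolendpoint list   # protocol endpoints the host has discovered
  esxcli storage vvol storagecontainer list   # storage containers backing your VVol datastores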

Configure Storage Policies

At the heart of all these changes is the storage policy. The storage policy is what enables all this wonderful magic to happen behind the scenes with you, the administrator, blissfully unaware. Let's go ahead and define it as VMware would like it defined: "A vSphere storage profile defines storage requirements for virtual machines and storage capabilities of storage providers. You use storage policies to manage the association between virtual machines and datastores."

Where is it found? On the home page in your web client, under Policies and Profiles. Anticlimactic, I know. Here is a picture of what you see when you click on it.

This gives you a list of all the profiles and policies associated with your environment. We are currently interested only in the storage policies, so let's click on that. Depending on what products you have set up, yours might look a little different.

You can have Storage policies based off one of the following:

  • Rules based on Storage-Specific Data Services = these are based on data services that entities such as VSAN and VVols can provide, for example deduplication
  • Rules based on Tags = these are tags you, as an administrator, associate with specific datastores. You can apply more than one per datastore

Now we dig in. First thing we are going to need to do is to make sure that storage policies are enabled for the resources we want to apply them to. We do that by clicking on the Enable button underneath storage policies

When enabled you will see the next screen look like this (with your own resource names in there of course)

We can go ahead and create a storage policy now and be able to apply it to our resources. When you click on Create New VM Storage Policy, you will be presented with this screen:

Go ahead and give it a name and optionally a description. On the next screen we will define the rules that are based on our capabilities

In this one I am creating one for a Thick provisioned LUN


Unfortunately none of my datastores are compatible. You can also configure policies based on tags you associate with your datastores.

Enable/Disable Virtual SAN Fault Domains

This is going to be a quick one, as I am a bit tired of this post already. In order to work with Fault Domains you will need to go to the VSAN cluster and then click on Manage and Settings. On the left-hand side you will see Fault Domains; click on it. You now have the ability to segregate hosts into specific fault domains. Click on the add (+) icon to create a fault domain and then add the hosts you want to it. You will end up with a screen like this:
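
For what it's worth, 6.0 hosts also expose a fault domain namespace in esxcli. I'm going from memory on the sub-commands, so run esxcli vsan faultdomain on your build and check --help before relying on it:

  esxcli vsan faultdomain get     # shows which fault domain, if any, this host currently belongs to

There is a matching set command to assign the host a fault domain name from the shell, but the cluster-level Fault Domains screen above is the normal place to do it.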

Onwards and Upwards to the next post!!


Objective 3.1: Manage vSphere Storage Virtualization

Wait, wait, wait… where did objective 2.2 go? I know… I didn't include it, since I have already gone over all the questions it asks in previous objectives. So moving on to storage.

So we are going to cover the following objective points.

  • Identify storage adapters and devices
  • Identify storage naming conventions
  • Identify hardware/dependent hardware/software iSCSI initiator requirements
  • Compare and contrast array thin provisioning and virtual disk thin provisioning
  • Describe zoning and LUN masking practices
  • Scan/Rescan storage
  • Configure FC/iSCSI LUNs as ESXi boot devices
  • Create an NFS share for use with vSphere
  • Enable/Configure/Disable vCenter Server storage filters
  • Configure/Edit hardware/dependent hardware initiators
  • Enable/Disable software iSCSI initiator
  • Configure/Edit software iSCSI initiator settings
  • Configure iSCSI port binding
  • Enable/Configure/Disable iSCSI CHAP
  • Determine use case for hardware/dependent hardware/software iSCSI initiator
  • Determine use case for and configure array thin provisioning

So let’s get started

Identify storage adapters and devices

Identifying storage adapters is easy. You have your own list to refer to: the Storage Adapters view. To navigate to it, do the following:

  1. Browse to the Host in the navigation pane
  2. Click on the Manage tab and click Storage
  3. Click Storage Adapters
    This is what you will see

As you can see, identification is relatively easy. Each adapter is assigned a vmhbaXX name. They are grouped under larger categories, and you are also given a description of the hardware, e.g. Broadcom iSCSI Adapter. You can find out a number of details about the device by looking down below and going through the tabs. As you can see, one of the tabs lists the devices that sit under that particular controller. Which brings us to our next item, storage devices.

Storage Devices is one selection down from Storage Adapters, so naturally you navigate to it the same way and then just click on Storage Devices instead. Now that we are seeing those, it's time to move on to naming conventions to understand why they are named the way they are.
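
You can pull the same information from the command line; both of these commands exist on any 5.x/6.x host:

  esxcli storage core adapter list    # every vmhba with its driver, link state, and description
  esxcli storage core device list     # every device (LUN or local disk) with its identifiers and details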

Identify Storage Naming Conventions

You are going to have multiple names for each type of storage and device. Depending on the type of storage, ESXi will use a different convention or method to name each device. The first type is the SCSI inquiry identifier. The host uses a SCSI INQUIRY command to query the device and uses the response to generate a unique name. These identifiers are unique, persistent, and will have one of the following formats:

  • naa.number = naa stands for Network Address Authority, and it is followed by a string of hex digits that identify the vendor, device, and LUN
  • t10.number = T10 is the technical committee tasked with SCSI storage interfaces and their standards (plus plenty of other disk standards as well)
  • eui.number = stands for Extended Unique Identifier

You also have the path-based identifier. This is created if the device doesn't provide the information needed to create the identifiers above. It looks like the following: mpx.vmhbaXX:C0:T0:L0, and it can be used just the same as the identifiers above. The C is for the channel, the T is for the target, and the L is for the LUN. It should be noted that this identifier type is neither unique nor persistent and could change with every reboot.

There is also a legacy identifier that is created. This is in the format vml.number

You can see these identifiers on the pages mentioned above and also at the command line by typing the following.

  • “esxcli storage core device list”
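
If you want to see the runtime (vmhbaXX:C#:T#:L#) names side by side with the device identifiers they map to, the path listing is useful too:

  esxcli storage core path list     # each path's runtime name plus the naa./t10./eui. device it leads to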

Identify hardware/dependent hardware/software iSCSI initiator requirements

You can use three types of iSCSI initiators with vSphere: independent hardware, dependent hardware, and software initiators. The differences are as follows:

  • Software iSCSI initiator = this is code built into the VMkernel. It allows your host to connect to iSCSI targets without having special hardware; you can use a standard NIC for this. It requires a VMkernel adapter, and it is able to use all CHAP levels
  • Dependent hardware iSCSI initiator = this device still depends on VMware for networking, iSCSI configuration, and management interfaces. This is basically iSCSI offloading. An example is the Broadcom 5709 NIC. It requires a VMkernel adapter (it will show up as both a NIC and a storage adapter) and is able to use all CHAP levels
  • Independent hardware iSCSI initiator = this device implements its own networking, iSCSI configuration, and management interfaces. An example is the QLogic QLA4052 adapter. It does not require a VMkernel adapter (it will show up as a storage adapter only) and it only supports the unidirectional CHAP and unidirectional-unless-prohibited-by-target CHAP levels

Compare and contrast array thin provisioning and virtual disk thin provisioning

You have two types of thin provisioning, and the biggest difference between them is where the provisioning happens. The array can thinly provision the LUN. In this case it presents the total logical size to the ESXi host, which may be more than the real physical capacity. If that is the case, there is really no way for your ESXi host to know when you are running out of space, which can obviously be a problem. Because of this, Storage APIs - Array Integration was created. Using this feature with a SAN that supports it, your hosts are aware of the underlying storage and are able to tell how your LUNs are configured. The requirements for this are simply ESXi 5.0 or later and a SAN that supports the Storage APIs for VMware.

Virtual disk thin provisioning is the same concept, but done for the virtual hard disk of the virtual machine. You are creating a disk and telling the VM it has more space than is actually allocated. Because of this, you will need to monitor the status of that disk in case the guest operating system starts trying to use that space.
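
As a quick illustration, this is how you would create a thin virtual disk from the ESXi shell; the path and size are just examples. The disk reports 10GB to the guest but only consumes space on the datastore as blocks are written:

  vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/testvm/testvm_1.vmdk   # new 10GB thin-provisioned disk
  du -h /vmfs/volumes/datastore1/testvm/testvm_1-flat.vmdk                  # actual space consumed so far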

Describe zoning and LUN masking practices

Zoning is a Fibre Channel concept meant to restrict which storage a server can see. A zone defines which HBAs (the cards in the server) can connect to which storage processors on the SAN. LUN masking, on the other hand, only allows certain hosts to see certain LUNs.

With ESXi hosts you want to use single-initiator zoning or single-initiator-single-target zoning; the latter is preferred. This can help prevent misconfigurations and access problems.
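
On the ESXi side, masking a LUN from a host is done with claim rules and the MASK_PATH plugin. A rough sketch follows; the rule number, path, and device ID are all examples, so double-check each command with --help before running it:

  esxcli storage core claimrule add -r 500 -t location -A vmhba33 -C 0 -T 0 -L 4 -P MASK_PATH   # mask LUN 4 on that path
  esxcli storage core claimrule load                                                            # load the new rule into the VMkernel
  esxcli storage core claiming reclaim -d naa.600a0b800029e8280000582a4a2c1a2b                  # re-run claiming on the device so the mask takes effect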

Rescan Storage

This one is pretty simple. In order to pick up new storage or see changes to existing storage, you may want to rescan your storage. You can do two different operations from the GUI client: 1) scan for new storage devices, and 2) scan for new VMFS volumes. The first will take longer than the second. You can also rescan from the command line with the following commands:

esxcli storage core adapter rescan --all     (rescans all adapters for new storage devices)
vmkfstools -V                                (rescans for new VMFS volumes)

I have included a picture of the web client with the rescan button circled in red.

Configure FC/iSCSI LUNs as ESXi boot devices

ESXi supports booting from Fibre Channel or FCoE LUNs, as well as iSCSI. First we will go over Fibre Channel.

Why would you want to do this in the first place? There are a number of reasons. Among them, you remove the need to have storage inside the servers. This makes them cheaper and less prone to failure, as hard drives are the most likely component to fail. It is also easier to replace the servers: one server could die, you drop a new one in its place, change the zoning, and away you go. You also access the boot volume through multiple paths, whereas if it were local you would generally have a single path, and if that fails you have no backup.

You do have to be aware of the requirements, though. The biggest one is to have a separate boot LUN for each server; you can't use one for all servers. You also can't multipath to an active-passive array. OK, so how do you do it? On an FC setup:

  1. Configure the array zoning and also create the LUNs and assign them to the proper servers
  2. Then, using the FC card's BIOS, you will need to point the card to the boot LUN (on the iSCSI side this is also where any CHAP credentials would go)
  3. Boot to install media and install to the LUN

On iSCSI, the setup is a little different depending on what kind of initiator you are using. If you are using an independent hardware iSCSI initiator, you will need to go into the card's BIOS to configure booting from the SAN. Otherwise, with a software or dependent hardware initiator, you will need to use a network adapter that supports iBFT. Good recommendations from VMware include:

  1. Follow Storage Vendor Recommendations (yes I got a sensible chuckle out of that too)
  2. Use Static IPs to reduce the chances of DHCP conflicts
  3. Use different LUNs for VMFS and boot partitions
  4. Configure proper ACLs. Make sure the only machine able to see the boot LUN is that machine
  5. Configure a diagnostic partition – with independent you can set this up on the boot LUN. If iBFT, you cannot

Create an NFS share for use with vSphere

Back in 5.5 and before, you were restricted to using NFS v3. Starting with vSphere 6, you can now use NFS 4.1. VMware has some recommendations for you about this as well.

  1. Make sure the NFS servers you use are listed in the HCL
  2. Follow recommendations of your storage vendor
  3. You can export a share as v3 or v4.1 but you can’t do both
  4. Ensure it’s exported using NFS over TCP/IP
  5. Ensure you have root access to the volume
  6. If you are exporting a read-only share, make sure it is consistent. Export it as RO and make sure when you add it to the ESXi host, you add it as Read Only.

To create a share, do the following (a CLI alternative follows the list):

  1. On each host that is going to access the storage, you will need to create a VMkernel Network port for NFS traffic
  2. If you are going to use Kerberos authentication, make sure your host is setup for it
  3. In the Web Client navigator, select vCenter Inventory Lists and then Datastores
  4. Click the Create a New Datastore icon
  5. Select Placement for the datastore
  6. Type the datastore name
  7. Select NFS as the datastore type
  8. Specify an NFS version (3 or 4.1)
  9. Type the server name or IP address and the mount point folder name (or multiple IPs if using v4.1)
  10. Select Mount NFS read only – if you are exporting it that way
  11. You can select which hosts that will mount it
  12. Click Finish
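
The web client flow above is the normal way, but for completeness you can also mount NFS straight from the host's shell. The server names and paths here are made up, and the nfs41 namespace (and its flag spelling) only applies to 6.0 hosts, so check --help there:

  esxcli storage nfs add --host=nas01.lab.local --share=/vol/ds01 --volume-name=nfs_ds01            # NFS v3 mount
  esxcli storage nfs41 add --hosts=10.0.0.21,10.0.0.22 --share=/vol/ds02 --volume-name=nfs41_ds02   # NFS 4.1 with multiple server IPs
  esxcli storage nfs list                                                                           # verify what is mounted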




Enable/Configure/Disable vCenter Server storage filters

When you look at your storage, and when you add more or perform other similar operations, vCenter by default applies a set of storage filters. Why? Well, there are four filters. The filters, and the explanation for each, are as follows:

  1. config.vpxd.filter.vmfsFilter = this filters out LUNs that are already used by a VMFS datastore on any host managed by a vCenter server. This is to keep you from reformatting them by mistake
  2. config.vpxd.filter.rdmFilter = this filters out LUNs already referenced by a RDM on any host managed by a vCenter server. Again this is protection so that you don’t reformat by mistake
  3. config.vpxd.filter.SameHostAndTransportsFilter = this filters out LUNs ineligible for use due to storage type or host incompatibility. For instance, you can't add an extent on an iSCSI LUN to a datastore that lives on a Fibre Channel LUN
  4. config.vpxd.filter.hostRescanFilter = this prompts you to rescan anytime you perform datastore management operations. This tries to make sure you maintain a consistent view of your storage

And in order to turn them off, you will need to do it on the vCenter Server (makes sense, huh?). You will need to navigate to the vCenter object and then to the Manage tab (Settings > Advanced Settings).

You will then need to add these settings, since they are not in there by default for you to change willy-nilly. So click on Edit, type in the appropriate filter key, and enter false for the value. Like so:
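
In other words, the entries you end up adding look like the following. Only add the ones you actually want to disable; the default behavior is as if they were all set to true:

  config.vpxd.filter.vmfsFilter                    false
  config.vpxd.filter.rdmFilter                     false
  config.vpxd.filter.SameHostAndTransportsFilter   false
  config.vpxd.filter.hostRescanFilter              false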

Configure/Edit hardware/dependent hardware initiators

A dependent hardware iSCSI adapter still uses the networking, iSCSI configuration, and management interfaces provided by VMware. The device presents two components to the host: a NIC and an iSCSI engine. The iSCSI engine shows up under Storage Adapters (as a vmhba). In order for it to work, though, you still need to create a VMkernel port for it and bind that port to the adapter's physical network port. Here is a picture of how it looks underneath the storage adapters for a host.

There are a few things to be aware of while using a dependent initiator.

  1. When you are using the TCP offload engine, you may see little or no activity on the NIC associated with the adapter. This is because the host passes all the iSCSI traffic to the engine, which bypasses the regular network stack
  2. Because the TCP offload engine has to reassemble packets in hardware, and there is a finite amount of buffer space, you should enable flow control in order to better manage the traffic (pause frames, anyone?)
  3. Dependent adapters support IPv4 and IPv6

To setup and configure them you will need to do the following:

  1. You can change the alias or the IQN if you want by going to the host and Manage > Storage > Storage Adapters, highlighting the adapter, and then clicking Edit
  2. I am assuming you have already created a VMkernel port by this point. The next thing to do is bind the adapter to that VMkernel port
  3. You do this by clicking on the iSCSI adapter in the list and then clicking on Network Port Binding below
  4. Now click on the add icon to associate the NIC, and it will give you this window
  5. Click on the VMkernel port you created and click OK
  6. Go back to the Targets section now and add your iSCSI target. Then rescan and voila. (The esxcli equivalents are sketched just below.)
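
If you prefer the shell, the port binding and target discovery steps look roughly like this. vmhba33, vmk1, and the target address are placeholders for your own values:

  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1                                  # bind the VMkernel port to the iSCSI adapter
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.50:3260     # add a dynamic (send targets) target
  esxcli storage core adapter rescan --adapter=vmhba33                                         # rescan just that adapter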

Enable/Disable software iSCSI initiator

For a lot of reasons you might want to use the software iSCSI initiator instead of a hardware one. For instance, you might want to maintain a simpler configuration where the NIC doesn't matter, or you just might not have dependent-capable cards available. Either way, you can use the software initiator to move your bits. "But wait," you say, "software will be much slower than hardware!" You would perhaps have been correct with older revisions of ESX; however, the software initiator is so fast at this point that there is not much difference between them. By default, though, the software initiator is not enabled, so you will need to add it manually. To do this you go to the same place we were before, under Storage Adapters. While there, click on the add icon (+) and click on Add Software iSCSI adapter. Once you do that, it will show up in the adapter list and allow you to add targets and bind NICs to it just like the hardware iSCSI adapters do. To disable it, just click on it and then, under Properties down below, click on Disable.
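
The same toggle exists in esxcli; both of these commands are present on 5.x/6.x hosts:

  esxcli iscsi software set --enabled=true    # enable the software iSCSI initiator (use false to disable it)
  esxcli iscsi software get                   # confirm whether it is currently enabled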

Configure/Edit software iSCSI initiator settings
Configure iSCSI port binding
Enable/Configure/Disable iSCSI CHAP

I'm going to cover all of these at the same time, since they flow together pretty well. To configure your iSCSI initiator settings, you navigate to the iSCSI adapter. Once there, all your options are down at the bottom under Adapter Details. If you want to edit one, click on the tab that has the setting and click Edit. Network Port Binding is under there and is configured the same way we did before, and Targets is there as well. Finally, CHAP is something we haven't talked about yet, but it is there under the Properties tab, under Authentication. The type of adapter you have determines which CHAP options are available to you. Click on Edit under the Authentication section and you will get this window:

As you probably noticed, this is done at a per-storage-adapter level, so you can set it differently for different initiators. Keep in mind that none of this iSCSI traffic is encrypted, so the security is not really that great; CHAP is better used to keep LUNs away from hosts that shouldn't see them.
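
CHAP can also be inspected and set per adapter from the shell. The auth chap namespace exists under esxcli iscsi adapter, but I'm quoting the flag names from memory, so run the command with --help before using it in anger:

  esxcli iscsi adapter auth chap get --adapter=vmhba33     # show the current CHAP settings for that adapter
  esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=chapuser --secret=chapsecret
  # direction and level accept the same choices you see in the web client dialog (uni/mutual, prohibited through required)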

We have already pretty much covered why we would use certain initiators over others and also thin provisioning. So I will leave those be and sign off this post. (Which keep getting more and more loquacious)

Objective 2.2 – Configure Network I/O Control

After the last objective this one should be a piece of cake. There are only 4 bullet points that we need to cover for this objective. They are:

  • Identify Network I/O Control Requirements
  • Identify Network I/O Control Capabilities
  • Enable / Disable Network I/O Control
  • Monitor Network I/O Control

So first off, what is Network I/O Control? NIOC is a tool that allows you to reserve and divide up bandwidth in whatever manner you see fit for the VMs you deem important. You can choose to reserve a certain amount of bandwidth, or a larger share of the network resources, for an important VM for when there is contention. You can only do this on a vDS. With vSphere 6 we get a new version of NIOC, NIOC v3. This new version of Network I/O Control allows us to reserve a specific amount of bandwidth for an individual VM. It also still uses the old standbys of reservations, limits, and shares. This works in conjunction with DRS, admission control, and HA to make sure that wherever the VM is moved, it is able to maintain those characteristics. So let's get in a little deeper.

vSphere 6.0 is able to use both NIOC version 2 and version 3 at the same time. One of the big differences is in v2 you are setting up bandwidth for the VM at the physical adapter level. Version 3, on the other hand, allows you to go in deeper and set bandwidth allocation at the entire Distributed switch level. Version 2 is compatible with all versions from 5.1 to 6.0. Version 3 though, is only compatible with vSphere 6.0. You can upgrade a Distributed Switch to version 6.0 without upgrading NIOC to v3.

Identify NIOC Control Requirements

As mentioned before, you need at least vSphere 5.1 for NIOC v2 and vSphere 6.0 for NIOC v3. You also need a Distributed Switch. The rest of the requirements are what you would expect: you need a vCenter Server to manage the whole thing and, rather importantly, you need to have a plan. You should know what you want to do with your traffic before you rush headlong into it, so you don't end up "redesigning" it 10 times.

Identify NIOC control Capabilities

Using NIOC you can control and shape traffic using shares, reservations, and limits. You can also specify based on certain types of traffic. Using the built in types of traffic, you can adjust network bandwidth and adjust priorities. The types of traffic are as follows:

  • Management
  • Fault Tolerance
  • iSCSI
  • NFS
  • Virtual SAN
  • vMotion
  • vSphere Replication
  • vSphere Data Protection Backup
  • Virtual Machine

So we keep mentioning shares, reservations, and limits. Let’s go and define these now so we know how to apply them.

  • Shares = this is a number from 1-100 to reflect the priority of the system traffic type, against the other types active on the same physical adapter. For example, you have three types of traffic, ISCSI, FT, and Replication. You assign ISCSI and FT 100 shares and Replication 50 shares. If the link is saturated it will give 40% of the link to ISCSI and 40% to FT and 20% to Replication.
  • Reservation = This is the guaranteed bandwidth on a single physical adapter measured in Mbps. This cannot exceed 75% of the bandwidth that the physical adapter with the smallest bandwidth can provide. For example, if you have 2x 10Gbps NICs and 1x 1Gbps NIC the max amount of bandwidth you can reserve is 750Mbps. If the network type doesn’t use all of this bandwidth, the host will free it up for other things to use it – This does not include allowing new VMs to be placed on that host however. This is just in case the system actually does need it for the reserved type.
  • Limit = this is the maximum amount of bandwidth in Mbps or Gbps, that a system traffic type can consume on a single physical adapter.

So what has changed? The following functionality has been removed if you upgrade from 2 to 3:

  • All user defined network resource pools including associations between them and existing port groups.
  • Existing Associations between ports and user defined network resource pools. Version 3 doesn’t support overriding the resource allocation at a port level
  • CoS tagging of the traffic that is associated with a network resource pool. NIOC v3 doesn’t support marking traffic with CoS tags. In v2 you could apply a QoS tag (which would apply a CoS tag) signifying that one type of traffic is more important than the others. If you keep your NIOC v2 on the distributed switch you can still apply this.

Also be aware that changing a distributed switch from NIOCv2 to NIOCv3 is disruptive. Your ports will go down.

Another new thing in NIOCv3 is being able to configure bandwidth for individual virtual machines. You can apply this using a Network resource pool and allocation on the physical adapter that carries the traffic for the virtual machine.

Bandwidth reservation integrates tightly with admission control. A physical adapter must be able to supply the minimum bandwidth to the VM's network adapters, and the reservation for the new VM must be less than the free quota in the pool. If these conditions aren't met, the VM won't power on. Likewise, DRS will not move a VM unless it can satisfy the above, and it will migrate a VM in order to satisfy bandwidth reservations in certain situations.

Enable / Disable Network I/O Control

This is simple enough to do:

  1. Navigate to the distributed switch via Networking
  2. Right Click on the distributed switch and click on Edit Settings
  3. From the Network I/O Control drop down menu, select Enable
  4. Click OK.

To disable, do the above but select, wait for it… Disable.

Monitor Network I/O Control

There are many ways to monitor your networking. You can go as deep as getting packet captures, or you can go as light as just checking out the performance graphs in the web client. This is all up to you of course. I will list a few ways and what to expect from them here.

  1. Packet Capture – VMware includes a packet capture tool, pktcap-uw, with ESXi. You can use it to output .pcap and .pcapng files and then analyze the data with a tool like Wireshark (a sample invocation follows this list)
  2. NetFlow – You can configure a distributed switch to send reports to a NetFlow collector. Version 5.1 and later support IPFIX (NetFlow version 10)
  3. Port Mirroring – This allows you to take one port and send all the data that flows across it to another. This also requires a distributed switch
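
Here is what a basic pktcap-uw capture looks like; vmnic0, vmk0, and the output paths are just examples:

  pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap    # capture frames on a physical uplink to a pcap file
  pktcap-uw --vmk vmk0 -o /tmp/vmk0.pcap           # or capture on a VMkernel interface instead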

 

Objective 2.1 Configure Advanced Policies/Features and Verify Network Virtualization Implementation (Part 2)

Wrapping up this objective we are going to cover the following:

  • Configure LACP on Uplink Port Groups
  • Describe vDS Security Policies / Settings
  • Configure dvPort group blocking policies
  • Configure Load Balancing and failover policies
  • Configure VLAN / PVLAN settings
  • Configure traffic shaping policies
  • Enable TCP Segmentation Offload Support for a Virtual Machine
  • Enable Jumbo Frames support on appropriate components
  • Determine appropriate VLAN configuration for a vSphere implementation

Configure LACP on Uplink Port Groups

You most likely already know what LACP is; for those of you that don't, however, we will go over a brief definition. LACP stands for Link Aggregation Control Protocol and is part of the IEEE 802.3ad specification. This protocol allows you to take multiple physical links and bundle them into a single logical link. "But wait!" you might say, isn't that basically what load balancing does? Not exactly. With plain load balancing the links are still separate links as far as the physical switch is concerned, and any one data stream never gets more than the bandwidth of a single link; with 2x1Gb connections you are only ever using 1Gb of speed for any one stream. LACP, also known in some circles as EtherChannel or bonding, or even occasionally trunking (I don't like using trunking because it can mean a number of things), on the other hand, negotiates the bundle with the physical switch and hashes traffic across all the links in the group, so the full aggregate bandwidth of the group is usable, even though each individual stream still rides a single link.

LACP is only supported on a vDS, and you must configure your uplink port group a specific way. There are also a couple of other restrictions: LACP does not support port mirroring, does not exist in host profiles, and you can't set it up between two nested ESXi hosts. One other important thing to note: although under ESXi 5.1 you could only use LACP with IP hash load balancing, starting with 5.5 all of the load balancing methods are supported. Now, without further ado, let's see how we create one.

  1. Go ahead to the Networking view from the Home screen
  2. Click on the vDS you are going to add the LAG to
  3. Then click on Manage and then LACP as in the picture
  4. Click on the plus symbol to add one (+)
  5. On the new screen that pops up now you will need to enter a name for the LAG
  6. You will also need to set up how many ports will be in it (these are going to be associated with your physical NICs) and the mode of the LACP (Active or Passive)
  7. You also have the ability to setup the Load balancing mode and if you need to attach to a VLAN or trunk it if there will be multiple VLANs going over this link – Here is the picture
  8. When all is said and done, it will look like this (a couple of shell commands for checking LAG status follow this list)
  9. In order to delete the LAG, highlight the one you wish to get rid of and click the red ‘X’
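
Once a LAG is up, you can also check its state from a host. These commands are there on 5.5/6.0 hosts with an LACP-enabled vDS, though the output format varies by build:

  esxcli network vswitch dvs vmware lacp status get    # negotiated state of each LAG and its member uplinks
  esxcli network vswitch dvs vmware lacp config get    # the LACP configuration pushed down from the vDS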

Describe vDS Security Policies / Settings

vDS Security Policies were already covered in a previous blog post, so I won’t go too far in depth of them here. But a basic listing is as follows:

  1. Promiscuous Mode – when set to Accept, a port is allowed to see all frames passing through the vDS that are permitted by the VLANs specified
  2. MAC Address Changes – if set to Reject and the guest changes its effective MAC address to something that doesn't match what is in the .vmx file, inbound frames to that address are dropped
  3. Forged Transmits – if set to Reject, any outbound frame with a source MAC that doesn't match the adapter's MAC is dropped

Configure dvPort group blocking policies

dvPort group blocking is the ability to shut down all the ports on a port group. You can also block all traffic on a single port of a dvPort group. Why would you want to do this? In my opinion it is meant for when a VM may have a virus on it and you need to shut it down quickly, or for troubleshooting, or for testing purposes. This will obviously disrupt network flow to whatever you apply the policy to. I won't go too far into it since it's not really a difficult concept: at the port group level, right-click on the port group you want to block, then click on Settings, go to Miscellaneous, set the drop-down for Block All Ports to Yes, and then click OK. Here is your picture.

You can also navigate to an individual port, right-click on that, go to Settings, and block just that port.

Configure Load Balancing and failover policies

On the load balancing menu, we can choose from the following wonderful items:

  1. Route based on the originating virtual port – VMware maps each virtual port to a physical network card, and traffic from that virtual machine will always be forwarded to that physical card unless the adapter fails. Traffic for that virtual machine will also be received on that same physical card.
  2. Route based on IP Hash – This will take the source and destination IP address of each packet sent from the VM and associate that with an uplink. This creates CPU overhead.
  3. Route based on Source MAC Hash – In this policy the virtual machine’s MAC address will be used to map to a physical uplink. Traffic once again will use that same uplink for incoming and outgoing traffic unless something goes kaboom.
  4. Route based on physical NIC load – This is only available on vDS. What happens here is that if the uplink being used is at a load of 75% for more than 30 seconds it will move some of the traffic over to another uplink that has available bandwidth.

These can be accessed at the switch level or at the port group level (on a vDS in the web client you configure them at the port group level). You can access the settings on a standard switch and choose your load balancing policy there, or on the port group. Here is the needed picture on the vDS side:

For Failover Policies you have the following options, Network Failure Detection, Notify Switches, Failback, and you also have the ability to choose your Active, Standby, and Unused uplinks. We will go over each of those to get a good description of what they are.

  1. Network Failure Detection – you have the option of Link Status Only or Beacon Probing. Link Status just relies on what the physical NIC reports to ESXi; this can detect failures such as removed cables and physical switch power failures. Beacon Probing sends out beacon probes and receives them back on the other NICs, and uses that information to determine link failure. NICs must be in either Active/Standby or Active/Active mode, not Unused
  2. Notify Switches – All this option does is notify the physical switch if there is a failover. This allows for faster convergence on the switch when it has to switch traffic to a different uplink
  3. Failback – This option determines whether a NIC is allowed to return to active status after recovering from a failure. If set to Yes, the adapter that had been filling in returns to standby and the recovered NIC goes active again
  4. Failover Order – determines in what order the NICs will failover, and if they are used at all

Configure VLAN / PVLAN settings

VLANs and PVLANs are both tools for doing the same thing: segregating your network. Using them, you can separate your network into multiple pieces, just like partitioning your hard drive. And just like chopping your hard drive up into multiple pieces, you run into the same limitation: you have not increased the size or speed of the underlying structure. You still have one physical hard drive to serve your I/O needs. Likewise, with VLANs you have sectioned your network into multiple pieces, but you have not increased your bandwidth or made it any faster. So make sure when you use these tools that you are using them for the proper purpose. VLAN tagging can be applied in multiple places. The three places are as follows:

  1. External Switch Tagging – all the tagging is performed by the physical switch; the ESXi host is oblivious to what is going on
  2. Virtual Switch Tagging (VST) – the tagging is done by the virtual switch before traffic leaves the host. You set the VLAN ID on the port group
  3. Virtual Guest Tagging (VGT) – this is where the vNIC inside the VM does the tagging. The port group needs a VLAN ID of 4095 to set it to trunking

OK, so that covers VLANs; what are PVLANs? PVLANs are an extension of VLANs. They require a physical switch that supports them and can only reside on a vDS. They are used to further segment the network. That seems a bit redundant, but hang in with me as I go into the types and it will perhaps explain a bit more why you might want to use them.

  1. Primary PVLAN – This is the original VLAN that is being divided. All other groups exist in the secondary. The only group in the primary PVLAN is the Promiscuous
  2. Secondary PVLAN – This only exists inside the primary. Each secondary has a specific PVLAN id associated with it and each packet traveling through it is tagged with an ID like it were a normal VLAN. The physical switch associates the behavior depending on the VLAN ID found in each packet
  3. Promiscuous PVLAN – exists in the Primary and can communicate with any of the secondary PVLANs and the outside network. Routers are typically placed here to route traffic
  4. Isolated – This is a type of secondary PVLAN that can only send packets to and from the promiscuous PVLAN. They can’t send packets to any other computers in the Isolated, or other PVLANs
  5. Community – this is the last sub-type of secondary PVLAN. VMs in a community PVLAN can communicate with any other virtual machine in the same community PVLAN and also with the promiscuous PVLAN

Alright we got the wall of text out of the way. So how do we configure these? Well to configure VLANs on a standard switch we will need to go to the host that we want to change the networking for and then go to manage and networking. Then we will click on the pencil to edit the port group we want to add VLANs to. When we do we get this screen here:

We can type directly in the VLAN ID box and away we go. We can also set it to trunk (4095) if we are going to give the VLAN duty to the virtual machine NIC.

For Distributed switches it is roughly the same way just where we need to access the port group is a little different. We need to navigate to the Distributed switch and then click on the port group that we need to tag. Then click on manage, then settings and click edit. This Window will now manifest itself:

And since we are already here, you can also set the PVLANs. By clicking on the VLAN type you can get the additional types. The PVLANs must be configured on the Distributed Switch itself first, however. You do that by right-clicking on the Distributed Switch and then clicking on Edit Private VLAN settings. Here is that picture:

As you can see above, you can choose the VLAN ID and also the IDs of the secondary PVLANs. You can also choose what type go to which ID. After this is done, you can now associate the port group with the VLAN, here is that picture:

The VLAN ID drop-down now lets you choose which PVLAN you are associating with the port group.

Configure traffic shaping policies

Traffic shaping policies are a fancy way of saying we are going to direct traffic to do our bidding. Depending on the switch you are working on, you have the ability to work with ingress (incoming) traffic only, or with both ingress and egress (outgoing) traffic. I shouldn't need to tell you at this point which switch has the expanded capability.

On Standard switches you can work with it on the switch, or the port group. To get there you would just need to edit the settings for either. Obligatory picture inserted here:

It gives you the options for Average Bandwidth, Peak Bandwidth, and Burst Size. I will first give you the really dry definitions of each and then try to simplify it a little bit.

  1. Average Bandwidth – the number of bits per second to allow across a port, averaged over time
  2. Peak Bandwidth – the number of bits per second to allow across a port when it is sending a burst of traffic. This is not allowed to be smaller than the average
  3. Burst Size – the maximum number of bytes to allow in a burst. So if a port needs more bandwidth than specified by the average, it may be allowed to temporarily transmit data at a higher speed for the amount you allow

To put this in other words, you are restricting a port to your average bandwidth unless you have set a higher peak bandwidth. If you have, the virtual machine is allowed to hit the peak bandwidth until it has burned through the number of bytes allowed by the burst size. And now for the picture of the dVS port group traffic shaping:

You can find this by navigating to the dVS under Networking and then choosing the dvPort Group you are interested in applying this to and editing the settings. You notice on the above that you have both incoming and outgoing traffic.

Enable TCP Segmentation Offload support for a virtual machine

What is TCP Segmentation Offload? In normal TCP operation on your machine, the CPU takes large data chunks and splits them into TCP segments. This is one more job for the CPU to do, and it can add up over time. If TSO is enabled on the host and along the transmission path, however, this job is handed off to the NIC, which frees up CPU cycles to work on more important things. Obviously your NICs will need to support this technology. By default, VMware has it turned on if your NICs support it. Occasionally you may want to toggle it. In order to do that, go to Manage for the host, then click on Advanced System Settings and set the Net.UseHwTSO parameter to 1 to enable or 0 to disable.
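
The same toggle is reachable through esxcli. The option name below follows the advanced setting above; if it doesn't resolve on your build, browse the advanced settings list for the exact TSO spelling:

  esxcli system settings advanced list -o /Net/UseHwTSO       # show the current value and description
  esxcli system settings advanced set -o /Net/UseHwTSO -i 0   # 0 disables TSO, 1 re-enables it; a host reboot is the safe way to make it stick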

Enable Jumbo Frames support on appropriate components

A very quick definition of jumbo frames: any MTU larger than 1500. Yes, that means an MTU of 1501 is technically a jumbo frame. Why do we want jumbo frames? When we increase the size of each frame we send, the CPU has fewer of them to deal with for the same amount of data. This of course allows our CPU to give more attention to our VMs. This is a good thing.

There are a few things we need to make sure of when we enable jumbo frames. We can't just have them enabled on one particular device; we need them enabled end-to-end on every device or it won't work. You will also run into problems with things like WAN accelerators and other such devices, because a lot of them like to fragment the packets. You will also need to know the particular settings for your network devices and storage devices. Occasionally some of them will need a larger frame size configured than what you are pushing through ESXi in order to accommodate the frame. For example, in general you would set the MTU on a Force10 switch to a 12K frame size while setting your ESXi host to a 9K frame size, to accommodate the overhead on the frames.

OK, so where can we enable them on the ESXi host? We can actually enable them in three places: on a switch (standard or distributed), on a VMkernel adapter, and on a virtual machine's NIC. So let's start posting pictures.

Distributed Switch
1. Navigate to the Networking and click on the Distributed Switch you want to modify
2. Right Click on the switch and click on settings and then Edit Settings
3. Click on Advanced and change MTU to size desired

Standard Switch
1. From the Home screen click on Hosts and Clusters and then navigate to the host you want to modify
2. Click on the Manage and then Networking sub-tab. Then click on the Virtual switches on the left
3. In the middle, click on the switch you wish to modify and then click on the pencil for it

4. Change MTU as desired

VMkernel Adapter
1. From where we were just a minute ago, take a small step down to VMkernel adapters (Hosts&Clusters ->Host->Manage->Networking)
2. Click on the VMkernel adapter and then click on the pencil
3. On the screen that pops up, choose NIC settings and change to the desired MTU
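
For the standard switch and VMkernel adapter cases, the equivalent esxcli commands are below (vSwitch0, vmk1, and the IP are examples; the distributed switch MTU is still set through the web client as above):

  esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000    # standard switch MTU
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000          # VMkernel adapter MTU
  vmkping -d -s 8972 10.0.0.20    # quick end-to-end test: don't fragment, 9000 minus IP/ICMP headers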

Virtual Machine
1. You will need to make sure that you have a VMXNET2 or VMXNET3 adapter set on the VM.
2. You will need to set the MTU inside the guest OS.
3. If you currently have an E1000 adapter, you will need to note its MAC address, create a new VMXNET3 adapter with that MAC, and disable or remove the old one.

Determine appropriate VLAN configuration for a vSphere implementation

A lot of this is going to be based on your existing configuration (if you have one), and your network administrator will need to be involved. You know what the purpose of VLANs and PVLANs is and should be able to figure out if they are needed. Is there too much broadcast traffic, so you need to create a separate broadcast domain to cut down on the chatter? Then sure, but keep in mind their limitations as well: you are not increasing the bandwidth or capacity of your network.


Objective 2.1 Configure Advanced Policies/Features and Verify Network Virtualization Implementation (Part 1)

Welcome once again. We are going to go over the following points under this objective.

  • Identify vSphere Distributed Switch capabilities
  • Create / Delete a vSphere Distributed Switch
  • Add / Remove ESXi hosts from a vSphere Distributed Switch
  • Add / Configure / Remove dvPort Groups
  • Add /Remove uplink adapters to dvUplink groups
  • Configure vSphere Distributed Switch general and dvPort group Settings
  • Create / Configure / Remove virtual adapters
  • Migrate virtual machines to/from a vSphere Distributed Switch
  • Configure LACP on Uplink Port Groups
  • Describe vDS Security Policies / Settings
  • Configure dvPort group blocking policies
  • Configure Load Balancing and failover policies
  • Configure VLAN / PVLAN settings
  • Configure traffic shaping policies
  • Enable TCP Segmentation Offload Support for a Virtual Machine
  • Enable Jumbo Frames support on appropriate components
  • Determine appropriate VLAN configuration for a vSphere implementation

So most of what we are going to go over is going to be pictures (yay!!). Most of the above will stick better with you if you go over it a few times in the client; I know it does for me. Following along with my screenshots should give you a better and faster experience. So without further ado,

Identify vSphere Distributed Switch Capabilities

So I will first bore you with the long winded explanation of what a vDS is. With a standard switch, both the management plane and the data plane exist together. You have to control the configuration on every host individually. The Distributed switch on the other hand will take the management plane and the data plane and separate them. What does this mean for you? It means that you can create the configuration just once, and push it down to every host that you have attached to that switch. The data plane still exists on each host. This piece is called a host proxy switch.

The Distributed Switch is made up of two abstractions that you use to create your configuration. These are:

  • Uplink Port Group: This maps to the physical connections on each host. You create the number of uplinks that you want each host to have. For example, if you create 2 uplinks in this group, you can map 2 physical NICs on each host to the Distributed Switch. You can set failover and load balancing on this group and have it apply to all the hosts.
  • Distributed Port Group: This is to provide your network connectivity to your VMs. You can configure teaming, load balancing, failover, VLAN, security, traffic shaping, and more on them. These will get pushed to every host that is part of the Distributed Switch

So as far as the abilities of a vDS vs a standard switch, here is a quick list of things that vDS can do.

  • Inbound Traffic Shaping= this allows you to throttle bandwidth coming into the switch.
  • VM Port Blocking= You can block VM ports in case of viruses or troubleshooting
  • PVLANS= You can use these to further segregate your traffic and increase security
  • Load-Based Teaming= An additional load balancing that works off the amount of traffic a queue is sending
  • Central Management= As mentioned before you can create the config once and push it to all attached hosts
  • Per Port Policy Settings= You can override policies at a port level giving you fine grained control
  • Port State Monitoring= Each port can be monitored separate from other ports
  • LLDP= Supports Link Layer Discovery Protocol
  • Network IO Control= Allows you the ability to set priority on port groups and now VMs even reserving bandwidth per VM
  • NetFlow= Used for troubleshooting, grabs a configurable number of samples of network traffic for monitoring
  • LACP= The ability to aggregate links together into a single link (must be used in conjunction with the physical switch)
  • Backing Up and Restoring of Network Configuration= you can save and restore switch configurations
  • Port Mirroring= Also used for monitoring you can send all traffic from one port to another
  • Statistics move with the Machine= Even after vMotioning, your statistics can stay with the VM

So those are all the reasons why you would want to use a vDS. There are a lot of cool features and capabilities that it makes available, and if you want to go even further, NSX is built on top of the vDS as well. So it would behoove anyone that wants to get into Software Defined Networking with VMware to get cozy with vDS tech. Let's go ahead and move on to the next point!

Create / Delete a vSphere Distributed Switch

So the easiest way to create a Distributed Switch is to do the following:

  1. From the Home Screen click on Networking in the Middle Pane, or you can also click on Networking in the Object Navigator
  2. Right Click on the Datacenter and this will be the menu that pops up
  3. Click on Distributed Switch and then click on New Distributed Switch
  4. You are now presented with the following Box
  5. Choose a name for your Distributed Switch
  6. You are now asked which version of Distributed Switch you want to create. Each of them corresponds to an ESXi version. This also determines whether certain features will be available. For example, on the version 6.0 switch, NIOC v3 is available, but it wouldn’t be if you chose version 5.5
  7. The next screen presents you with some options. Among these are the number of uplinks, enabling or disabling Network IO Control, and whether you want to create a default port group and what its name will be
  8. We already mentioned what each of those options are, so I won’t go over them again here. The next screen is just a recap of what you have already chosen
  9. When it is all done it will show up on your screen like this
  10. The Distributed Switch has two groups underneath it. The first is the Port Group, the second is the Uplink group
  11. To Delete the Distributed Switch, you just need to right click on the switch and click Delete. Pretty simple huh?

Add / Remove ESXi hosts from a vSphere Distributed Switch

In order to add or remove hosts to your Distributed Switch, follow these directions:

  1. Click on Networking from the Home Screen
  2. Right Click on your Distributed Switch and see the following menu
  3. Click on Add and Manage Hosts – You are now given this menu
  4. Click the action you wish to perform, and then click “Next”
  5. You can now either add or remove hosts as you need
  6. You also have the ability to migrate Virtual Machines and VMKernel adapters on the next screens
  7. The last screen you have that is relevant to this objective is “Analyzing Impact” and then “Ready to complete”
  8. Click Finish and you have now accomplished your task

Add / Configure / Remove dvPort Groups

So after you click on Networking from the Home screen (which you should be quite familiar with at this point) you are presented with your Distributed switch. If you chose to create a default port group when you created the dvSwitch, you should be presented with that on the networking screen underneath your vDS. For Example

Now if you need to configure that port group that you already have, you would just need to click on that port group and then click on Manage. This will give you all sorts of options. You can choose the one you want and then click on Edit.

To add or remove a port group, you step one level back up.

To Add:

  1. Right click on your vDS and then click on Distributed Port Group or hover over it, and then you are presented with the following options
  2. Click on New Distributed Port Group and you are then asked to provide a name for it
  3. Click Next, and on the next screen you are asked to configure the port group
  4. Next screen is your “Ready to complete” and click finish

To remove a port group:

  1. Right click on the port group you wish to remove and then… wait for it… delete it. That’s all there is to that

Add /Remove uplink adapters to dvUplink groups

There are a number of ways you can assign or remove adapters on a distributed switch. I think the easiest way is just right clicking on the Distributed Switch and then Add and Manage Hosts. You will need to assign the hosts’ vmnics to an uplink. To do that, do the following:

  1. Right Click on the Distributed switch and click on Add and Manage Hosts
  2. You will now need to select the host or hosts you want to assign to uplinks. You do that on this screen by clicking on the plus sign (+)
  3. Once the host is selected it will look like the screenshot above
  4. Click on next and then you will be presented with this screen
  5. Manage Physical Adapters is the important thing we are looking for here – Go ahead and click next
  6. We now have the following screen
  7. Now we can click on one of the vmnics shown here to assign it to an uplink
  8. Click on the vmnic you are interested in assigning and then click Assign Uplink at the top – that will bring up this screen
  9. Choose the uplink you want to assign and click OK
  10. It will now show on your screen like this
  11. Go ahead through the remaining screens; if there is anything else you need to change, do so
  12. Click Finish and you have now assigned the uplink.
  13. To remove, go through the above but instead of assigning uplink, choose the uplink and then “Unassign adapter”
  14. That’s all there is to it

Migrate virtual machines to / from a vSphere Distributed Switch

We are going to stay in the same place a while longer, but it is getting long so I have unilaterally decided to split this objective in two parts. The last point we are going to cover in this part is migrating virtual machines in and out of our Distributed Switch. We should be able to accomplish this without any packet drops or loss of connectivity on the part of the virtual machine. We are going to do this in the same place as before, under networking and then right click on our vDS. This time, choose “Migrate Virtual Machine Networking” though. This is the screen you will now be presented with.

From this point it’s relatively straightforward. You choose the network you are coming from, if any, and choose the destination network you want to go to. Then go ahead and click next. This is the next screen.

You can click on the VMs you want to move here. It will only let you do it if the virtual machine can actually be moved there. In this case, all of my other virtual machines can’t be moved there because they are on hosts that are not added to the vDS. Click on Next and then Finish and you are done.

Good Lord this took me a while to write up between case load and correcting 5th Grade homework (not mine of course). Next up on Part 2 we will go ahead and cover the rest of the points under this objective.

Objective 1.3: Enable SSO and Active Directory Integration

Under this objective we have the following topics to cover

  • Configure / Manage Active Directory Authentication
  • Configure / Manage Platform Services Controller (PSC)
  • Configure / Manage VMware Certificate Authority (VMCA)
  • Enable / Disable Single Sign-On (SSO)
  • Identify available authentication methods with VMware vCenter

Just like before we will try to cover this in order.

Configure / Manage Active Directory Authentication

When you are logged into the ESXi host directly, or are in the Web Client and go to the host settings, you can set the host up to register in Active Directory, but this is not the same as being able to use AD as an authentication source in SSO. To set that up we will need to make sure we log in as an administrator of the SSO domain. Then do the following:

  1. From the Home Screen, Click on Administration
  2. On the Navigator panel, click on Configuration under Single Sign On
  3. Then Click on Identity Sources in the center
  4. This will be what you see
  5. Fill out the appropriate source (AD or otherwise) and then click on Test Connection
  6. If successful, then go ahead and add it
  7. Once added, you can now choose the AD domain as a source for your users and groups

Configure / Manage Platform Services Controller (PSC)

Starting with vSphere 6, there has been a bit of a shakeup as to the services and how they can be deployed in vCenter. Before, the major services to contend with were SSO, Web Client, Inventory Service, and vCenter Server. Now, though, there are two main groups of services: vCenter Server and Platform Services Controller. They are busted out like the following:

  • Platform Services Controller = VMware Single Sign-On, VMware License Server, Lookup Service, Certificate Authority, Certificate Store
  • vCenter Server = vCenter Server, Web Client, Inventory Service, Auto Deploy, Dump Collector, Syslog Server

You cannot break these individual services out onto separate servers anymore. When you install vCenter Server, you get all of these features. You have the choice of an embedded installation or a distributed vCenter Server system configuration. With the embedded install, the Platform Services Controller and the vCenter Server are installed on a single machine. A single PSC is suitable for about 8 vCenter machines or services; when you grow larger you will need to add additional PSCs. With a distributed vCenter Server system configuration, you will need to first deploy the PSC on a machine and then you can deploy your vCenter Server. Once you select one particular deployment method, it can’t be undone.

The first thing you will need to do when configuring a fresh install (and you have chosen just the Platform Services Controller option) is create an SSO domain. If this is an additional PSC, then you would instead join an existing vCenter Single Sign-On domain. Then enter a password for the administrator and a Site Name. The next screen will be to accept the default ports or choose new ones. Then go through the next screens for destination paths and the confirmation screen. The screens will be roughly the same for the appliance, with a few added ones such as choosing your datastore and network settings.

As far as managing the PSC, most of it has to do with your setup, and making sure that you chose the proper deployment method to begin with. After that, you can do things such as load balance and make your PSCs highly available. The way you do this will depend a lot on what kind of servers you deployed: was it the appliance or the Windows machine? There are many different methods to do this, and you can even mix and match depending on customer requirements, especially since the new appliance is capable of Linked Mode in vSphere 6. Then of course you can configure the individual services making up the PSC.

Configure / Manage VMware Certificate Authority (VMCA)

Generally you won’t need to do any sort of certificate management in vSphere 6. The reasoning behind this is that the certificates the VMCA issues are valid for 10 years. However, occasionally you might run into a situation where you need to comply with something like HIPAA, or a user got overzealous.

So we know that previous versions of vSphere used self-signed certificates; this new version of vSphere now has a full-on Certificate Authority. What does this mean? This means that we can now generate our own certificates, or we can configure this as a subordinate certificate authority under a third-party or other recognizable authority such as GoDaddy. When you do need to change certificates for some reason, you can do so using the certificate-manager utility. On the Windows machine you just pull up an Admin CMD prompt and type in certificate-manager. In order to make the VMCA a subordinate CA you will need to do the following steps.

  1. Log into the PSC and using openssl, generate a certificate request.
    openssl genrsa -out c:\certs\psc001.key 2048

    openssl req -new -key c:\certs\psc001.key -out c:\certs\psc001.csr

  2. Submit the request to a CA. Use the Subordinate CA template for the request
  3. Download the cert in Base64 format; save it to c:\certs
  4. Wait at least 24 hrs (VMCA requires a cool-off period :) )
  5. Run certificate-manager again, from c:\program files\vmware\vCenter Server\bin for Windows or /usr/lib/vmware-vmca/bin/certificate-manager for the appliance
  6. Choose option 2: Replace VMCA Root certificate with Custom Signing Certificate and replace all Certificates
  7. Answer all questions
  8. When asked to provide a valid custom certificate for root, enter the path for the cert you got earlier
  9. When asked for the custom key for root, provide the path to the .key file generated earlier
  10. Enter Y to replace the cert
  11. Add the cert to a Windows Group Policy as an intermediate CA

Enable / Disable Single Sign-On

This isn’t possible that I know of. I am reaching out for clarification on it, but at least in 5.5 VMware posted the following:

Can I disable SSO 5.5 in vCenter Server?


No, you cannot disable SSO 5.5 in vCenter Server 5.5. This is similar to vSphere 5.1.

Identify available authentication methods with VMware vCenter

There are a number of authentication methods you can employ with vCenter. You can log in with SSO or with a user local to the vCenter. If you do log in as root with the appliance, you are not able to see any of your objects in the environment. The full listing of identity sources available for use is as follows:

  • Active Directory versions 2003 and later. You can specify a single domain as an identity source. If you have child domains, they must be trusted
  • Active Directory over LDAP. This is included for backwards compatibility with the vCenter SSO service included with 5.1
  • OpenLDAP versions 2.4 and later
  • Local OS Users. These are the users local to whatever OS the vCenter Server is running on. This only exists in Basic SSO server deployments and not in multiple vCenter SSO instances. This is shown as “localos” in the web client
  • vCenter Single Sign-On system users. Exactly one system identity source named vsphere.local is created when you install vCenter. This is shown as vsphere.local in the web client (obviously) :)

At any time only one default domain exists. If you do not belong to that one, you will need to add your domain to the username for it to work. And finally, to add or configure identity sources you will need to be an SSO administrator. Here is a screenshot of where to find the identity sources for a refresher. And stay tuned for the next new set of objectives.

Objective 1.2: Secure ESXi, vCenter Server, and vSphere Virtual Machines

And here are the following topics underneath this objective:

  • Enable / Configure / Disable services in the ESXi Firewall
  • Enable Lockdown Mode
  • Configure Network Security Policies
  • Add an ESXi host to a directory service
  • Apply permissions to ESXi hosts using Host Profiles
  • Configure virtual machine security Policies
  • Create / Manage vCenter Server Security Certificates

I will try to take these one at a time.

ESXi Firewall

Working with services in the ESXi firewall is not too difficult. This can be done on a per-host basis under the host’s Configuration tab, or the Manage tab if you are using the web client. Most of my material will be using the web client since that is the way things are going. That being said, there are a number of ways to work with the firewall settings. You can 1) set a security profile in a host profile and apply that to a host (or a number of hosts), 2) use ESXCLI commands from the command line to customize it, or 3) go through the Web Client. The procedure for that last one would be the following:

  1. Go through the Web Client to the host you are looking to change.
  2. Click the Manage Tab and then click on Settings
  3. Now Click on Security Profile
  4. The Web client will now show a list of incoming and outgoing connections with the ports.
  5. You can select to enable or disable the rule.

You can also allow or restrict these services to specific IP addresses. By default, all IP addresses are allowed.
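If you would rather script it, the same enable/disable and allowed-IP work can be done from the ESXi shell with esxcli. A small sketch; sshServer is just an example ruleset ID and 192.168.10.0/24 is a placeholder network:

    # Show every firewall ruleset and whether it is currently enabled
    esxcli network firewall ruleset list

    # Enable (or disable with false) a specific ruleset
    esxcli network firewall ruleset set --ruleset-id sshServer --enabled true

    # Stop allowing all IPs for the ruleset, then allow only one subnet
    esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
    esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.10.0/24
    esxcli network firewall ruleset allowedip list --ruleset-id sshServer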

Lockdown Mode

Lockdown mode has been created to increase the security of your hosts. Those of us familiar with vSphere 5 were already aware of there being a lockdown mode available for our hosts. For vSphere 6 there is now an additional lockdown mode available for use: Strict Lockdown Mode. The lockdown modes now shape up like this:

  • Normal Lockdown Mode: This kills direct access to the host through the client (you have to go through vCenter) and will deny root access through SSH. However, you can still access the physical machine, and the DCUI can still be logged into and used.
  • Strict Lockdown Mode: This will also disable the DCUI. If there are no exception users and the host loses access to the vCenter Server, you will need to reinstall the host.

In order to enable Lockdown mode you need to perform the following steps:

  1. Navigate to the host in the object browser that you want to modify
  2. Click on the host and then on the Manage Tab.
  3. Click on Settings and then click on Security Profile on the left side.
  4. Scroll down until you see the Lockdown Mode Section
  5. Click on Edit and choose desired Mode.

Configuring Network Policies

So configuring network policies, what are they talking about? Perform the following steps

  1. Navigate to the host you are interested in and click on Manage Tab for that host
  2. Click on the Networking Button
  3. Click on the Virtual Switches
  4. Now, depending on whether you want to change the vSwitch or a Port Group, you would click on the pencil icon associated with that object.

Now at this point you can decide to work your security magic either on the Virtual Switch itself or impose your will on the Port Group. Your options are the same either way; the difference is just where you want to apply the policy and whether you want the same one on everything or just a subset. Your options are as follows:

  • Promiscuous Mode: Reject(Default) or Accept
  • MAC address Changes: Accept(Default) or Reject
  • Forged Transmits: Accept(Default) or Reject

So what do these options mean?

Setting Promiscuous mode to Accept removes the filtering that is on by default, so the adapter will receive all traffic observed on the switch. This can be useful if you are running Wireshark or some other IDS or packet sniffing program. Otherwise you would generally leave this set to Reject.

The MAC address changes setting affects the traffic that a virtual machine receives. If set to Reject, ESXi won’t honor requests to change the effective MAC address to a different address than the initial MAC address. The initial MAC address is set for a vNIC when ESXi assigns the NIC to a virtual machine. The OS sees the vNIC with that MAC address, should use it, and it becomes the effective MAC address. Occasionally you might need the VM to receive traffic destined for a different MAC, such as in the case of Microsoft Network Load Balancing, where the OS presents a separate MAC to load balance and you want the VM to receive on that MAC address. You would need to make sure that the setting was set to Accept in this case.

The Forged Transmits setting affects traffic the virtual machine sends. When this is set to Accept, ESXi does not compare the source and effective MAC addresses. If the OS tries to send out traffic as a different MAC than the one ESXi set for it, and this is set to Reject, ESXi will drop the packets into the bit bucket (trash). The guest OS won’t be notified that the packets were dropped.
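For what it’s worth, on a standard vSwitch you can check and set these same three policies from the ESXi shell with esxcli (for a Distributed Switch you would do this on the distributed port group in the Web Client instead). A minimal sketch, assuming a vSwitch named vSwitch0:

    # Show the current security policy on the standard switch
    esxcli network vswitch standard policy security get --vswitch-name vSwitch0

    # Set all three options to the most restrictive values
    esxcli network vswitch standard policy security set --vswitch-name vSwitch0 --allow-promiscuous false --allow-mac-change false --allow-forged-transmits false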

Add an ESXi host to a directory service

Adding an ESXi host to AD or LDAP is not difficult at all. Just follow these steps:

  1. Navigate to the host, and then click on Manage
  2. Click on Settings and then scroll down to Authentication Services
  3. The top section has to do with adding the host to a domain, so click on Join Domain
  4. You are now presented with a box for Join Domain
  5. Enter in the Domain and User Credentials (will need to be a user in the Domain with admin privileges) and click OK
  6. When it is successful, your Domain will show up and Directory Services Type will say the type of Domain (Active Directory) you chose
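As an aside, ESXi uses the Likewise agent under the covers for the AD join, and there is a command line tool for it on the host. Treat the path and syntax below as an assumption to verify on your own build before relying on it; example.com and svc_join are placeholder values:

    # Check the current join status from the ESXi shell (assumed Likewise tool location)
    /usr/lib/vmware/likewise/bin/domainjoin-cli query

    # Join the host to a domain from the shell instead of the Web Client
    /usr/lib/vmware/likewise/bin/domainjoin-cli join example.com svc_join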

 

Apply Permissions to ESXi Hosts using Host Profile

I assume that if you have gotten this far and you are taking a delta exam and not starting from scratch, that you have at least a rudimentary idea of what host profiles are. However, just in case: a host profile is a list of host configuration options that can be applied to a host (or hosts) or a cluster in order to keep your machines as consistent with each other as possible. This can come in handy for you as the administrator, since they are centrally managed and can improve efficiency and compliance, and also enable you to use time-saving features such as Auto Deploy. You also need to have the proper licensing in place to be able to use this feature; it requires Enterprise Plus licensing.

Creating a Host Profile is simple. You can do it one of two ways. You can either navigate to the host you have configured to use as the reference host, right click, go to Host Profiles, and then Extract Host Profile, like the below

Or you can navigate right to Host Profiles from the Home Screen

And then, after you click on Host Profiles, you can click on the (+) sign to add a new one. It will then pop up a screen and ask you which host you want to use as the reference host.

After you create the Host Profile, then you can go back in to edit if you need to as well. You will need to do this by going to the Host Profiles screen from the Home Menu as mentioned above. After you get there you can click on the Host Profile you want to edit and click Edit Settings. Some of the settings you can set are seen here.

After all that is done, from the same screen you will need to attach the host profile to one or more hosts or clusters. Then you can run a compliance check against them to see if they are compliant or not. If not, you can remediate them to bring them into compliance.

Configure Virtual Machine Security Policies

For security, it is good to think of your virtual machine the same as a physical machine. You have all the same options for securing it as you do for a physical machine: a firewall on the VM itself, restricting who has “physical” access to the VM, and of course patching. You have a number of advantages in these things though, since you are using a VM. You can employ things like templates to create a fully patched version of your server when you bring it up, reducing the time it takes to secure it. You can also restrict the ability to use the VMRC or Remote Console. The ability to use this should be treated the same as a person having physical access to the machine.

There are other things to consider as well. For example, if someone were to gain access to the machine, he or she could possibly introduce a program that would start eating resources in your environment. If this were to go unchecked, it runs the possibility of affecting not only that virtual machine but also all others sharing the same resources. You can use something like limits or shares to prevent this from happening.

Also, as always, only give the VM what it needs to run. Don’t leave unnecessary hardware or features that you won’t use attached to it. You can disable things like copy/paste and the Host Guest File System to further increase your security.
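As an example of that last point, copy/paste and drag-and-drop between the guest and the console are controlled by a handful of isolation.* advanced settings in the VM’s .vmx file (edit them while the VM is powered off, or through the VM’s advanced Configuration Parameters in Edit Settings). These are the commonly cited hardening-guide keys; double check them against the hardening guide for your version, which also lists a similar isolation.tools setting for the Host Guest File System:

    isolation.tools.copy.disable = "TRUE"
    isolation.tools.paste.disable = "TRUE"
    isolation.tools.dnd.disable = "TRUE"
    isolation.tools.setGUIOptions.enable = "FALSE"

Finally we have,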

Create / Manage vCenter Server Security Certificates

Personally, in the past I have always let VMware handle the certificates for me. However, if you have a need to replace the self-signed certificates with ones that are signed by a third-party or enterprise certificate authority, VMware can definitely accommodate you. If you have been working with them in previous versions, you will need to know that the 5.5 certificate replacement tool will not work for 6.0 due to the new architecture. There are certs used for just about all authentication services in VMware now.

For vCenter Server, you can use the following to view and replace certificates:

  • vSphere Certificate Manager Utility – You can perform all common certificate replacement tasks from the command-line
  • Certificate Management CLIs – Perform all certificate management tasks with dir-cli, certool, and vecs-cli
  • vSphere Web Client certificate management – View certificates, including expiration information

There is a lot of information included in replacing or regenerating certs in the VMware environment and some of it depends on which cert you are replacing. The CLI tools you can use to do this are as follows:

  • certool – this allows you to generate and manage certificates and keys. This is part of the VMCA
  • vecs-cli – This allows you to manage the contents of VMware certificate store instances. This is part of the VMAFD
  • dir-cli – This allows you to create and update certificates in the VMware Directory Service. Also part of the VMAFD

Core Identity Services

  1. vmdir (VMware Directory Service) – This handles SAML certificate management for authentication with vCenter SSO
  2. VMCA (VMware Certificate Authority) – Issues certificates for VMware solution users, machine certificates for machines on which services are running, and ESXi host certificates. VMCA can be used as is, or as an intermediary CA. VMCA issues certificates to only clients that can authenticate to SSO in the same domain.
  3. VMware Authentication Framework Daemon (VMAFD) – Includes the VMware Endpoint Certificate Store (VECS) and several other authentication services. VMware administrators interact with VECS; the other services are used internally.

Certificate Management Tool Locations

  • Windows
    • C:\Program Files\VMware\vCenter Server\vmafdd\vecs-cli.exe
    • C:\Program Files\VMware\vCenter Server\vmafdd\dir-cli.exe
    • C:\Program Files\VMware\vCenter Server\vmcad\certool.exe
  • Linux
    • /usr/lib/vmware-vmafd/bin/vecs-cli
    • /usr/lib/vmware-vmafd/bin/dir-cli
    • /usr/lib/vmware-vmca/certool
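A few harmless read-only examples of what these tools look like on the appliance (store names like MACHINE_SSL_CERT are the standard ones, but the output will vary by build):

    # List the certificate stores that VECS is holding
    /usr/lib/vmware-vmafd/bin/vecs-cli store list

    # Dump the machine SSL certificate (the one the Web Client presents) in readable form
    /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store MACHINE_SSL_CERT --text

    # Ask the VMCA for its current root certificate
    /usr/lib/vmware-vmca/certool --getrootca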

Just due to the sheer depth of certificate management I will defer to the guide for further direction.

VCP 6 Delta Beta Exam Study Material (series)

Objective 1.1: Configure and Administer Role-Based Access Control

Ok so starting off with the first objective here. Underneath are the following topics:

  • Identify common vCenter Server privileges and roles
  • Describe how permissions are applied and inherited in vCenter Server
  • View/Sort/Export user and group lists
  • Add/Modify/Remove permissions for users and groups on vCenter Server inventory objects
  • Create/Clone/Edit vCenter Server Roles
  • Determine the correct roles/privileges needed to integrate vCenter Server with other VMware products
  • Determine the appropriate set of privileges for common tasks in vCenter Server

So first it is good to go over a few things. We will start with the type of permissions that are available to us.

In order to perform tasks on objects or be able to view properties or even log in, we need to have the proper permissions associated with the user we are logging in with. We have:

vCenter Server Permissions – These permissions are assigned through vCenter for an object that vCenter manages. A permission is made up of one or more privileges that you are allowing a user to have for that object. This would include things such as modification of a VM, creation of datastores, etc.

Global Permissions – These are permissions assigned to a user that needs to access more than one solution. For example, a user needs to access not just vCenter but also vRealize Operations Manager. This is a permission that spans the whole vsphere.local domain (or whatever you named your SSO domain). These are also replicated across the domain.

Group Memberships – These are permissions associated with a group. It is much easier to apply permissions to a group and then just add or remove a user, rather than assigning roles to a user every time you need to change something. A user can be a member of more than one group and will receive the union of the privileges associated with those groups (keep in mind that the most restrictive permission applies)

ESXi Local Permissions – This is only for managing a single ESXi host.
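As a quick aside on that last one, ESXi 6.0 lets you view and set these host-local permissions from the ESXi shell as well. A small sketch; the account name is a placeholder:

    # List the permissions defined locally on this host
    esxcli system permission list

    # Give an account the host-local Admin role (ReadOnly and NoAccess are the other options)
    esxcli system permission set --id svc_monitor --role Admin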

The Permission Model

There are 4 concepts that make up the permission model. Permissions, Users and groups, Privileges, and Roles.

Permissions – Every object in the vSphere world has associated permissions. These permissions specify what privileges that group or user have on that object.

Users and Groups – You can only assign permissions to authenticated users or groups. These can either be the built-in users and groups in vCenter or they can be from one of the added identity sources (such as AD).

Roles – Roles are predefined sets of privileges. Instead of having to figure out what privileges you need to assign to a user or group every single time, you can gather privileges into a single Role, such as VM Creator, to assign them more easily. There are a number of them already pre-defined in vCenter for you

Privileges – These are the fine grained access controls. You group these into Roles.

Assigning Permissions

There are a few things to keep in mind when assigning permissions. You can assign permissions to a parent object so that the child objects inherit the permission as well (if propagation is turned on). Child permissions will always override parent permissions. What does this mean? If you were to create a permission on a child object that denies being able to modify it, then even if the user has admin rights on the parent, he will not be able to do so.

A user will also receive the aggregate of the groups he or she is added to. If the user belongs to one group with read permission and also belongs to a group with modify permission, that user will get both of the permissions. Now if one of those groups happens to be more restrictive, such as Read-Only or No Access, then that will take precedence for that object no matter what other groups they belong to.

In order to Add Permission to an Object you need to do the following:

  1. Browse to the object that you want to add the permission to in the Object Navigator
  2. Click on the object and then click on Manage tab and then under that click on Permissions

pic1

 

  3. Click on the Add (+) icon and then on the window that comes up, click on Add
  4. Click on the Domain and either choose the SSO Domain or the Identity Source where the user resides

pic2

  5. After the user is selected, you then need to assign a Role that has the desired access you wish to give to this user – also choose whether it propagates to child objects or not

  6. Click OK and your user should now show up in the middle pane

pic3

 

That’s all there is to it. To modify, just click in the middle pane on the user you wish to modify and then click on the pencil.

To remove, click on the user you wish to remove and then click on the red ‘x’.

 

The above assumes you have the privileges you want for that user already set up in a role. If you don’t, you will need to first create a role with the needed privileges. To do that, do the following:

From the Main Home Screen, click on Administration on the Object Navigator Pane (left pane)

  1. Click on Roles on the left hand pane
  2. You can now choose between creating a new Role from scratch or copying one of the existing Roles and then modifying it. To create a new Role, click on the plus sign (+); if you wish to copy an existing one, click on the Role you wish to copy and then click the Clone Role button.
  3. When you create a new Role you are presented with a box to give the Role a name, and underneath it, all the privileges available to assign to it.

pic4

  4. If you decided to choose door number 2 and clone an existing role, this is what it might look like.

pic5

  5. You would then go in, just like the other, and choose what privileges you want that role to have.

Incidentally you would go to the same basic place for global permissions as well. The Global Permissions are there to give a user/group privileges to all objects in a hierarchy.

  1. Go to Administration from the Home Screen and then click on Global Permissions

pic6

  2. In order to add to it, once again, click on the plus sign (+). Or to modify, click on the object and then on the pencil icon.

In order to assign Global Permissions, you need the Permissions.Modify permission privilege on the root object for all inventory hierarchies.

There are a number of Default Roles to be aware of as well. They are as follows:

Administrator Role – Users assigned this Role are allowed to view and perform all actions for this object. By default administrator@vsphere.local has the Administrator role on both vCenter Single Sign On and vCenter Server after installation.

No Access Role – Users assigned this role cannot view or change the object in any way. All new users and groups are assigned this role by default. The only users not assigned this by default are the Administrator@vsphere.local, root user, and vpxuser.

Read Only Role – Users assigned this role are only able to view the state of the object and details about the object. They cannot view the remote console and all actions on the menu and toolbars are not allowed.

Required Privileges for Common Tasks

This is where it starts getting tricky because everything is very granular. You need to think of not just the privilege that you are assigning but also what you are performing it on. I am going to just “borrow” the table from the PDF, and only for Create a Virtual Machine. For the rest of them, feel free to view the PDF.

pic7

Here is the link to the Security Guide – http://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-60-security-guide.pdf

Alright. I think that about covers this particular point. Next up will be Objective 1.2 on Securing ESXi, vCenter Server, and vSphere Virtual Machines.