VCP6 Delta

Objective 8.1: Deploy ESXi Hosts Using Auto Deploy

So I have skipped a few sections dealing with troubleshooting. I may circle back around to them if I can, but I am shooting to get all the other material done first. Besides, I am sure if you have gotten this far, you have had at least a couple of problems and have at least a rudimentary knowledge of troubleshooting (REBOOT ALL THE THINGS!! :) ).

Looking to keep the momentum going, we are going to discuss the following topics:

  • Identify ESXi Auto Deploy requirements
  • Configure Auto Deploy
  • Explain PowerCLI cmdlets for Auto Deploy
  • Deploy/Manage multiple ESXi hosts using Auto Deploy

Identify ESXi Auto Deploy requirements

Generally, as sysadmins, we don’t like to do the same thing over and over again. I am not saying we are lazy (some of us are :) ) but let’s face it, there are much cooler things to learn and do than load a system and put a config on it a couple of hundred times. So for those people, VMware has offered up Auto Deploy and Host Profiles. We will go over Host Profiles in the next objective point of section 8, so don’t worry.

We will start off by explaining just what Auto Deploy does. Auto Deploy is a vSphere component that uses PXE boot infrastructure in conjunction with vSphere host profiles to provision and customize anywhere from one to hundreds of hosts. No state is stored on the ESXi box itself; it is held by Auto Deploy. Now, you don’t necessarily HAVE to use Host Profiles, but they do make your job a lot easier once everything is set up. You can even deploy different images (different versions or driver loads) to different servers, based on criteria you specify.

We will now list the requirements, as it is always good to begin with those:

  1. The hosts you are going to provision need to be set up in BIOS mode (not UEFI)
  2. If you are going to use VLANs, make sure that is all set up beforehand and there is connectivity
  3. Verify you have enough storage for the Auto Deploy repository (best practice is to allow about 2GB for 4 images)
  4. DHCP server
  5. TFTP server
  6. Admin privileges to the DHCP server
  7. While not a requirement, it would be a good idea to set up a remote syslog server and the ESXi Dump Collector in case things go wrong
  8. PXE does not support IPv6, so make sure you have IPv4 in place

How do we configure it?

Configure Auto Deploy

  1. Install the Windows vCenter Server or deploy the VCSA
  2. You will need to start up the Auto Deploy service
    1. You will need to log into your Web Client
    2. Click on Administration
    3. Under System Configuration, click Services
    4. Select Auto Deploy and select Edit Startup Type
    5. On Windows the service is disabled – select Manual or Automatic to enable it
    6. On the VCSA the service is set to Manual by default, you can select Automatic to have it start unassisted
  3. Configure the TFTP server (there are many different ones to choose from)
    1. In the Web Client go to the Inventory List and select your vCenter server System
    2. Click the Manage tab, select Settings, and click Auto Deploy
    3. Click Download TFTP Boot zip to download the configuration file and unzip the file to the directory in which your TFTP server stores files.
  4. Set up your DHCP server to point to the TFTP server where you just placed the boot files (see the sketch after this list).
    1. Specify the TFTP server’s IP address in the DHCP server using option 66 (also called next-server)
    2. Specify the boot file name, undionly.kpxe.vmw-hardwired, in DHCP option 67 (also called boot-filename)
  5. Set each host you want to provision with Auto Deploy to network boot or PXE boot.
  6. Locate the image profile you want to use and the depot it is located in
  7. Write a rule that assigns an image profile to hosts
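
As an illustration of step 4, here is a minimal sketch of setting options 66 and 67 on a Windows DHCP server with PowerShell. The scope ID and TFTP server IP are hypothetical placeholders; adjust them for your environment (and your DHCP server may be something else entirely, such as a Linux dhcpd box).

  # Option 66 (next-server): the TFTP server's IP address
  Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 66 -Value "192.168.1.50"
  # Option 67 (boot-filename): the iPXE binary from the TFTP Boot zip
  Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -OptionId 67 -Value "undionly.kpxe.vmw-hardwired"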

Next you are going to need to install PowerCLI to be able to create rules that assign the image profile and, optionally, a host profile.

Explain PowerCLI cmdlets for Auto Deploy

Help is always just a command away: just type Get-Help <cmdlet>. Also remember that PowerShell isn’t case sensitive and you can use tab completion. You can also clean up output using Format-List or Format-Table. Now... the commands:

Connect-VIServer 192.x.x.x – This command will, as you might have guessed, connect you to the vCenter that you plan to use for Auto Deploy. The IP address will need to be changed to yours. This command might return a certificate warning; this is normal in development environments.

Add-EsxSoftwareDepot <c:\location.zip> – This will add the software depot (and the image profiles it contains) to the PowerCLI session that you are in so that you can work with it.

Get-EsxImageProfile – This will list the image profiles included in the zip that you are using. Usually there are a couple of them in there: one that includes VMware Tools and one that does not.

New-DeployRule -Name “testrule” -Item “My Profile25” -Pattern “vendor=Acme,Dell”, “ipv4=192.168.1.10-192.168.1.20” – This one is a little meatier. It creates a rule named “testrule” that will use the image profile “My Profile25” and will only be applied to a system from either Acme or Dell that is using an IP address in the range 192.168.1.10-20. Double quotes are required if a name contains spaces; otherwise they are optional. You can specify -AllHosts instead of -Pattern to just carpet bomb all your machines. If you have a host profile to apply as well, you can pass it as another -Item <name of profile> and the rule will apply that profile to those hosts.

Add-DeployRule testrule – This adds the rule to the active rule set.

Those are all the commands you absolutely need. But there are some more that you might find useful with Auto Deploy. They include:

Get-DeployRule – This will get all current rules

Copy-DeployRule -DeployRule <name of rule> -ReplaceItem MyNewProfile – This will copy a rule and swap in a different profile. You cannot edit a rule after it is added; you have to copy and replace.
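
Putting it all together, here is a minimal end-to-end sketch of the workflow. The IP address, depot path, profile name, and pattern values are placeholders; swap in your own.

  # Connect to the vCenter that runs Auto Deploy (may warn about the cert)
  Connect-VIServer -Server 192.168.1.5

  # Add the offline bundle so its image profiles are available in this session
  Add-EsxSoftwareDepot "C:\depot\VMware-ESXi-6.0.0-offline-bundle.zip"

  # See which image profiles the depot contains
  Get-EsxImageProfile | Select-Object Name, Vendor

  # Create a rule that maps a profile to hosts matching a pattern
  New-DeployRule -Name "testrule" -Item "My Profile25" -Pattern "vendor=Dell", "ipv4=192.168.1.10-192.168.1.20"

  # Activate the rule, then verify the active rule set
  Add-DeployRule testrule
  Get-DeployRule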

Deploy/Manage multiple ESXi hosts using Auto Deploy

The beauty of the above is that you can use it for multiple ESXi hosts; that is what it was really meant for. You can also load balance multiple TFTP servers to help distribute the load.

And that’s all I will write on this objective. Next stop, 8.2

Objective 6.1: Configure and Administer a vSphere Backups/Restore/Replication Solution

Welcome back to another episode of Mike’s VMware show! Up today, we are going to discuss backups and replication. Specifically, the topics we are going to cover are:

  • Identify snapshot requirements
  • Identify VMware Data Protection requirements
  • Explain VMware Data Protection sizing Guidelines
  • Identify VMware Data Protection version offerings
  • Describe vSphere Replication architecture
  • Create/Delete/Consolidate virtual machine snapshots
  • Install and Configure VMware Data Protection
  • Create a backup job with VMware Data Protection
  • Install/Configure/Upgrade vSphere Replication
  • Configure VMware Certificate Authority (VMCA) integration with vSphere Replication
  • Configure Replication for Single/Multiple VMs
  • Identify vSphere Replication compression methods
  • Recover a VM using vSphere Replication
  • Perform a failback operation using vSphere Replication
  • Determine appropriate backup solution for a given vSphere implementation

Identify snapshot requirements

So as we are all aware, snapshots are not backups and have no place being used as such. So why would we put them in this objective? Well, because most backup programs use the snapshot mechanism to take a point-in-time picture of a VM. We can also use this mechanism to take a crash-consistent or application-consistent snapshot that will allow us to reboot the VM and still have our programs work properly. So first, what options can I specify when I create a snapshot?

  • Name: This is used to identify the snapshot
  • Description: A wordier description of the snapshot
  • Memory: Whether or not to include the memory of the VM when taking the snapshot. This takes longer but allows us to revert to a running VM rather than a just-booted machine. If this option is selected, the machine will be stunned (paused briefly) while the snapshot is taken.
  • Quiesce: VMware Tools must be installed in order to use this option. It flushes all the buffers from the guest OS to make sure that the disk is in a state fully suitable for backups

When a snapshot is created, it is comprised of the following files:

  • <vm><number>.vmdk and <vm><number>-delta.vmdk : the descriptor file and the delta (child) disk that receives all writes made after the snapshot was taken
  • <vm>.vmsd : this is a database of the virtual machine’s snapshot information and the primary source of information for the snapshot manager. This file contains line entries which define the relationships between snapshots as well as the child disks for each snapshot
  • <vm>-Snapshot<number>.vmsn : the current configuration and, optionally, the active memory state of the virtual machine

Some of the products which use snapshots are:

  • VMware Data Recovery
  • VMware Lab Manager (now vCloud Director)
  • Storage vMotion
  • VDP and VDPA

This is why we are going over this a bit. It is also important to note that the snapshot delta disk uses a copy-on-write (COW) mechanism, in which the child disk contains no data until data is copied there by a write. The other thing I think is important to note is space. While you have a snapshot, the total disk space used is the original base disk plus any changes made after the snapshot was taken. Conceivably, the child disk could grow as large as the parent disk.

Identify VMware Data Protection Requirements

So there are a number of different requirements for VMware Data Protection, but we should really start off with an explanation of what VMware Data Protection is. You might remember it by its acronyms, VDP and VDPA (the ‘A’ being for Advanced). VMware Data Protection is a robust, simple-to-deploy, disk-based backup and recovery solution powered by EMC; specifically, it is based on EMC’s Avamar. Now the requirements.

Capacity Requirements depend on a number of things including:

  • Number of VMs protected
  • Amount of data contained in each protected machine
  • Types of data being backed up
  • Backup retention policy
  • Data change rates

As far as software requirements go, VDP 6.0 requires at least vCenter 5.1, with 5.5 or later recommended. If for some reason the VDP VM were migrated to a vSphere host running 5.1 or earlier, it would no longer be functional.

It is deployed as a VM with hardware version 7. Therefore, if you intend to back up a VM that is backed by Flash Read Cache, the backup will use the network block device (NBD) protocol instead of HotAdd, which affects performance.

Also be aware that VDP does not support the following disk types:

  • Independent
  • RDM Independent – Virtual Compatibility Mode
  • RDM Physical Compatibility Mode

VDP is available in 0.5TB, 1TB, 2TB, 4TB, 6TB, and 8TB configurations. You will need to follow the table below for hardware configurations (lifted from the vSphere guide).


You will also need your normal DNS and NTP settings setup.

Explain VMware Data Protection sizing Guidelines

So look to the table above for sizing guidelines. Keep in mind that you can expand the appliance after it’s deployed if need be (this is different from the old VDP, which required you to deploy a whole new appliance; the old VDPA would allow you to expand, though). One other thing to be aware of: VDP will try to dedupe the data, so group the same types of VMs together on the same appliance so that you can conserve more space.

Identify VMware Data Protection version offerings

There used to be two versions of VMware Data Protection: VDP and VDPA. But when 6.0 rolled out, VMware decided to roll the features of the higher-end product (VMware Data Protection Advanced) into the base product and just call it VDP. So, among other things, VDP can now support up to 400 virtual machines per appliance. You can also have up to 8TB of storage for your backups. It supports file-level, image-level, and individual-disk backups, and even has support for guest-level backups and restores of Microsoft Exchange, SQL Server, and SharePoint servers.

Describe vSphere Replication architecture

So as far as vSphere Replication goes, you will need a few things, which you more than likely already have. One is a vCenter Server, version 6.0, since we are working with Replication 6.0. You will also need SSO. You can use SRM with it as well, but they will need to be the same version.

vSphere Replication itself is deployed as one or more prebuilt, Linux-based virtual appliances, with a maximum of 10 per vCenter Server. Each appliance is deployed with 4GB of RAM and 2 vCPUs for small environments or 4 vCPUs for larger environments. The appliance also has two VMDKs totaling 18GB in size.

One of the nice things about vSphere Replication is since it is host based, it is independent of the underlying storage. This means you can use a number of storage types or more than one. vCloud Air is also supported as a migration location.

Create/Delete/Consolidate virtual machine snapshots

We won’t spend too much time on snapshots since I figure most people already know about them. I will, however, run through a quick demonstration of how you would do each of these.

First, you would right-click on whatever VM you are working with – you will be presented with a menu that looks like this


Next you are going to click on the snapshot option – you will have these options


You can click on Take Snapshot in order to create one. Depending on whether your machine is on, your options might be greyed out.


You will now need to give it a name and, if you want, a description. You can also choose here whether to snapshot the virtual machine’s memory and whether to quiesce the guest file system; the wizard points out that VMware Tools must be installed for quiescing. Keep in mind that if you want to snapshot the memory, you will need to make sure you have enough disk space, and the snapshot will take a little longer, since extra data is being written to disk. Once done, the task will show as completed down in the Recent Tasks bar.

If we want to perform other tasks such as delete or consolidate we will go back to the same menu option, and choose our task there. If we are going to delete we will want to choose Manage Snapshots


This is now the screen that will come up.


We can revert back to a snapshot or delete one, or all of the snapshots we have. I am going to Delete All. Once done, I am now presented with a nice clean window.


And that is all there is to it.
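
If you would rather script it, here is a hedged PowerCLI equivalent covering the same create/delete/consolidate tasks. It assumes you are already connected with Connect-VIServer, and the VM name is a placeholder.

  $vm = Get-VM -Name "TestVM"

  # Create: -Memory and -Quiesce map to the wizard checkboxes discussed above
  New-Snapshot -VM $vm -Name "PrePatch" -Description "Before patching" -Memory:$false -Quiesce:$true

  # List the snapshots, then delete them all (same as Delete All in the GUI)
  Get-Snapshot -VM $vm
  Get-Snapshot -VM $vm | Remove-Snapshot -Confirm:$false

  # Consolidate leftover delta disks if the GUI reports consolidation is needed
  $vm.ExtensionData.ConsolidateVMDisks_Task()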

Install and Configure VMware Data Protection

To install VMware Data Protection, you just need to deploy the .ova. This is just like most other OVAs, so I won’t bother you too much with the details. After you have finished that and powered it on, the console gives you nice helpful hints on what to do next.


When we go to the above address we are given a nice GUI wizard


Now we go through the setup and make sure that DNS is resolvable. One of the things I would like to call out here is the storage setup. We have a few different options available to us.


As mentioned before, this is not static and can be enlarged later, so for now I am going to leave it at 0.5TB. I then have the option of putting it on a different datastore


The next screen will give you the vCPU and Memory requirements needed for the storage size that you have chosen.


You then have the opportunity to run a performance analysis on your storage configuration.

After that is done, it will restart the appliance.

Create a backup job with VMware Data Protection

So now the appliance is installed and you are ready to start protecting things… All sorts of things. You will need to make sure the plug-in is installed in the Web Client and you reload it. Once you do, you will see a new icon in the home screen.


When you click on it, you will be prompted to connect to an appliance.


Click on Connect and now you have a whole new world of options available to you. You can now create backups and restore and all sorts of things. In order to create a backup job, click on Basic Tasks: Create a Backup Job under the Getting Started page, or go to the Backup tab, click Backup job actions, and click New. A new window opens, and the first screen asks what type of backup job you want to create.



Now choose your VM to protect


Choose your schedule


Retention Policy (How long to keep the backups)


Name the job


Click on Finish


You have now created your first job.

Install/Configure/Upgrade vSphere Replication

That was so much fun, let’s do it again with Replication. Once again, deploying the OVF (it comes to you in a zip or ISO) is old hat, so we won’t cover that. After you deploy the OVF, the next thing you will need to do is configure it to work with vCenter. Go to the appliance’s address with :5480 on the end. When you get there and log on, go to the VR tab and then Configuration. There you will add the user name and password and double-check the rest of the information. Then click Save and Restart Service.


Accept the cert. After a few minutes it will start up the service and save the configuration. You can then go back to your Web Client and make sure the plug-in is enabled for Replication. Once it is, you will see a new option called vSphere Replication, and when you click on it you will see something that looks like this.


There are a couple of different ways to replicate. You can replicate between different vCenters, sites, or even hosts. There is an option to replicate to a cloud provider as well. Since I am just a poor education consultant, I am just going to do an intra-site replication.

First I will need to right click on the VM I want to replicate


The first option is what I am going to choose. This can be to the same vCenter or a separate one. In the next window I will choose which vCenter.


In the next window I can allow it to auto-assign a replication server, or I can manually choose one. I will let it auto-assign.


In the next window, I tell it where I want to replicate to. In this case, I am choosing a local datastore on one of my ESXi servers.


I am now presented with a few options for quiescing and network compression. I am going to choose network compression to save bandwidth at the expense of CPU power (it will consume additional CPU cycles to compress). Now I click Next.


I now have the option of choosing my Recovery Point Objective (RPO). This is the point I want to be able to recover to if I have an issue; it is not the same as the Recovery Time Objective. Basically, wherever I set this, Replication will try to have a point-in-time copy every N hours, and it will start syncing early to try to meet that window. Be aware that if you don’t plan for how much data you will be moving, syncs can easily overlap. Don’t get caught by that!! The other option is Point in Time Instances. After the primary copy, each additional copy is a snapshot, and this setting is how many of those deltas you are willing to keep. I am not really worried about this VM and am only creating this for the sake of the lab, so I will leave the defaults.


Summary… here we go. One other thing to note: Replication will not work unless the machine is turned on! If it isn’t important enough to have turned on, then do you really need to replicate it? :)

You can check the status of the machine by going to Replication and then Monitor


Finally, to upgrade your Replication appliance: back to the appliance at :5480 (or you can update via Update Manager). This is the page to update from. It is relatively straightforward.


Configure VMware Certificate Authority (VMCA) integration with vSphere Replication

By default, Replication uses a self-signed cert. Using one from the vCenter’s CA instead is rather easy. Just log back into the appliance’s config page and, under SSL Certificate Policy, check Accept only SSL certificates signed by a trusted Certificate Authority. Then Save and Restart. That’s it. Here is where it is….


Configure Replication for Single/Multiple VMs

I won’t go over this again since I already did above. The only difference is you highlight multiple VMs instead of just one.

Identify vSphere Replication compression methods

So this basically boils down to a simple table:

  Source ESXi host     ESXi host managing the target datastore     Compression support
  Earlier than 6.0     Any supported version                       Nope, no compression
  6.0                  Earlier than 6.0                            Looks for an ESXi 6.0 host to do the work; otherwise the Replication appliance does it
  6.0                  6.0                                         Full speed ahead!!!

Recover a VM using vSphere Replication

This is relatively simple as well. Just go to the Replication section and choose Monitor. Once there, choose Incoming Replications, select the VM or VMs you wish to recover, right-click, and choose Recover. You are now given three options to choose from:

  1. Synchronize recent changes – The VM will need to be powered off, but Replication will sync the latest changes before it restores. Use this if the source VM is still available and you can get at it. If not... then:
  2. Use latest available data – This will use the replicated info and copy back over.
  3. Point in Time – This is only available if you chose it when you configured the replication.

Next screen, choose the Folder in your environment to restore to.

And then choose the target compute/datastore resource.

Summary, and voila: restored.

Perform a failback operation using vSphere Replication

Just going to tell you what the guide says on this one:

”Failback of virtual machines between vCenter Server sites is a manual task in vSphere Replication. Automated failback is not available. After performing a successful recovery on the target vCenter Server site, you can perform failback. You log in to the target site and manually configure a new replication in the reverse direction, from the target site to the source site. The disks on the source site are used as replication seeds, so that vSphere Replication only synchronizes the changes made to the disk files on the target site.”

Determine appropriate backup solution for a given vSphere implementation

This one is all you. You will need to figure it out based on the customer’s requirements and the capabilities of the equipment you have or might be able to purchase. You will also need to find out (ask the customer) how much risk they are willing to assume. Keep in mind that the less risk they assume, the higher the cost will be.

Happy VM’ing and remember if women don’t find you handsome, they should at least find you handy.


Objective 5.1: Configure Advanced/Multilevel Resource Pools

Back again with a new objective. This time we are going to go over Resource Pools. Over the course of the blog post we will cover the following points.

  • Describe the Resource Pool hierarchy
  • Define the Expandable Reservation parameter
  • Describe vFlash architecture
  • Create/Remove a Resource Pool
  • Configure Resource Pool attributes
  • Add/Remove virtual machines from a Resource Pool
  • Create/Delete vFlash Resource Pool
  • Assign vFlash resources to VMDKs
  • Determine Resource Pool requirements for a given vSphere implementation
  • Evaluate appropriate shares, reservations and limits for a Resource Pool based on virtual machine workloads

So jumping right in…

Describe the Resource Pool hierarchy

Whether you create additional pools or not, you already have a resource pool in your environment. That’s right: your original hosts, whether by themselves or in a cluster, make up a resource pool. So what is a resource pool? Well, the official definition of a resource pool is: a logical abstraction for flexible management of resources. They can be used to partition available CPU and memory resources.

As mentioned before, whether you have a standalone host or a DRS cluster, you have a resource pool, even if it doesn’t show in your client. This is your root resource pool. You can create additional pools that further partition those resources; these are known as child resource pools. The relationships between pools are familial: the upstream pool is the parent resource pool and the downstream one is the child resource pool. Continuing the family theme, pools at the same level are known as sibling resource pools. A resource pool can contain child resource pools, virtual machines, or both.

Define the Expandable Reservation parameter

So you have gone ahead and partitioned off resources. That’s great, and you have officially been heralded as the savior of at least two different departments. The only issue is that one of the departments you restricted was payroll, and at least occasionally they may need a bit more resources, to make sure your check goes out on time. Since you don’t necessarily wish to answer an email every time they need more, you would like a better way to occasionally give them extra resources. Enter the Expandable Reservation checkbox. Checking this box allows your resource pool to occasionally grab more resources, if they are available, from the parent resource pool. And once again, the peasants rejoiced.

Describe vFlash architecture

Starting with version 5.5, a new architecture and vFlash setup came into use: vSphere Flash Read Cache. The design is based on a framework made up of two parts:

  • vSphere Flash Read Cache Infrastructure
  • vSphere Flash Read Cache software

This architecture allows for the pooling of multiple flash-based devices into a single consumable object, called a Virtual Flash Resource, which can be consumed and managed in the same way CPU and memory are. So how does it work?

The vSphere Flash Read Cache infrastructure becomes the resource manager and broker for the consumption of Virtual Flash Resources, and it also enforces admission control policies. The flash resource is broken into two different pieces: Virtual Flash Host Swap Cache for the VMware vSphere hypervisor, and Virtual Flash Read Cache for virtual machines.

The first piece is used as one of the memory reclamation techniques; it replaces the previous tool, swap to SSD. The hypervisor can use up to 4TB for Swap Cache.

Virtual Flash Read Cache software is natively built into the hypervisor. It provides a mechanism for VMs to use SSD directly to speed up the read portion of their operations, without having to modify anything inside the VM. The amount of cache space is assigned on a per-VMDK basis and is only consumed when the machine is powered on. vSphere uses a file system called VFFS, or VMware Flash File System.

Create/Remove a Resource Pool

In order to create a Resource Pool, we will need to:

  1. Navigate to the parent object where we want to place the resource pool.
  2. Right-click on the object and select New Resource Pool
  3. Now you will need to assign it a name; you can also specify how to divvy up the CPU or memory resources of the parent
  4. Here you see the finished resource pool in its native habitat

It kind of looks like a pie chart, signifying cutting out a piece of the resources for you. To remove the pool, just right-click on it and delete.
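
For the scripting-inclined, a quick hedged PowerCLI sketch of the same create/remove operations; the cluster and pool names are placeholders.

  # Create a child pool under an existing cluster (or host)
  $parent = Get-Cluster -Name "Lab-Cluster"
  New-ResourcePool -Location $parent -Name "Payroll" -CpuSharesLevel High -MemSharesLevel Normal

  # Remove it again
  Get-ResourcePool -Name "Payroll" | Remove-ResourcePool -Confirm:$false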

Configure Resource Pool attributes

Now I could be really lazy and say that to configure the resource pool’s attributes, you just right-click and click Edit. But I won’t do that to you. Yes, that is the way to do it, but you should know what all those settings are before you start meddling with them. So here is an explanation of what you will find on that screen:

Shares: On the surface this looks simple enough, right? Shares should equal how many shares of the resource you get. But it goes a little deeper: the value depends on the number of shares owned by the parent. If you are inside another resource pool, you get that many shares of the parent’s shares, i.e. a fraction of a fraction. The other thing to keep in mind is that shares only come into play when there is contention for that resource; as long as everyone has enough, shares don’t matter. You can specify Low, Normal, High, or Custom. For CPU, Low = 2000 shares, Normal = 4000, and High = 8000. The absolute number doesn’t really matter, as what counts is each pool’s portion of the overall shares assigned. Custom allows you to specify any number you want.
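
A quick worked example with made-up numbers, since the fraction is what matters: say two sibling pools are fighting over the parent’s CPU, one set to Normal and one set to High.

  $normalPool = 4000   # Normal shares
  $highPool   = 8000   # High shares
  $total = $normalPool + $highPool
  [math]::Round($normalPool / $total * 100)   # 33 (% of the parent's CPU under contention)
  [math]::Round($highPool / $total * 100)     # 67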

Reservation: This specifies a guaranteed CPU or memory allocation for the resource pool. The interesting thing about this is that the reservation remains in effect regardless of whether there is a VM inside the pool.

Expandable Reservation: As mentioned before, if this box is checked, it allows a VM inside the resource pool to borrow resources from the parent pool (if available).

Limit: You can use this to specify an upper limit for this type of resource. Use it sparingly, as it can prevent you from starting machines, or worse, if used carelessly.

Add/Remove virtual machines from a Resource Pool

You can move a VM into or out of a resource pool a couple of different ways. The resource management guide from VMware has you right-click on the VM and migrate it, but you can also just drag it in or out, which to me is the easier way. When you do that, be mindful that any shares you have assigned to the VM will change in relative weight according to the overall number of shares already in the resource pool. Also be mindful of any reservations you have set: if the resource pool can’t support the reservation, the move will fail. Likewise, moving a machine out of a resource pool will once again redistribute the weight of the shares; since there is a smaller number of overall shares, each one will be worth more.

Create/Delete vFlash Resource Pool

In order to create a vFlash Resource Pool, navigate to a host with SSDs, click Manage > Settings > Virtual Flash, and then click Add Capacity. To remove it, you would click Remove All.


Assign vFlash resources to VMDKs

To assign vFlash resources to a VMDK you will need to have some to assign (obviously), but then you would go to the VM and edit its settings. Then click on the hard disk with the VMDK you wish to add the flash resource to.

When you expand it out, you can see Virtual Flash Read Cache with a field specifying how much space you wish to assign to it.

Determine Resource Pool requirements for a given vSphere implementation
Evaluate appropriate shares, reservations and limits for a Resource Pool based on virtual machine workloads

So the above two points are going to be really based on a lot of factors. You should first keep the goals of Resource Management in mind.

  • Performance Isolation: prevent virtual machines from monopolizing resources and guarantee predictable service rates.
  • Efficient Utilization: exploit undercommitted resources and overcommit with graceful degradation.
  • Easy Administration: control the relative importance of virtual machines, provide flexible dynamic partitioning, and meet absolute service-level agreements.

The next thing to remember is that there is overhead associated with your VM. Sure, you may have given that VM 4GB of RAM, but it is consuming more than that, because VMware needs RAM to manage it. How do you figure out how much you actually need? Go to the VM and then click over to Monitor > Utilization. When you get there, you will see a bunch of line graphs and numbers. What you are looking for, for both memory and CPU, is Worst Case Allocation. This is the absolute worst-case scenario that you would need to prepare for.

In my example here, the CPU worst case is 3.54GHz. This is because I have allocated 2 vCPUs to the machine, both cores run at 1.6GHz, and then you add in overhead. With RAM, I am looking at 4.08GB as my worst case: the 4GB I have allocated to this box plus the overhead needed to manage it. You can also work with the Guest Memory view to figure out how much memory your workload/app is actually using. Keep these numbers in mind when sizing and working with your resource pools.

Next up... Objective 6.1, where we talk about backups, restores, and replication.

Objective 4.2: Perform vCenter Server Upgrades

To wrap up upgrade processes and things, we are going to go over vCenter Upgrades. The following points will be covered:

  • Identify steps required to upgrade a vSphere implementation
  • Identify upgrade requirements for vCenter
  • Upgrade vCenter Server Appliance (VCA)
  • Identify the methods of upgrading vCenter
  • Identify/troubleshoot vCenter upgrade errors

Identify steps required to upgrade a vSphere implementation

There are many things to think about for your vCenter and vSphere architecture, especially now that the roles have been split into the Platform Services Controller and the vCenter Server role. You have the option of an Embedded installation, which has all the roles installed on one server, or an External installation, which separates the roles. There are advantages and disadvantages to each. Namely:

Embedded:

Advantages

  1. Connection between the vCenter and the PSC (Platform Services Controller) is not over the network and is not subject to issues associated with DNS and connectivity
  2. Licensing is cheaper (if installed on Windows machines)
  3. Fewer Machines to keep track of and manage
  4. You don’t need to think about distributing loads with a load balancer across Platform Services Controllers

Disadvantages

  1. There is a Platform Services Controller for each product – This consumes more resources
  2. The model is suitable for small-scale environments

vCenter with External Platform Services Controller:

Advantages

  1. Fewer resources consumed by the combined services in the Platform Services Controller, reducing the footprint and maintenance
  2. Your environment can consist of more vCenter Server instances

Disadvantages

  1. The connection between the vCenter server(s) and the Platform Services Controller is over the network and is subject to any issues with connectivity or DNS
  2. You need more Windows licenses (if using Windows)
  3. You must manage more machines, virtual or physical, causing more work for you, the admin

The actual steps for the upgrade process are as follows

  1. Read the vSphere release notes… This should go without saying. There are a lot of services going on in the background, and you don’t want to hurt your current setup (which brings us to step 3: back up your configuration)
  2. Verify that your system meets the vSphere hardware and software requirements
  3. Backup your current configuration including your DB
  4. If your vSphere system includes VMware solutions and/or plug-ins, verify they will work with the version you are upgrading to. Think about all of them. It is a bad day if you upgrade and then realize your backup software won’t work with the new version.
  5. Upgrade vCenter Server

Concurrent upgrades are not supported and upgrade order matters. You will need to give this due consideration if you have multiple vCenters or services that are not installed on the same physical or virtual server.

Identify upgrade requirements for vCenter

The upgrade requirements will in part depend on your current setup. Do you have the Windows version or the appliance? Do you have the full SQL Server or Express? And so on. Documentation will be your best friend here, but we are going to go over the highlights.

For Windows Server PreReqs:

  • Synchronize the clocks on the machines running the vCenter Server 5.x services
  • Verify the DNS name of the machines running vCenter are valid and accessible from the other machines
  • Verify that if the user you are using to run the vCenter services is an account other than a Local System account, it has the following permissions: 1) member of the Administrators group, 2) Log on as a service, and 3) Act as part of the operating system
  • Verify the connection between the vCenter and the Domain Controller

When you run the installer it will perform the following checks on its own

  • Windows Version
  • Minimum Processor Requirements
  • Minimum Memory Requirement
  • Minimum Disk Requirements
  • Permissions on the selected install and data directory
  • Internal and External Port availability
  • External Database version
  • External Database connectivity
  • Administrator privileges on the Windows System
  • Any credentials you enter
  • vCenter 5.x servers

The next thing you will need to think about is disk space. Depending on which deployment model you are going with, the requirements change. An embedded installation will require about 17GB minimum. If you are using an external PSC, you will still need that 17GB on the vCenter machine, plus a 4GB minimum on each external PSC.

Hardware requirements again depend on the type of installation you require (based on size). A PSC will require 2 CPUs and 2GB of RAM regardless, since it scales out rather than up. The others are based on size:

  • Tiny (10 or fewer hosts and 100 or fewer VMs) = 2 CPUs and 8 GB of RAM
  • Small (up to 100 Hosts and 1000 VMs) = 4 CPUs and 16 GB RAM
  • Medium (up to 400 Hosts and 4000 VMs) = 8 CPUs and 24GB RAM
  • Large (up to 1000 Hosts and 10,000 VMs) =16 CPUs and 32 GB RAM

You will also need a 64-bit Windows OS to put this on; the earliest version that will work is Windows Server 2008 SP2. You will also need a 64-bit DSN to connect to your database.

Those are all the normal things you would consider when simply deploying the machine. What happens when you upgrade it, though? There is a decent amount going on behind the scenes: the database schema is upgraded, the old Single Sign-On is migrated to the new Platform Services Controller, and then the normal vCenter Server software itself is upgraded. Some of the steps depend on your current version.

  • For vCenter 5.0 you can choose to configure either an embedded or external PSC during the upgrade.
  • For vCenter 5.1 or 5.5 with all services deployed on a single machine, you can upgrade to a vCenter with an Embedded PSC.
  • For vCenter 5.1 or 5.5 with a separate SSO server, you will need to upgrade that to a PSC first
  • If you have multiple instances of vCenter installed, concurrent upgrades are not supported and order does matter.

The following information is a good checklist to have before upgrading, as the installer will ask you for these items.

Upgrade vCenter Server Appliance (VCA)

This is, in my opinion, a bit simpler than the Windows version. There are still a few gotchas you need to be mindful of, however. You need to be running at least vCenter 5.1 Update 3 or 5.5 Update 2 before you can upgrade to 6.0, so if you are not at one of those levels, you will need to update to a supported version first. Doing so is really simple: go to the IP or URL of the vCenter appliance on port 5480. When you log in, go to the Update tab and click Check Updates

Then go ahead and click on Install Updates – You are asked to confirm and after you click yes, it will start.

A reboot is required afterwards for the changes to take effect.

Now that you are at a level from which you can upgrade, you will need the VCSA install ISO and the Client Integration Plug-in installed on your computer. Then open up the ISO (or burn it to a CD) and run the vcsa-setup.html file

You want to do an upgrade – So go ahead and click on that.

You will next need to accept the EULA
Now you need to tell it the host you are going to deploy the appliance to

The rest of the setup is just as if you were deploying a new appliance (because you are), with the addition of one screen, where you tell it where the source appliance is, along with a user name and password for it, so that it can copy the configuration over.

Identify the methods of upgrading vCenter

As of right now, the only supported method for the appliance is the user-interface-based installer (the web page); this is covered in KB2109772.
As for the Windows version, you would use the regular installer, matching the deployment model you already have (embedded PSC or external).

Identify/troubleshoot vCenter upgrade errors

So as with most things, the best thing to do when things go wrong is to look at the logs. Any error messages on screen might be helpful as well. The logs you will want to look at are the installation logs. There are a couple of ways to get at them. If the install errored out before it fully finished, you can leave the Collect Logs checkbox selected on the final screen, and it will save them in a zip on your desktop. On a Windows server the logs are located in:

%PROGRAMDATA%\VMware\CIS\logs directory, usually C:\ProgramData\VMware\CIS\logs

%TEMP% directory, usually C:\Users\username\AppData\Local\Temp

You can open the files in the above locations in a text editor such as Notepad++ to look for clues. The appliance houses its log files in a slightly different location, since the machine is Linux. First you need access to the appliance, either via SSH or through direct console access (like the console in the Windows Client). Either way, once you get access, log in and get to a command prompt. If you are not already at a shell, run pi shell to get to the Bash prompt. Then run the vc-support.sh script to generate a support bundle. You can then export it from the /var/tmp folder to your desktop, or you can cat or vi the firstbootStatus.json file to see which services failed.
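
On the Windows side, a quick hedged trick for hunting through those logs is PowerShell’s Select-String (its grep). The patterns here are just examples.

  Get-ChildItem "$env:PROGRAMDATA\VMware\CIS\logs" -Recurse -File |
      Select-String -Pattern "error", "failed" |
      Select-Object Filename, LineNumber, Line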

You can also grab logs from the ESXi host by running the vm-support command in the ESXi Shell or over SSH, or you can connect via the Windows Client and export logs from there. There are a lot of possible errors; you can go over a few of them in the vSphere Upgrade Guide.

Next up… Resource Pools.

Objective 4.1: Perform ESXi Host and Virtual Machine upgrades

Here we are again, starting Objective 4.1. The following points will be covered:

  • Identify upgrade requirements for ESXi hosts
  • Upgrade a vSphere Distributed Switch
  • Upgrade Virtual Machine hardware
  • Upgrade an ESXi host using vCenter Update Manager
  • Stage multiple ESXi host upgrades
  • Determine whether an in-place upgrade is appropriate in a given upgrade scenario

So to begin with, we should go over a few things before performing an upgrade. Your infrastructure is, I am guessing, rather important to you and your company’s livelihood, so we need to take a measured approach. We can’t just stampede into this without giving it an appropriate amount of thought and planning. There is an order to which components get upgraded first, and there are a number of ways to do it. And for the love of God, make sure your hardware is on the Hardware Compatibility List before you begin. I just had a case this week where a customer upgraded to 6 and now needs to downgrade, because their server was not on the HCL and they couldn’t get support on it. The PDFs lay out a pretty good approach to the upgrade process:

  1. Read the Release Notes
  2. Verify ALL your equipment you are going to use or need to use, is on the HCL
  3. Make sure you have a good backup of your VMs as well as your configuration
  4. Make sure the plug-ins or other solutions you are using are compatible with vSphere 6
  5. Upgrade vCenter Server
  6. Upgrade Update Manager
  7. Upgrade Hosts
  8. You can actually stop here, but if you go on, you could upgrade the hardware version on your VMs and any appliances

So now we will look directly at upgrading the ESXi hosts. I am assuming you have gone through the above. In addition, make sure there is sufficient disk space for the upgrade. And if there is a SAN connected to the host, for safety’s sake it might be best to detach it before performing the upgrade, so you don’t make the mistake of choosing the wrong datastore to overwrite and create a really bad day. If you haven’t already, move off any remaining VMs or shut them down. When the system is done rolling through the upgrade, apply your licenses. If the upgrade wasn’t successful and you backed up the host, you can restore it; otherwise you can reload it with the new version.

You can upgrade an ESXi 5.x host directly to 6.0 a couple of different ways: via Update Manager, an interactive upgrade, a scripted upgrade, Auto Deploy, or the esxcli command line. A host can also have third-party VIBs (vSphere Installation Bundles) installed; these could be driver packages or enhancement packs such as Dell’s OpenManage plug-in. Occasionally you can run into a problem upgrading a host with these installed. At that point you can do a couple of things: remove the VIB and retry, or create a custom installer ISO.

Upgrade a vSphere Distributed Switch

This is a relatively painless process. You can upgrade from 4.1 all the way to 6.0 if you so choose, but you need to make sure your hosts support the target version. If even one host attached to the distributed switch is at a lower level, that is the level you will need to keep the distributed switch at. For example, if you have all 6.0 hosts except for one 5.5 host, you will either need to make your distributed switch a 5.5 version or remove that host from the vDS. One other thing to be mindful of: you can’t downgrade.

To upgrade, navigate to your networking and then to the distributed switch you wish to upgrade

Now you need to click on upgrade

That will open this dialog

This will show you the versions you can upgrade the switch to. After you click Next, it will check the version against the hosts attached to the vDS and let you know if any hosts are not able to be upgraded to that version.
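
You can also do this from PowerCLI; a hedged one-liner sketch (the switch name is a placeholder, and check Get-Help Set-VDSwitch in your PowerCLI build for the supported version strings):

  Get-VDSwitch -Name "dvSwitch-Prod" | Set-VDSwitch -Version "6.0.0"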

Upgrade Virtual Machine hardware

In order to upgrade your virtual machine hardware, you can right-click on the VM you need to upgrade, click Compatibility, and then choose either Upgrade VM Compatibility or Schedule VM Upgrade, as seen here:

This is irreversible and will make the VM incompatible with earlier versions of ESXi. The next screen will ask what version you want to upgrade to.

This will then upgrade it at the time you scheduled.
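
A hedged PowerCLI equivalent, for reference; the VM name is a placeholder, the VM must be powered off, and remember this is irreversible:

  Set-VM -VM (Get-VM "TestVM") -Version v11 -Confirm:$false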

Upgrade an ESXi host using vCenter Update Manager

To upgrade a host to vSphere 6, you will need to follow this procedure:

  1. Configure Host Maintenance Mode Settings – Host updates might require you to reboot the host and enter maintenance mode before they can be applied. Update Manager will do this, but you will need to configure what to do with the VMs and the host if it fails to enter maintenance mode
  2. Configure Cluster Settings – The remediation can happen in sequence or in parallel. Temporarily disable DPM, HA Admission Control, and Fault Tolerance to make sure your remediation is successful
  3. Enable Remediation of PXE booted ESXi hosts (if you have them)
  4. Import Host Upgrade Images and create Host Upgrade Baselines
  5. Create a Host Baseline Group – Create a baseline group with the 6 image that you want to apply
  6. Attach Baselines and Baseline groups to Objects – You will need to attach the baseline in Update Manager to the objects you want to upgrade
  7. Manually Initiate a Scan of the ESXi hosts – You will need to do this for Update Manager to pay attention to these hosts
  8. View Compliance Information for vSphere objects – Make sure the baseline that you want to apply is correct for the hosts
  9. Remediate Hosts Against an Upgrade Baseline / Groups – NOW the fun starts, this is where Update Manager starts to apply the patches and upgrades to the ESXi hosts.

Stage multiple ESXi host upgrades

In order to stage patches or upgrades, the process is relatively the same as what we just went through. The difference is that you will have multiple hosts attached to the baseline, and instead of remediating you will just be staging. Staging loads the patches or upgrades onto the hosts without actually rebooting or applying them yet. This lets you decide when the best time is to take executive action against them, possibly on the weekend or some other designated time. The actual procedure is lifted from the guide and transplanted here (a PowerCLI sketch follows the procedure):

Procedure

1 Connect the vSphere Client to a vCenter Server system with which Update Manager is registered and select Home > Inventory > Hosts and Clusters in the navigation bar.
2 Right click a datacenter, cluster, or host, and select Stage Patches.
3 On the Baseline Selection page of the Stage wizard, select the patch and extension baselines to stage.
4 Select the hosts where patches and extensions will be applied and click Next.

If you select to stage patches and extensions to a single host, it is selected by default.

5 (Optional) Deselect the patches and extensions to exclude from the stage operation.
6 (Optional) To search within the list of patches and extensions, enter text in the text box in the upper-right corner.
7 Click Next.
8 Review the Ready to Complete page and click Finish.
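
Here is the PowerCLI version of that flow, as a hedged sketch. It requires the Update Manager PowerCLI snap-in, and the cluster and baseline names are placeholders.

  $cluster  = Get-Cluster -Name "Lab-Cluster"
  $baseline = Get-Baseline -Name "Critical Host Patches"

  # Attach the baseline, scan the hosts, then stage (but do not apply) the patches
  Attach-Baseline -Baseline $baseline -Entity $cluster
  Scan-Inventory -Entity $cluster
  Stage-Patch -Entity $cluster -Baseline $baseline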

Determine whether an in-place upgrade is appropriate in a given upgrade scenario

This question can encompass a number of things. The hardware requirements aren’t extremely different from ESXi 5.5 to 6. You will need to take into account whether you are going to use the same boot type, whether you are already using something on 5.5 that isn’t yet compatible with 6, or whether you are more interested in replacing machines entirely because your current ones are long in the tooth (old). All these questions and more will have to be considered by you and the members of your team in order to decide between an in-place upgrade, migrating to new systems, or installing fresh over the top of the current ones. There are valid reasons for all of them, and it all depends on your environment and your vision for it.

This one was the longest to get out so far. Lots of things going on in personal life. I hope to get back to a normal blogging schedule really soon.

-Mike

Objective 3.5 Setup and Configure Storage I/O Control

Moving on to our last sub-point in the Storage Objectives, we are going to cover Storage I/O Control. We will cover the following:

  • Enable/Disable Storage I/O Control
  • Configure/Manage Storage I/O Control
  • Monitor Storage I/O Control

Enable / Disable Storage I/O Control

This is relatively easy to do. Click on the datastore you want to modify and then click Manage > Settings > General. Under Datastore Capabilities, click Edit, and then check or uncheck Enable Storage I/O Control.

Configure/Manage Storage I/O Control

The same place is where you configure it. As you can see above, you can change the congestion threshold percentage or set a manual latency threshold.
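
The same two settings are exposed in PowerCLI through Set-Datastore, so here is a hedged sketch (the datastore name is a placeholder):

  Get-Datastore -Name "Datastore01" |
      Set-Datastore -StorageIOControlEnabled:$true -CongestionThresholdMillisecond 30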

Monitor Storage I/O Control

You can do this on a per-datastore basis by clicking on the datastore, then Monitor, and then Performance. You can monitor the datastore’s space or performance. If you click Performance, you are treated to a lot of graphs detailing everything from latency to IOPS. And that is how you can monitor it. Here is a bonus picture.


And that concludes the Storage Section. Up Next is Virtual Machine Management! So get ready for some fun!

Objective 3.4 Perform Advanced VMFS and NFS Configurations and Upgrades

Continuing along our Storage Objectives, we now are going to cover VMFS and NFS datastores and our mastery of them. We will cover in this objective:

  • Identify VMFS and NFS Datastore properties
  • Identify VMFS5 capabilities
  • Create/Rename/Delete/Unmount a VMFS Datastore
  • Mount/Unmount an NFS Datastore
  • Extend/Expand VMFS Datastores
  • Place a VMFS Datastore in Maintenance Mode
  • Identify available Raw Device Mapping (RDM) solutions
  • Select the Preferred Path for a VMFS Datastore
  • Enable/Disable vStorage API for Array Integration (VAAI)
  • Disable a path to a VMFS Datastore
  • Determine use case for multiple VMFS/NFS Datastores

Time to jump in.

Identify VMFS and NFS Datastore Properties, and Capabilities

Datastores are containers we create in VMware to hold our files for us. They can be used for many different purposes, including storing virtual machines, ISO images, floppy images, and so on. The main difference between NFS and VMFS datastores is their backing: the storage behind the datastore. With VMFS you are dealing with block-level storage, whereas with NFS you are dealing with a share from a NAS that already has a file system on it. Each has its own pros and cons, and specific abilities that can be used and worked with.

There have been a few different versions of VMFS released since inception: VMFS2, VMFS3, and VMFS5. Note, though, that as of ESXi 5 you can no longer read or write VMFS2 volumes, and you can’t create VMFS3 in ESXi 6, though you can still read and write to it.

VMFS5 provides many enhancements over its predecessors. Among them include the following:

  • Greater than 2TB storage devices for each extent
  • Support of virtual machines with large capacity disks larger than 2TB
  • Increased resource limits such as file descriptors
  • Standard 1MB block size
  • Greater than 2TB disk size for RDM
  • Support of small files of 1KB
  • Scalability improvements on devices supporting hardware acceleration
  • Default use of ATS-only locking mechanism (previously SCSI reservations were used)
  • Ability to reclaim physical storage space on thin provisioned storage devices
  • Online upgrade process that allows you to upgrade to the latest version of VMFS5 without having to take the datastore offline

Datastores can be local or shared. They are made up of the actual files, directories, and so on, but they also contain mapping information for all these objects, called metadata. Metadata is frequently changed when certain operations take place.

In a shared storage environment, when multiple hosts access the same VMFS datastore, specific locking mechanisms are used. One of the biggest advantages of VMFS5 is ATS, or Atomic Test and Set, better known as hardware assisted locking. This supports discrete locking per disk sector, as opposed to normal Windows-style volume locking, where a single server locks the whole volume for its own use, preventing some of the cooler features VMFS allows for.

Occasionally you will have a datastore that still uses a combination of ATS and SCSI reservations. One of the issues with this is time: when metadata operations occur, the whole storage device is locked rather than just the disk sectors involved, and only when the operation has completed can other operations continue. As you can imagine, if enough of these occur you can start creating disk contention, and your VM performance might suffer.

You can use the CLI to show which locking mechanism a VMFS datastore is using. At a CLI prompt, type the following:

esxcli storage vmfs lockmode list

You can also target a specific server by adding --server=<servername>
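
You can run the same check from PowerCLI without SSHing to the host; Get-EsxCli mirrors the esxcli namespaces. The host name is a placeholder.

  $esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local")
  $esxcli.storage.vmfs.lockmode.list()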

You can view VMFS and NFS properties by doing the following:

  1. From the Home Screen, click on Storage
  2. On the left, choose the datastore (VMFS or NFS) you are interested in
  3. In the middle pane, click on Manage and then click on Settings. You will now see some variation of the following

This will show you a number of different properties that may be useful for you.

Now let’s cover NFS a bit. NFS is just a bit different from VMFS. Instead of directly accessing block storage, the host acts as an NFS client and accesses, over TCP/IP, an NFS volume exported by an NFS server. VMware now supports versions 3 and 4.1 of NFS. The ESXi hosts can mount and use the volume for their needs. Most features are supported on NFS volumes, including:

  • vMotion and Storage vMotion
  • High Availability and Distributed Resource Scheduling
  • Fault Tolerance and Host Profiles
  • ISO images
  • Virtual Machine snapshots
  • Virtual machines with large capacity disks larger than 2TB
  • Multi-pathing (4.1 only)

Create/Rename/Delete/Unmount a VMFS Datastore

There are a number of ways to do these things, here is one of them.

  1. While on the Storage tab in the navigation pane, right-click on the host or the cluster, then click Storage and then New Datastore

  2. You are presented with the above. A window will pop up notifying you of the location where the datastore will be created. Click Next and you will be presented with this window

  3. Click on VMFS and click Next
  4. Now you are asked to enter a name for your new datastore and choose the host that has the device accessible to it

  5. Click Next and it will show you partition information for that device, if there is any, and confirm that you want to wipe everything to replace it with a VMFS datastore. Click Next again and then Finish on the next screen

Renaming a datastore is as simple as right-clicking on the datastore you wish to rename and then clicking Rename. Deleting and unmounting work the same way. Beware that deleting will destroy the datastore and everything on it, while unmounting just makes it inaccessible.

Mount/Unmount a NFS Datastore

This is almost as easy as creating a VMFS datastore – there are just a few different steps. Follow the same first steps as before to create a new datastore. Where you chose VMFS before, though, there are two more options underneath: NFS and VVol.

Next, of course, you will need to fill out a few details. The first detail you will need to fill in is the version of NFS.

On the next window you put in the server (NAS) IP address, the exported share folder, and what you are going to call the datastore. You can also mount the NFS share as read-only. The next screen asks which hosts are going to have access to the share.

The last screen is just a summary. Click Finish and you are done.

Extend/Expand VMFS Datastores

There are two ways to make your datastore larger. You can expand the existing extent, or you can use another LUN (one not already used for a datastore) to join in and create a larger datastore. To do either one, navigate to the datastore you wish to increase, right-click on it, and click Increase Datastore Capacity. If you have a datastore that can be expanded, it will show up in the next screen; if not, the screen will remain blank. Depending on your layout and your previous selections, you will have the opportunity to use another LUN or to expand the existing one.

Place a VMFS Datastore in Maintenance Mode

Maintenance Mode is a really cool feature. You do have to have a datastore cluster in order to make it work, though; if you right-click on a normal datastore, the option to put it in maintenance mode is greyed out. Once you have created a datastore cluster and have the disks inside it, you can right-click on the datastore, click Maintenance Mode, and then click Enter Maintenance Mode.

Identify available Raw Device Mapping (RDM) Solutions

Raw Device Mapping provides a mechanism for a virtual machine to have direct access to a LUN on a physical storage system. The way this works is that when you create a RDM, it creates a mapping file that contains metadata for managing and redirecting disk access to the physical device. Lifting a picture from the official PDF to pictorially represent it.

There are a few situations where you might need a RDM.

  1. SAN Snapshots or other layered applications that use features inherent to the SAN
  2. In any MSCS clustering scenario that spans physical hosts.

There are two types of RDM, physical compatibility and virtual compatibility. Virtual Compatibility allows an RDM to act exactly like a virtual disk file, including the use of snapshots. Physical Compatibility mode allows for lower level access if needed.
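
For reference, you can also create the mapping file from the command line with vmkfstools; the device identifier and paths below are placeholders:

# Virtual compatibility mode RDM (-r) – acts like a virtual disk, snapshots work:
vmkfstools -r /vmfs/devices/disks/naa.<device-id> /vmfs/volumes/Datastore01/MyVM/MyVM_rdm.vmdk
# Physical compatibility mode RDM (-z) – lower-level access, no snapshots:
vmkfstools -z /vmfs/devices/disks/naa.<device-id> /vmfs/volumes/Datastore01/MyVM/MyVM_rdmp.vmdk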

Select the Preferred Path for a VMFS Datastore

This is relatively easy to do, but it can only be done under the Fixed path selection policy (VMW_PSP_FIXED). Click on the datastore you want to modify underneath the navigation pane. Then click Manage and Settings and then Connectivity and Multi-pathing.

Then click on Edit Multi-pathing.

Now you can choose your Preferred Path.

Enable/Disable vStorage API for Array Integration (VAAI)

VAAI, or hardware acceleration, is enabled by default on your host. If for some reason you want to disable it, browse to your host in the Navigator, then click on Manage, then Settings, and under System click on Advanced System Settings. Change the value of any of the three following settings to 0 (a command-line sketch follows the list):

  • VMFS3.HardwareAcceleratedLocking
  • DataMover.HardwareAcceleratedMove
  • DataMover.HardwareAcceleratedInit
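
The same switches can be flipped from the ESXi Shell with esxcli; note that the option names use slashes there instead of dots. A sketch:

# Check the current value (1 = enabled, 0 = disabled):
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove
# Disable it (set the value back to 1 to re-enable):
esxcli system settings advanced set --option /DataMover/HardwareAcceleratedMove --int-value 0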

Disable a path to a VMFS Datastore

To disable a path to a datastore, navigate to the datastore you are interested in again, and then click on Manage, then Settings, then Connectivity and Multi-pathing. Scroll down under the Multipathing Details and you will see Paths. Click on the path you want to disable and then click Disable.

Determine Use Cases for Multiple VMFS/NFS Datastores

There are a number of reasons to have more than one LUN. Most SAN arrays adjust queues and caching on a per-LUN basis, so having too many VMs on a single LUN could overload IO to those disks. Also, when you are creating HA clusters, vSphere typically wants at least two datastores to maintain heartbeats to. All of these are valid reasons for creating more than a single LUN.

And this is me signing off again, till the next time.

Objective 3.3 Configure Storage Multi-pathing and Failover

Once again we return to cover another objective. It has been hectic lately, with delivering my first ICM 6 class and also delivering a vBrownBag the same week on one of the future objectives. Now we are back though and trying to get in the swing of things again. Here are the sub-points we will be covering this time:

  • Configure/Manage Storage Load Balancing
  • Identify available Storage Load Balancing options
  • Identify available Storage Multi-pathing Policies
  • Identify features of Pluggable Storage Architecture (PSA)
  • Configure Storage Policies
  • Enable/Disable Virtual SAN Fault Domains

Without further ado, let’s dig in. Storage multipathing is a cool feature that allows you to load balance IO and also allows for path failover in the event of a failure. Storage plays a rather important part in our virtualization world, so it stands to reason that we would want to make sure it is as fast and as reliable as possible. We have three multipathing options available by default, but we have the ability to add more depending on the storage devices in our environment. For example, EqualLogic adds a new multipathing PSP when you are using their “MEM” kit. The default policies are as follows:

  • VMW_PSP_FIXED
  • VMW_PSP_MRU
  • VMW_PSP_RR

By defining which of these we want to use, we choose how we load balance and fail over paths. Of course, we should probably get a better understanding of what they do in order to make the best choice.

  • Fixed uses the designated preferred path if one is configured. Otherwise, it selects the first working path discovered at system boot time. This is the default policy for most active-active SANs. Also, if you designate a preferred path and it becomes unavailable, the host fails over to an alternative but reverts back once the preferred path is available again.
  • Most Recently Used selects the path it used most recently. If that path becomes unavailable, it chooses an alternative and does not revert when the original comes back. MRU is the default policy for active-passive arrays.
  • Round Robin uses an automatic path selection algorithm, rotating through all active paths when connecting to active-passive arrays or all available paths when connecting to active-active arrays. RR is the default for a number of arrays.

How do we configure and manage these? We will need to do the following

  1. Browse to the host in the navigator
  2. Click on the Manage tab and then Storage
  3. Click on Storage Devices or Protocol Endpoints
  4. Click on the device you want to manage
  5. Under the Properties tab, scroll down to Edit Multipathing and click it
  6. Choose the multipathing type you want and click OK



And that is how we configure it.
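
If you would rather do the same thing from the ESXi Shell, something like the following should work (the naa identifier is a placeholder):

# Show each device along with the SATP and PSP it is currently using:
esxcli storage nmp device list
# Switch a single device over to Round Robin:
esxcli storage nmp device set --device naa.<device-id> --psp VMW_PSP_RR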

Moving on now to the features of the PSA, or Pluggable Storage Architecture. To manage storage multipathing, ESXi uses a collection of Storage APIs, also known as the Pluggable Storage Architecture. It consists of the following pieces:

  • NMP or Native Multi-Pathing Plug-In. This is the generic VMware multipathing module
  • PSP or Path Selection Plug-In (the Path Selection Policy). This is how VMware decides which path to use for a given device
  • SATP or Storage Array Type Plug-In. This is how VMware handles path failover for a given array

Using the Storage APIs, as mentioned before, other companies can introduce their own pathing policies. Here is a good picture of how everything aligns:

Storage Array Type Plug-ins, or SATPs, run in conjunction with the VMware NMP and are responsible for array-specific operations. ESXi offers an SATP for every type of array it supports, and it also provides default SATPs for active-active, active-passive, and ALUA arrays. These are used to monitor the health of each path, report changes in them, and take the necessary actions to fail over when something goes wrong.
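
From the ESXi Shell you can see which SATPs are present, along with the default PSP each one hands out (the SATP name in the second command is just an example):

# List the SATPs loaded on the host and their default PSPs:
esxcli storage nmp satp list
# Change the default PSP a given SATP assigns to newly claimed devices:
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR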

Storage Policies are next on the agenda. Storage Policies are a mechanism that allow you to specify storage requirements on a per VM basis. If you are using VSAN or Virtual Volumes, then this policy can also determine how the machine is provisioned and allocated within the storage resource to guarantee the required level of service.

In vSphere 5.0 and 5.1, storage policies existed as storage profiles; they had a different format and were not quite as useful. If you used them previously, they are upgraded to the new storage policy format when you upgrade to 6.0.

There are two types of storage policies. You can base them on abilities (storage-specific data services), or you can use reference tags to group them. Let’s cover both of them in a little more depth.

Rules based on Storage-Specific Data Services

These rules are based on data services that storage entities such as Virtual SAN and Virtual Volumes advertise. To supply this information, these products use storage providers called VASA providers. This will surface the possible capabilities that are available to VMWare for you to put in your Storage Policy. Some examples of this include capacity, performance, availability, redundancy and so on.

Rules based on Tags

Rules based on tags reference datastore tags that you associate with specific datastores. Just like tags on other objects, you as an administrator can apply tags to a datastore, and you can apply more than one tag to a single datastore. Once you apply a tag to a datastore, it shows up in the Storage Policies interface, which you can then use to define the rules for the storage policy.

So how do we use these? There are a number of steps we need to perform to enable these and apply them. The very first thing we need to do is to enable Storage Policies on a host or Cluster. To do that, perform the following steps:

  1. In the web client, click on Policies and Profiles and then VM Storage Policies
  2. Click on the Enable Storage Policies icon (looks like a scroll with a check mark)
  3. Select the vCenter instance, and all the clusters and hosts that are available will appear
  4. Choose a host or cluster and then click on Enable

Now you can define your VM Storage policy. For the first one we will work on the Tag based policy.

  1. Browse to a datastore and then click on Manage and then Tags
  2. Click on the New Tag icon
  3. Enter the name of the tag and a description. Under Category, choose New Category
  4. Under Category Name, type in the name you desire and also what type of object you will associate it with
  5. When you are done creating it, you will need to assign it to a datastore – this is the tag icon with the green arrow pointing to the right
  6. Your tag should show up here. Highlight it and click Assign
  7. You should now see your tag show up

Now you can create a storage policy based on this tag. You do that by navigating to the same place where you enabled the policies.

  1. Click on Policies and Profiles and then VM Storage Policies
  2. Click the Create a New VM Storage Policy

  3. Click Next twice and you will have Rule Set 1, where you have the ability to create rules based on data services or based on tags. Choose the one based on tags

  4. Under Category, choose the one you had previously created, and then the tag that you have created.

  5. You can add more rules if you have them but if not click on next

  6. When you click on Next you are now shown the datastores that are compatible (because you have associated the tag with them)

  7. A summary appears and then you can click on Finish

The next thing to do to make this active is to apply it to a VM. You can do this when you create the VM or afterwards. If you are applying it afterwards, go to the Settings of the VM and click the little arrow in front of the hard disk, then choose the storage policy you want from the drop-down box.

Now you have achieved your goal. You can go through the same steps with either policy.

The last thing we need to go over is enabling and disabling Fault Domains for Virtual SAN. I don’t have a VSAN setup (if anyone wants to contribute toward my home lab fund, let me know.. :) ). But if you did, you would enable them underneath the settings for the cluster. Go to Manage and then the Virtual SAN section; underneath that subcategory you will find Fault Domains. This is where you create, enable, and disable fault domains.

And thus concludes another objective. Next up, VMFS and NFS datastores. Objective 3.4


Objective 3.2: Configure Software Defined Storage

Back again!! This time we are going to go over the relatively new VSAN. VMware Virtual SAN originally came out in 5.5 U1, but it has been radically overhauled for 6.0 (a massive jump from VSAN 1.0 to 6 :) ). So what are we going to go over and need to know for the exam?

  • Configure/Manage VMware Virtual SAN
  • Create/Modify VMware Virtual Volumes (VVOLs)
  • Configure Storage Policies
  • Enable/Disable Virtual SAN Fault Domains

Now this is not going to be an exhaustive guide to VSAN and its use, abilities, and administration. Cormac Hogan and Rawlinson Rivera already do that so well there is no point; I have Cormac’s blog linked to the right, and he has more info than you can probably process. So we will concern ourselves with a high-level overview of the product and the objectives.

Here comes my 50-mile-high overview. VSAN is Software Defined Storage. What does this mean? While you still have physical drives and controllers, you are pooling them together and creating logical containers (virtual disks) through software and the VMkernel. You can set up VSAN as a hybrid or all-flash cluster. In the hybrid approach, magnetic media is used as the storage tier and flash is the cache. In all-flash, the flash disks are used for both jobs.

When you setup the cluster, you can do it on a new cluster or you can add the feature to an existing cluster. When you do that it will take the disks and aggregate them into a single datastore available to all hosts in the VSAN cluster. You can later expand this by adding more disks or additional hosts with disks to the cluster. The cluster will run much better if all the hosts in the cluster are as close as possible to each other, just like your regular cluster. You can have machines that are just compute resources and have no local datastore or disk groups and still be able to use the VSAN datastore.

In order for a host to contribute its disks it has to have at least one SSD and one spindle disk. Those disks form what is known as a disk group. You can have more than one disk group per machine, but each one needs at least the above combination.

Virtual SAN manages data in the form of flexible data containers called objects; VSAN is known as object-based storage. An object is a logical volume that has its data and metadata distributed across the cluster. There are the following types of objects:

  • VM Home Namespace = this is where all the configuration files are stored, such as the .vmx, log files, vmdk descriptor files, snapshot delta description files, and so on
  • VMDK = the .vmdk stores the contents of the virtual machine’s hard disk
  • VM Swap Object = this is created when the VM powers on, just like normal
  • Snapshot Delta VMDKs = created when snapshots are taken of the VM; each delta is an object
  • Memory Object = created when the memory option is selected while taking a snapshot of the VM

Along with the objects, you have metadata that VSAN uses called a witness. This is a component that serves as a tiebreaker when a decision needs to be made regarding the availability of surviving datastore components after a potential failure. There may be more than one witness depending on your policy for the VM. Fortunately, witnesses don’t take up much space – approximately 2MB on the old VSAN 1.0 and 4MB for version 2.0/6.0.

So part of the larger overall picture is being able to apply policies granularly. You are able to specify on a per-VM basis how many copies of something you want, versus a RAID 1 setup where you have a blanket copy of everything regardless of its importance. SPBM (Storage Policy Based Management) allows you to define performance and availability in the form of this policy. VSAN ensures that you have a policy for every VM, whether it is the default or one specific to the VM. For best results you should create and use your own, even if the requirements are the same as the default.

So for those of us who used and read about VSAN 1.0, how does the new version differ? Quite a lot. This part is lifted from Cormac’s site (just the highlights):

  1. Scalability – Because vSphere 6.0 can now support 64 hosts in a cluster, so can VSAN
  2. Scalability – Now supports 62TB VMDK
  3. New on-disk format (v2) – This allows a lot more components per host to be supported. It leverages the Virsto file system
  4. Support for All-Flash configuration
  5. Performance Improvement using the new Disk File System
  6. Availability improvements – You can separate racks of machines into Fault Domains
  7. New Re-Balance mechanism – rebalances components across disks, disk groups, and hosts
  8. Allowed to create your own Default VM Storage Policy
  9. Disk Evacuation granularity – You can evacuate a single disk now instead of a whole disk group
  10. Witnesses are now smarter – They can exercise more than a single vote instead of needing multiple witnesses
  11. Ability to light LEDs on disks for identification
  12. Ability to mark disks as SSD via UI
  13. VSAN supports being deployed on Routed networks
  14. Support of external disk enclosures.

As you can see this is a huge list of improvements. Now that we have a small background and explanation of the feature, let’s dig into the bullet points.

Configure/Manage VMware Virtual SAN

So first, as mentioned before, there are a few requirements that need to be met in order for you to be able to create and configure VSAN.

  • Cache = You need one SAS or SATA SSD or PCIe Flash Device that is at least 10% of the total storage capacity. They can’t be formatted with VMFS or any other file system
  • Virtual Machine Data Storage = For Hybrid group configurations, make sure you have at least one NL-SAS, SAS, or SATA magnetic drive (sorry PATA owners). For All Flash disk groups, make sure you have at least one SAS, SATA, or PCIe Flash Device
  • Storage Controller = One SAS or SATA Host Bus Adapter that is configured in pass-through or RAID 0 mode.
  • Memory = this depends on the number of disk groups and devices managed by the hypervisor. Each host should contain a minimum of 32GB of RAM to accommodate the maximum of 5 disk groups and a maximum of 7 capacity devices per group
  • CPU = VSAN doesn’t take more than about 10% CPU overhead
  • If booting from an SD or USB device, the device needs to be at least 4GB
  • Hosts = You must have a minimum of 3 hosts in a cluster
  • Network = 1Gb networking for hybrid solutions, 10Gb for all-flash solutions. Multicast must be enabled on the physical switches – only IPv4 is supported at this time
  • Valid License for VSAN

Now that we’ve got all those pesky requirements out of the way, let’s get started on actually creating the VSAN. The first thing we will need to do is create a VMkernel port for it. There is a new option for traffic as of 5.5 U1, which is… Virtual SAN. You can see it here:

After you are done, it will show up as being enabled you can check by looking here:

Now that is done, you will need to enable the cluster for VSAN as well. This is done under the cluster settings or when you create the cluster to begin with.

You have the option to automatically add the disks to the VSAN cluster, or if you leave in manual you will need to add the disks yourself, and new devices are not added when they are installed. After you create it you can check the status of it on the Summary page.
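
If you want to sanity-check things from the ESXi Shell on a host, a couple of useful commands (vmk1 is just an example interface name):

# Tag a VMkernel interface for Virtual SAN traffic:
esxcli vsan network ipv4 add -i vmk1
# Show this host’s view of the VSAN cluster membership:
esxcli vsan cluster get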

You can also check on the individual disks and health and configure disk groups and Fault Domains under the Manage > Settings > Virtual SAN location.

Here is a shot from my EVO:RAIL with VSAN almost fully configured

The errors are because I don’t have the VLANs fully configured for them to communicate yet. There is a lot more we could work on with VSAN but I don’t have the blog space nor the time. So moving on….

Create/Modify VMware Virtual Volumes (VVOLs)

First a quick primer on VVols. What are these things called Virtual Volumes? Why do we want them when LUNs have served us well for so long? If you remember, one of the cool advantages of VSAN is the ability to assign policies on a per-VM basis. But VSAN is limited to only certain capabilities. What if we want more? In come VVols. Using VVols and a storage SAN that supports them, you can apply any capabilities that SAN has on a per-VM basis. Stealing from a VMware blog for the definition, “VVols offer per-VM management of storage that helps deliver a software defined datacenter”. So what does this all mean? In the past you have had SANs with certain capabilities, such as deduplication or a specific RAID type. You would need a really good naming system or a DB somewhere to track which LUN was which. Now, however, we can just set a specific set of rules for the VM in a policy and it will find the storage matching that set of rules for us. Pretty nifty, huh?

So how do you create and modify these things now? The easiest way is to create a new datastore just like you would a regular VMFS or NFS.

  1. Select vCenter Inventory Lists > Datastores
  2. Click on the create new datastore icon
  3. Click on Placement for the datastore
  4. Click on VVol as the type

  5. Now put in the name you wish to give it, and also choose the Storage Container that is going to back it. (kind of like a LUN – you would have needed to add a Protocol Endpoint and Storage Container before getting to this point)
  6. Select the Hosts that are going to have access to it
  7. Finish

Kind of working this backwards, but how do you configure them? You can do the following four things:

  1. Register the storage provider for VVols = using VASA, you configure communication between the SAN and vSphere. Without this communication, nothing in VVols will work.
  2. Create a virtual datastore = this is what lets you actually create the VVol datastore
  3. Review and manage protocol endpoints = these are the logical proxies used to communicate between the virtual volumes and the virtual disks they encapsulate. Protocol endpoints are exported, along with their associated storage containers, by the VASA provider.
  4. (Optional) If your host uses iSCSI-based transport to communicate with protocol endpoints representing a storage array, you can modify the default multipathing policy associated with them.

Configure Storage Policies

At the heart of all these changes is the storage policy. The storage policy is what enables all this wonderful magic to happen behind the scenes with you, the administrator, blissfully unaware. Let’s go ahead and define it as VMWare would like it defined: “A vSphere storage profile defines storage requirements for virtual machines and storage capabilities of storage providers. You use storage policies to manage the association between virtual machines and datastores.”

Where is it found? On the home page in your web client under… Policies and Profiles. Anticlimactic, I know. Here is a picture of that when you click on it:

This gives you a list of all the profiles and policies associated with your environment. We are currently interested only in the storage policies, so let us click on that. Depending on what products you have set up, yours might look a little different.

You can have Storage policies based off one of the following:

  • Rules based on Storage-Specific Data Services = these are based on data services that entities such as VSAN and VVols can provide, for example deduplication
  • Rules based on Tags = these are tags you, as an administrator, associate with specific datastores. You can apply more than one per datastore

Now we dig in. First thing we are going to need to do is to make sure that storage policies are enabled for the resources we want to apply them to. We do that by clicking on the Enable button underneath storage policies

When enabled you will see the next screen look like this (with your own resource names in there of course)

We can go ahead and create a storage policy now and be able to apply it to our resources. When you click on Create New VM Storage Policy, you will be presented with this screen:

Go ahead and give it a name and optionally a description. On the next screen we will define the rules that are based on our capabilities

In this one I am creating one for a Thick provisioned LUN


Unfortunately none of my datastores are compatible. You can also configure based off of tags you associate on your datastores.

Enable/Disable Virtual SAN Fault Domains

This is going to be a quick one, as I am a bit tired of this post already :). In order to work with Fault Domains you will need to go to the VSAN cluster and then click on Manage and Settings. Next, on the left-hand side, you will see Fault Domains. Click on it. You now have the ability to segregate hosts into specific fault domains. Click on the add (+) icon to create a fault domain and then add the hosts you want to it. You will end up with a screen like this:

Onwards and Upwards to the next post!!


Objective 3.1: Manage vSphere Storage Virtualization

Wait, wait, wait…. Where did Objective 2.2 go? I know… I didn’t include it since I have already covered everything it asks in previous objectives. So moving on to storage.

So we are going to cover the following objective points.

  • Identify storage adapters and devices
  • Identify storage naming conventions
  • Identify hardware/dependent hardware/software iSCSI initiator requirements
  • Compare and contrast array thin provisioning and virtual disk thin provisioning
  • Describe zoning and LUN masking practices
  • Scan/Rescan storage
  • Configure FC/iSCSI LUNs as ESXi boot devices
  • Create an NFS share for use with vSphere
  • Enable/Configure/Disable vCenter Server storage filters
  • Configure/Edit hardware/dependent hardware initiators
  • Enable/Disable software iSCSI initiator
  • Configure/Edit software iSCSI initiator settings
  • Configure iSCSI port binding
  • Enable/Configure/Disable iSCSI CHAP
  • Determine use case for hardware/dependent hardware/software iSCSI initiator
  • Determine use case for and configure array thin provisioning

So let’s get started

Identify storage adapters and devices

Identifying storage adapters is easy. You have your own list to refer to: the Storage Adapters view. To navigate to it, do the following:

  1. Browse to the Host in the navigation pane
  2. Click on the Manage tab and click Storage
  3. Click Storage Adapters
    This is what you will see

As you can see, identification is relatively easy. Each adapter is assigned a vmhbaXX address. They are grouped under larger categories, and you are also given a description of the hardware, i.e. Broadcom iSCSI Adapter. You can find out a number of details about a device by looking down below and going through the tabs. As you can see, one of the tabs shows the devices under that particular controller. Which brings us to our next item: storage devices.

Storage Devices is one more selection down from Storage Adapters, so naturally you navigate to it the same way and just click on Storage Devices instead of Adapters. Now that we can see those, it’s time to move on to naming conventions to understand why they are named the way they are.

Identify Storage Naming Conventions

You are going to have multiple names for each type of storage and device. Depending on the type of storage, ESXi will use a different convention or method to name each device. The first type is SCSI inquiry identifiers. The host uses a SCSI INQUIRY command to query the device and uses the response to generate a unique name. These names are unique, persistent, and have one of the following formats:

  • naa.number = NAA stands for Network Address Authority; it is followed by a string of hex digits that identify the vendor, device, and LUN
  • t10.number = T10 is the technical committee responsible for SCSI storage interface standards (and plenty of other disk standards as well)
  • eui.number = stands for Extended Unique Identifier

You also have the path-based identifier. This is created if the device doesn’t provide the information needed to create the identifiers above. It will look like the following: mpx.vmhbaXX:C0:T0:L0 – and can be used just the same as the identifiers above. The C is for the channel, the T is for the target, and the L is for the LUN. It should be noted that this identifier type is neither unique nor persistent and could change on every reboot.

There is also a legacy identifier that is created. This is in the format vml.number

You can see these identifiers on the pages mentioned above and also at the command line by typing the following.

  • “esxcli storage core device list”
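
If you already know one of a device’s identifiers, you can narrow the output to just that device (the naa value here is a placeholder):

# Show the full identifier set and properties for a single device:
esxcli storage core device list -d naa.<device-id>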

Identify hardware/dependent hardware/software iSCSI initiator requirements

You can use three types of iSCSI initiators in VMware: independent hardware, dependent hardware, and software initiators. The differences are as follows:

  • Software iSCSI = this is code built into the VMkernel. It allows your host to connect to iSCSI targets without special hardware; you can use a standard NIC. This requires a VMkernel adapter and is able to use all CHAP levels
  • Dependent hardware iSCSI initiator = this device still depends on VMware for networking, and for the iSCSI configuration and management interfaces. This is basically iSCSI offloading; an example is the Broadcom 5709 NIC. This requires a VMkernel adapter (it shows up as both a NIC and a storage adapter) and is able to use all CHAP levels
  • Independent hardware iSCSI initiator = this device implements its own networking, iSCSI configuration, and management interfaces. An example is the QLogic QLA4052 adapter. This does not require a VMkernel adapter (it shows up as a storage controller). It only supports unidirectional CHAP, at the “use unless prohibited by target” and “do not use unless required by target” levels

Compare and contrast array thin provisioning and virtual disk thin provisioning

You have two types of thin provisioning, and the biggest difference between them is where the provisioning happens. The array can thinly provision the LUN. In this case it presents the total logical size to the ESXi host, which may be more than the real physical capacity. If so, there is no way for your ESXi host to know on its own that you are running out of space, and that obviously can be a problem. Because of this, Storage APIs – Array Integration was created. Using this feature with a SAN that supports it, your hosts are aware of the underlying storage and are able to tell how your LUNs are configured. The requirements for this are simply ESXi 5.0 or later and a SAN that supports the Storage APIs for VMware.

Virtual disk thin provisioning is the same concept, but done for the virtual hard disk of the virtual machine. You are creating a disk and telling the VM it has more space than it actually might. Because of this, you will need to monitor the status of that disk in case your VM operating system starts trying to use that space. A quick vmkfstools sketch follows.
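
As a command-line sketch (the paths are placeholders), vmkfstools lets you create a thin disk and later inflate it to full size if the over-commitment makes you nervous:

# Create a 10GB thin-provisioned virtual disk:
vmkfstools -c 10g -d thin /vmfs/volumes/Datastore01/MyVM/MyVM_1.vmdk
# Inflate a thin disk so its full size is allocated up front:
vmkfstools --inflatedisk /vmfs/volumes/Datastore01/MyVM/MyVM_1.vmdk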

Describe zoning and LUN masking practices

Zoning is a Fibre Channel concept meant to restrict which storage a server can see. A zone defines which HBAs (the cards in the server) can connect to which storage processors on the SAN. LUN masking, on the other hand, only allows certain hosts to see certain LUNs.

With ESXi hosts you want to use single initiator zoning or single initiator-single target zoning. The latter is preferred. This can help prevent misconfigurations and access problems.

Rescan Storage

This one is pretty simple. In order to pick up new storage or see changes on existing storage, you may want to rescan your storage. You can do two different operations from the GUI client: 1) scan for new storage devices, and 2) scan for new VMFS volumes. The first will take longer than the second. You can also rescan at the command line with the following commands:

esxcli storage core adapter rescan --all – this will rescan for new storage devices

vmkfstools -V – this will scan for new VMFS volumes
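
You can also limit the rescan to a single adapter if you know which one changed (vmhba33 is just an example name):

# Rescan one adapter instead of all of them:
esxcli storage core adapter rescan --adapter vmhba33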

I have included a picture of the webclient with the button circled in red that is to rescan.

Configure FC/iSCSI LUNs as ESXi boot devices

ESXi supports booting from Fibre Channel or FCoE LUNs as well as iSCSI. First we will go over Fibre Channel.

Why would you want to do this in the first place? There are a number of reasons. Among them, you remove the need to have storage inside the servers. This makes them cheaper and less prone to failure, as hard drives are the component most likely to fail. It also makes the servers easier to replace: one server could die, you drop a new one in its place, change the zoning, and away you go. You also access the boot volume through multiple paths, whereas if it were local you would generally have one cable to go through, and if that fails, you have no backup.

You do have to be aware of the requirements though. The biggest one is that you need a separate boot LUN for each server; you can’t use one for all servers. You also can’t multipath to an active-passive array. OK, so how do you do it? On an FC setup:

  1. Configure the array zoning and also create the LUNs and assign them to the proper servers
  2. Then using the FC card’s BIOS you will need to point the card to the LUN and add in any CHAP credentials needed
  3. Boot to install media and install to the LUN

On iSCSI you have a slightly different setup depending on what kind of initiator you are using. If you are using an independent hardware iSCSI initiator, you will need to go into the card’s BIOS to configure booting from the SAN. Otherwise, with a software or dependent initiator, you will need to use a network adapter that supports iBFT. Good recommendations from VMware include:

  1. Follow Storage Vendor Recommendations (yes I got a sensible chuckle out of that too)
  2. Use Static IPs to reduce the chances of DHCP conflicts
  3. Use different LUNs for VMFS and boot partitions
  4. Configure proper ACLs. Make sure the only machine able to see the boot LUN is that machine
  5. Configure a diagnostic partition – with independent you can set this up on the boot LUN. If iBFT, you cannot

Create an NFS share for use with vSphere

Back in 5.5 and before, you were restricted to using NFS v3. Starting with vSphere 6, you can now use NFS 4.1. VMware has some recommendations for you about this as well:

  1. Make sure the NFS servers you use are listed in the HCL
  2. Follow recommendations of your storage vendor
  3. You can export a share as v3 or v4.1 but you can’t do both
  4. Ensure it’s exported using NFS over TCP/IP
  5. Ensure you have root access to the volume
  6. If you are exporting a read-only share, make sure it is consistent. Export it as RO and make sure when you add it to the ESXi host, you add it as Read Only.

To create a share do the following:

  1. On each host that is going to access the storage, you will need to create a VMkernel Network port for NFS traffic
  2. If you are going to use Kerberos authentication, make sure your host is setup for it
  3. In the Web Client navigator, select vCenter Inventory Lists and then Datastores
  4. Click the Create a New Datastore icon
  5. Select Placement for the datastore
  6. Type the datastore name
  7. Select NFS as the datastore type
  8. Specify an NFS version (3 or 4.1)
  9. Type the server name or IP address and the mount point folder name (or multiple IPs if v4.1)
  10. Select Mount NFS read only – if you are exporting it that way
  11. You can select which hosts that will mount it
  12. Click Finish
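
For what it’s worth, you can mount an NFS v3 datastore from the ESXi Shell as well; the server address and export path below are examples:

# Mount an NFS v3 export as a datastore named ISO:
esxcli storage nfs add --host=192.168.1.50 --share=/export/iso --volume-name=ISO
# List the current NFS mounts (NFS 4.1 has its own esxcli storage nfs41 namespace):
esxcli storage nfs list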




Enable/Configure/Disable vCenter Server storage filters

When you look at your storage, and when you add more or perform similar operations, vCenter by default employs a set of storage filters. Why? Well, there are four filters. The filters, and the explanations for them, are as follows:

  1. config.vpxd.filter.vmfsFilter = this filters out LUNs that are already used by a VMFS datastore on any host managed by a vCenter server. This is to keep you from reformatting them by mistake
  2. config.vpxd.filter.rdmFilter = this filters out LUNs already referenced by a RDM on any host managed by a vCenter server. Again this is protection so that you don’t reformat by mistake
  3. config.vpxd.filter.SameHostAndTransportsFilter = this filters out LUNs ineligible for use as an extent due to storage type or host incompatibility. For instance, you can’t extend a Fibre Channel datastore with an iSCSI LUN
  4. config.vpxd.filter.hostRescanFilter = this prompts you to rescan anytime you perform datastore management operations. This tries to make sure you maintain a consistent view of your storage

And in order to turn them off, you will need to do it on the vCenter Server (makes sense, huh?). You will need to navigate to the vCenter object and then to the Manage tab.

You will then need to add these settings, since they are not there for you to change willy-nilly by default. So click on Edit, type in the appropriate filter name, and enter false for the value. Like so:

Configure/Edit hardware/dependent hardware initiators

A dependent hardware iSCSI adapter still uses VMware networking and the iSCSI configuration and management interfaces provided by VMware. This device presents two devices to VMware: a NIC and an iSCSI engine. The iSCSI engine shows up under Storage Adapters (vmhba). In order for it to work, though, you still need to create a VMkernel port for it and bind it to the physical network port. Here is a picture of how it looks underneath the storage adapters for a host:

There are a few things to be aware of while using a dependent initiator.

  1. When you are using the TCP offload engine, you may see little or no activity on the NIC associated with the adapter. This is because the host passes all the iSCSI traffic to the engine, bypassing the regular network stack
  2. If using the TCP offload engine, it has to reassemble the packets in hardware and there is a finite amount of buffer space. You should enable flow control in order to be able to better manage the traffic (pause frames anyone?)
  3. Dependent adapters will support IPv4 and IPv6

To setup and configure them you will need to do the following:

  1. You can change the alias or the IQN name if you want by going to the host and Manage > Storage > Storage Adapters, highlighting the adapter, and then clicking Edit
  2. I am assuming you have already created a VMKernel port by this point. The next thing to do would be to bind the card to the VMKernel
  3. You do this by clicking on the ISCSI adapter in the list and then clicking on Network Port Binding below
  4. Now click on the add icon to associate the NIC and it will give you this window
  5. Click on the VMkernel you created and click Ok
  6. Go back to the Targets section now and add your ISCSI target. Then rescan and voila.
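
The binding in steps 3 through 5 can also be done with esxcli; the adapter and VMkernel names here are examples:

# Bind VMkernel port vmk1 to the dependent iSCSI adapter:
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
# Verify the binding took:
esxcli iscsi networkportal list --adapter=vmhba33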

Enable/Disable software iSCSI initiator

For a lot of reasons you might want to use the software iSCSI initiator instead of a hardware one. For instance, you might want to maintain an easier configuration where the NIC doesn’t matter, or you just might not have dependent cards available. Either way, you can use the software initiator to work your bits. “But wait,” you say, “software will be much slower than hardware!” You would perhaps be correct with older revisions of ESX; however, the software initiator is so fast at this point that there is not much difference between them. By default, the software initiator is not enabled, so you will need to add it manually. To do this, go to the same place we were before, under Storage Adapters. While there, click on the add icon (+) and click Add Software iSCSI Adapter. Once you do that, it will show up in the adapter list and allow you to add targets and bind NICs to it just like the hardware iSCSI will. To disable it, just click on it and then, under Properties down below, click Disable. A shell-based sketch follows.
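
Here is the shell-based sketch mentioned above; the send-targets address and adapter name are examples:

# Enable the software iSCSI initiator:
esxcli iscsi software set --enabled=true
# Confirm it is on:
esxcli iscsi software get
# Add a dynamic discovery (send targets) address, assuming the new adapter came up as vmhba33:
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.20:3260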

Configure/Edit software iSCSI initiator settings
Configure iSCSI port binding
Enable/Configure/Disable iSCSI CHAP

I’m going to cover all of these at the same time since they flow together pretty well. To configure your iSCSI initiator settings, you would navigate to the iSCSI adapter. Once there, all your options are down at the bottom under Adapter Details. If you want to edit one, click on the tab that has the setting and click Edit. Network Port Binding is under there and is configured the same as we did before. Targets is there as well. Finally, CHAP is something we haven’t talked about yet, but it is there underneath the Properties tab under Authentication. The type of adapter you have determines the type of CHAP available to you. Click on Edit under the Authentication section and you will get this window:

As you probably noticed, this is done at a per-storage-adapter level, so you can change it for different initiators. Keep in mind this is all sent via plain text, so the security is not really that great; it is better used to mask LUNs off from certain hosts. A scripted sketch follows.
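
Here is the scripted sketch mentioned above. It assumes the adapter is vmhba33 and sets unidirectional CHAP at the preferred level; the user name and secret are placeholders, and the flag names are from memory, so double-check them with --help on your build:

# Configure unidirectional CHAP on one adapter:
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=preferred --authname=iscsiuser --secret=Sup3rS3cret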

We have already pretty much covered why we would use certain initiators over others and also thin provisioning. So I will leave those be and sign off this post. (Which keep getting more and more loquacious)
