June 14, 2015

Objective 3.5 Setup and Configure Storage I/O Control

Moving on to our last sub-point in the Storage Objectives, we are going to cover Storage I/O Control. We will cover the following:

  • Enable/Disable Storage I/O Control
  • Configure/Manage Storage I/O Control
  • Monitor Storage I/O Control

Enable / Disable Storage I/O Control

This is relatively easy to do. Click on the datastore you want to modify, then click on Manage > Settings > General. Underneath Datastore Capabilities, you can click on Edit and then check or uncheck Enable Storage I/O Control.

Configure/Manage Storage I/O Control

You configure it in the same place. In that same Edit dialog you can change the congestion threshold (as a percentage of peak throughput) or set a manual latency threshold in milliseconds.

Monitor Storage I/O Control

You can do this on a per-datastore basis by clicking on the datastore, then clicking on Monitor and then Performance. You can monitor the datastore's space usage or its performance. If you click on Performance, you are treated to a lot of graphs detailing everything from latency to IOPS.
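
If you prefer the command line, esxtop on the host (or resxtop remotely) exposes the same latency numbers SIOC reacts to. A quick sketch; the keys and columns are standard esxtop, but which counters matter will depend on what you are chasing:

  # Start esxtop, then press 'u' to switch to the disk device view
  esxtop
  # Useful latency columns:
  #   DAVG/cmd - latency from the device/array
  #   KAVG/cmd - latency added by the VMkernel
  #   GAVG/cmd - total latency as seen by the guest (roughly DAVG + KAVG)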


And that concludes the Storage section. Up next is Virtual Machine Management! So get ready for some fun!

Objective 3.4 Perform Advanced VMFS and NFS Configurations and Upgrades

Continuing along our Storage Objectives, we are now going to cover VMFS and NFS datastores and our mastery of them. We will cover the following in this objective:

  • Identify VMFS and NFS Datastore properties
  • Identify VMFS5 capabilities
  • Create/Rename/Delete/Unmount a VMFS Datastore
  • Mount/Unmount an NFS Datastore
  • Extend/Expand VMFS Datastores
  • Place a VMFS Datastore in Maintenance Mode
  • Identify available Raw Device Mapping (RDM) solutions
  • Select the Preferred Path for a VMFS Datastore
  • Enable/Disable vStorage API for Array Integration (VAAI)
  • Disable a path to a VMFS Datastore
  • Determine use case for multiple VMFS/NFS Datastores

Time to jump in.

Identify VMFS and NFS Datastore Properties and Capabilities

Datastores are containers we create in VMware to hold our files for us. They can be used for many different purposes, including storing virtual machines, ISO images, floppy images, and so on. The main difference between NFS and VMFS datastores is their backing: the storage behind the datastore. With VMFS you are dealing with block-level storage, whereas with NFS you are dealing with a share from a NAS that already has a filesystem on it. Each has its own pros and cons, and specific abilities that can be used and worked with.

There have been a few different versions of VMFS released since its inception: VMFS2, VMFS3, and VMFS5. Note, though, that as of ESXi 5 you can no longer read or write VMFS2 volumes, and while you cannot create new VMFS3 datastores in ESXi 6, you can still read and write to existing ones.
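
If you are not sure which VMFS version a datastore is on, vmkfstools can tell you. A minimal example, assuming a datastore named datastore1 (swap in your own volume name):

  # Query the volume attributes; -h gives human-readable sizes
  vmkfstools -P -h /vmfs/volumes/datastore1
  # The first line of output shows the filesystem version, e.g. VMFS-5.61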

VMFS5 provides many enhancements over its predecessors. Among them are the following:

  • Greater than 2TB storage devices for each extent
  • Support for virtual machines with large-capacity disks (larger than 2TB)
  • Increased resource limits, such as file descriptors
  • Standard 1MB block size
  • Greater than 2TB disk size for RDMs
  • Support for small files of 1KB
  • Scalability improvements on devices supporting hardware acceleration
  • Default use of the ATS-only locking mechanism (previously SCSI reservations were used)
  • Ability to reclaim physical storage space on thin-provisioned storage devices
  • Online upgrade process that lets you move to the latest version of VMFS5 without taking the datastore offline

Datastores can be local or shared. They are made up of the actual files, directories, and so on, but they also contain mapping information for all of these objects, called metadata. Metadata is frequently updated when certain operations take place.

In a shared storage environment, when multiple hosts access the same VMFS datastore, specific locking mechanisms are used. One of the biggest advantages of VMFS5 is ATS, or Atomic Test and Set, better known as hardware assisted locking, which supports discrete locking per disk sector. Contrast this with traditional volume locking, where a single server locks the entire volume for its own use, preventing some of the cooler features allowed for by VMFS.

Occasionally you will have a datastore that still uses a combination of ATS and SCSI reservations. One of the issues with this is time: when metadata operations occur, the whole storage device is locked rather than just the disk sectors involved, and only when the operation has completed can other operations continue. As you can imagine, if enough of these occur you can start creating disk contention, and your VM performance might suffer.

You can use the CLI to show which locking mechanism a VMFS datastore is using. At a CLI prompt, type the following:

  esxcli storage vmfs lockmode list

You can also specify a server by adding --server=<servername>
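
The same namespace can also switch a datastore to ATS-only locking once every host that accesses it supports ATS. A hedged example; the volume label is a placeholder:

  # Upgrade a datastore to ATS-only locking
  esxcli storage vmfs lockmode set --ats --volume-label=mydatastore
  # Or fall back to ATS+SCSI if needed
  esxcli storage vmfs lockmode set --scsi --volume-label=mydatastore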

You can view VMFS and NFS properties by doing the following:

  1. From the Home Screen, click on Storage
  2. On the left, choose the datastore (VMFS or NFS) you are interested in
  3. In the middle pane, click on Manage and then click on Settings

This will show you a number of different properties that may be useful for you.
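
The CLI will show you much of the same information at a glance:

  # List all mounted filesystems with their type (VMFS-5, NFS, etc.),
  # UUID, capacity, and free space
  esxcli storage filesystem list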

Now let's cover NFS a bit. NFS is just a bit different than VMFS. Instead of directly accessing block storage, the ESXi host uses a built-in NFS client to access an NFS volume that an NFS server exports over TCP/IP. vSphere now supports NFS versions 3 and 4.1. The ESXi hosts can mount the volume and use it for their needs. Most features are supported on NFS volumes, including:

  • vMotion and Storage vMotion
  • High Availability and Distributed Resource Scheduling
  • Fault Tolerance and Host Profiles
  • ISO images
  • Virtual Machine snapshots
  • Virtual machines with large capacity disks larger than 2TB
  • Multi-pathing (4.1 only)

Create/Rename/Delete/Unmount a VMFS Datastore

There are a number of ways to do these things; here is one of them.

  1. While on the Storage tab in the Navigation pane, right-click on the host or the cluster, then click on Storage and then New Datastore

  2. A window pops up notifying you of the location where the datastore will be created. Click Next and you are asked for the type of datastore

  3. Click on VMFS and click Next
  4. Now you are asked to put in a name for your new datastore and to choose the host that has the device accessible to it

  5. Click Next and it will show you partition information for that device, if there is any, and will make sure you want to wipe everything to replace it with a VMFS datastore. Click Next again and then Finish on the next screen

Renaming a datastore is as simple as right-clicking on the datastore you wish to rename and then clicking Rename. Deleting and unmounting work the same way. Beware that deleting will destroy the datastore and everything on it, while unmounting just makes it inaccessible.
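
For completeness, the same operations can be done from the ESXi Shell. A rough sketch; the device name and labels are placeholders, and the partition has to exist already (partedUtil handles that part):

  # Create a VMFS5 datastore on partition 1 of a device
  vmkfstools -C vmfs5 -S mydatastore /vmfs/devices/disks/naa.600508b1001c16:1
  # Unmount a datastore by its volume label
  esxcli storage filesystem unmount -l mydatastore
  # Mount it again later
  esxcli storage filesystem mount -l mydatastore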

Mount/Unmount an NFS Datastore

This is almost as easy as creating a VMFS datastore, with just a few different steps. Follow the same first steps as before. When the wizard asks for the datastore type, there are two more options underneath VMFS: NFS and VVols. Choose NFS.

Next, of course, you will need to fill out a few details. The first is the version of NFS, 3 or 4.1.

On the next screen you put in the server (NAS) IP address, the exported share folder, and what you are going to call the datastore. You can also choose to mount the NFS share as read-only. The screen after that asks which hosts are going to have access to the share.

The last screen is just a summary. Click Finish and you are done.
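
The same mount can be done from the ESXi Shell. A sketch, assuming a NAS at 192.168.1.50 exporting /export/isos (both placeholders):

  # Mount an NFS 3 export as a datastore named nfs_isos
  esxcli storage nfs add --host=192.168.1.50 --share=/export/isos --volume-name=nfs_isos
  # NFS 4.1 has its own namespace in ESXi 6
  esxcli storage nfs41 add --hosts=192.168.1.50 --share=/export/isos --volume-name=nfs_isos
  # Unmount it again
  esxcli storage nfs remove --volume-name=nfs_isos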

Extend/Expand VMFS Datastores

There are two ways to make your datastore larger. You can expand the existing extent on its current LUN, or you can add an extent from another LUN (one not already used for a datastore) to create a larger spanned datastore. To do either, navigate to the datastore you wish to increase, right-click on it, and click on Increase Datastore Capacity. If you have a device that can be used, it will show up in the next screen; if not, the list will remain blank. Depending on your layout and your previous selections, you will have the opportunity to use another LUN or to expand the existing one.
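
You can verify which device extents back a datastore before and after the change:

  # Show the device and partition behind every VMFS datastore;
  # a spanned datastore will list more than one extent
  esxcli storage vmfs extent list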

Place a VMFS Datastore in Maintenance Mode

Maintenance Mode is a really cool feature, but you have to have a Datastore Cluster in order to make it work. If you right-click on a standalone datastore, the option to put it in maintenance mode is greyed out. Once you have created a datastore cluster and have datastores inside it, you can right-click on a datastore, click on Maintenance Mode, and then click Enter Maintenance Mode.

Identify available Raw Device Mapping (RDM) Solutions

Raw Device Mapping provides a mechanism for a virtual machine to have direct access to a LUN on a physical storage system. When you create an RDM, vSphere creates a mapping file that contains metadata for managing and redirecting disk access to the physical device.

There are a few situations where you might need an RDM:

  1. SAN Snapshots or other layered applications that use features inherent to the SAN
  2. In any MSCS clustering scenario that spans physical hosts.

There are two types of RDM: physical compatibility and virtual compatibility. Virtual compatibility mode allows an RDM to act exactly like a virtual disk file, including the use of snapshots. Physical compatibility mode allows for lower-level access to the device if needed.
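
RDMs are normally created through the Add Hard Disk wizard, but vmkfstools shows the two modes nicely. A sketch; the device name and file paths are placeholders:

  # Virtual compatibility mode RDM (-r): behaves like a regular VMDK, snapshots work
  vmkfstools -r /vmfs/devices/disks/naa.600508b1001c16 /vmfs/volumes/datastore1/vm1/vm1-rdm.vmdk
  # Physical compatibility mode RDM (-z): near-raw access to the LUN
  vmkfstools -z /vmfs/devices/disks/naa.600508b1001c16 /vmfs/volumes/datastore1/vm1/vm1-rdmp.vmdk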

Select the Preferred Path for a VMFS Datastore

This is relatively easy to do, but it can only be done under the Fixed PSP policy. Click on the datastore you want to modify in the navigation pane, then click Manage, then Settings, and then Connectivity and Multi-pathing.

Then click on Edit Multi-pathing.

Now you can choose your Preferred Path.
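
The equivalent from the CLI, for what it is worth. The device and path names below are placeholders; pull real ones from esxcli storage nmp device list:

  # Set the preferred path for a device using the Fixed PSP
  esxcli storage nmp psp fixed deviceconfig set --device naa.600508b1001c16 --path vmhba2:C0:T0:L1
  # Verify which path is currently preferred
  esxcli storage nmp psp fixed deviceconfig get --device naa.600508b1001c16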

Enable/Disable vStorage API for Array Integration (VAAI)

VAAI, or hardware acceleration, is enabled by default on your host. If for some reason you want to disable it, browse to your host in the Navigator, click on Manage, then Settings, and under System click on Advanced System Settings. Change the value of any of the following three settings to 0:

  • VMFS3.HardwareAcceleratedLocking
  • DataMover.HardwareAcceleratedMove
  • DataMover.HardwareAcceleratedInit
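
The same toggles exist in the ESXi Shell through the advanced settings namespace. Note that the UI names map to option paths (VMFS3.HardwareAcceleratedLocking becomes /VMFS3/HardwareAcceleratedLocking):

  # Check the current value (1 = enabled)
  esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove
  # Disable one of the three primitives
  esxcli system settings advanced set --option /DataMover/HardwareAcceleratedMove --int-value 0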

Disable a path to a VMFS Datastore

To disable a path to a datastore, navigate once again to the datastore you are interested in, then click on Manage, then Settings, then Connectivity and Multi-pathing. Scroll down under the Multi-pathing details and you will see Paths. Click on the path you want to disable and then click Disable.
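
From the CLI, paths are disabled in the storage core namespace. The runtime path name below is a placeholder; list your real ones first:

  # List all paths with their runtime names and states
  esxcli storage core path list
  # Take a specific path offline
  esxcli storage core path set --path vmhba3:C0:T0:L0 --state off
  # And bring it back
  esxcli storage core path set --path vmhba3:C0:T0:L0 --state active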

Determine Use Cases for Multiple VMFS/NFS Datastores

There are a number of reasons to have more than one LUN. Most SAN arrays adjust queues and caching on a per-LUN basis, so having too many VMs on a single LUN can overload the IO to those disks. Also, when you are creating HA clusters, vSphere typically wants at least two datastores to maintain heartbeats to. All of these are valid reasons for creating more than a single LUN.

And this is me signing off again, till the next time.

Objective 3.3 Configure Storage Multi-pathing and Failover

Once again we return to cover another objective. It has been hectic lately, what with delivering my first ICM 6 class and also delivering a vBrownBag the same week on one of the future objectives. Now we are back, though, and trying to get back into the swing of things. Here are the sub-points we will be covering this time:

  • Configure/Manage Storage Load Balancing
  • Identify available Storage Load Balancing options
  • Identify available Storage Multi-pathing Policies
  • Identify features of Pluggable Storage Architecture (PSA)
  • Configure Storage Policies
  • Enable/Disable Virtual SAN Fault Domains

Without further ado, let's dig in. Storage multi-pathing is a cool feature that allows you to load balance IO and also allows for path failover in the event of a failure. Storage plays a rather important part in our virtualization world, so it stands to reason that we would want to make sure it is as fast and as reliable as possible. We have three multi-pathing options available by default, but we have the ability to add more depending on the storage devices in our environment. For example, EqualLogic adds a new PSP when you are using their "MEM" kit. The default policies are as follows:

  • VMW_PSP_FIXED
  • VMW_PSP_MRU
  • VMW_PSP_RR

By defining which of these we want to use, we choose how we load balance and fail over paths. Of course, we should probably get a better understanding of what they do in order to make the best choice.

  • Fixed means the host will use the designated preferred path if one is configured; otherwise, it selects the first working path discovered at system boot time. This is the default policy for most active-active arrays. If you designate a preferred path and it becomes unavailable, the host will revert back to it when it becomes available again.
  • Most Recently Used selects the path it used most recently. If that path becomes unavailable, it chooses an alternative, and if the original path becomes available again, it will not revert. MRU is the default policy for active-passive arrays
  • Round Robin uses an automatic path selection algorithm, rotating through all the active paths when connecting to active-passive arrays, or all available paths when connecting to active-active arrays. RR is the default for a number of arrays

How do we configure and manage these? We will need to do the following:

  1. Browse to the host in the navigator
  2. Click on the Manage tab and then Storage
  3. Click on Storage Devices (or Protocol Endpoints)
  4. Click on the device you want to manage
  5. Under the Properties tab, scroll down to Edit Multipathing and click it



  6. Choose the multi-pathing type you want and click OK



And that is how we configure it.
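
The same change can be made per device from the CLI. The device identifier is a placeholder:

  # See which PSP each device currently uses
  esxcli storage nmp device list
  # Switch a device to Round Robin
  esxcli storage nmp device set --device naa.600508b1001c16 --psp VMW_PSP_RR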

Moving on now to the features of the PSA, or Pluggable Storage Architecture. To manage storage multipathing, ESXi uses a collection of Storage APIs, also known as the Pluggable Storage Architecture. It consists of the following pieces:

  • NMP, or Native Multi-Pathing Plug-In. This is the generic VMware multipathing module
  • PSP, or Path Selection Plug-In. This is how VMware decides on a path for a given device
  • SATP, or Storage Array Type Plug-In. This is how VMware handles path failover for a given array
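
You can see all the plug-ins loaded on a host with a couple of esxcli one-liners:

  # List the SATPs the host knows about, along with their default PSPs
  esxcli storage nmp satp list
  # List the available path selection plug-ins
  esxcli storage nmp psp list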

Using these storage APIs, as mentioned before, other companies can introduce their own pathing policies.

Storage Array Type Plug-Ins, or SATPs, run in conjunction with the VMware NMP and are responsible for array-specific operations. ESXi offers an SATP for every type of array it supports, and it also provides default SATPs for active-active, active-passive, and ALUA arrays. These are used to monitor the health of each path, report changes, and take the necessary actions to fail over in case something goes wrong.

Storage Policies are next on the agenda. Storage policies are a mechanism that allows you to specify storage requirements on a per-VM basis. If you are using VSAN or Virtual Volumes, the policy can also determine how the machine is provisioned and allocated within the storage resource to guarantee the required level of service.

In vSphere 5.0 and 5.1, storage policies existed as storage profiles; they had a different format and were not quite as useful. If you previously used them, they are upgraded to the new storage policy format when you upgrade to 6.0.

There are two types of storage policy rules. You can base them on capabilities, that is, storage-specific data services, or you can use tags to group datastores. Let's cover both of them in a little more depth.

Rules based on Storage-Specific Data Services

These rules are based on data services that storage entities such as Virtual SAN and Virtual Volumes advertise. To supply this information, these products use storage providers called VASA providers, which surface the capabilities that are available for you to put in your storage policy. Some examples include capacity, performance, availability, redundancy, and so on.

Rules based on Tags

Rules based on tags reference datastore tags that you associate with specific datastores. Just like tags on other vSphere objects, you as an administrator can apply them, and you can apply more than one tag to a single datastore. Once you apply a tag to a datastore, it will show up in the Storage Policies interface, where you can use it to define the rules for the storage policy.

So how do we use these? There are a number of steps we need to perform to enable and apply them. The very first thing is to enable storage policies on a host or cluster. To do that, perform the following steps:

  1. In the web client, click on Policies and Profiles and then VM Storage Policies
  2. Click on the Enable Storage Policies icon (looks like a scroll with a check mark)
  3. Select the vCenter instance; all the clusters and hosts that are available will appear
  4. Choose a host or cluster and then click on Enable

Now you can define your VM storage policy. For the first one, we will work on the tag-based policy.

  1. Browse to a datastore and then click on Manage and then Tags



  2. Click on the New Tag icon
  3. Give the tag a name and description. Under Category, choose New Category
  4. Under Category Name, type in the name you desire and also choose what type of object you will associate it with

  5. When you are done creating it, you will need to assign it to a datastore; the Assign Tag icon is the one with the green arrow pointing to the right
  6. Your tag should show up in the list. Highlight it and click Assign
  7. You should now see your tag on the datastore

Now you can create a storage policy based on this tag. You do that by navigating to the same place where you enabled the policies.

  1. Click on Policies and Profiles and then VM Storage Policies
  2. Click on the Create a New VM Storage Policy icon

  3. Click Next twice and you will come to Rule Set 1, where you have the ability to create rules based on data services or based on tags. Choose tags

  4. Under Category, choose the category you previously created, and then the tag that you created

  5. You can add more rules if you have them; if not, click Next

  6. When you click Next you are shown the datastores that are compatible (because you have associated the tag with them)

  7. A summary appears and then you can click on Finish

The next thing to do to make this active is to apply it to a VM. You can do this when you create the VM or afterwards. If you are applying it afterwards, go to the settings of the VM, click the little arrow in front of the hard disk, and then choose the storage policy you want from the drop-down box.

Now you have achieved your goal. You can go through the same steps with either policy.

The last thing we need to go over is enabling and disabling fault domains for Virtual SAN. I don't have a VSAN setup (if anyone wants to contribute toward my home lab fund, let me know!), but if you did, you would enable them underneath the settings for the cluster. You would go to Manage and then VSAN, and underneath that subcategory you will find Fault Domains. This is where you would create/enable/disable fault domains.

And thus concludes another objective. Next up, VMFS and NFS datastores. Objective 3.4

