Continuing along our Storage Objectives, we are now going to cover VMFS and NFS datastores. In this objective we will cover:
Time to jump in.
Identify VMFS and NFS Datastore Properties and Capabilities
Datastores are containers we create in VMware to hold our files. They can be used for many different purposes, including storing virtual machines, ISO images, floppy images, and so on. The main difference between NFS and VMFS datastores is their backing, that is, the storage behind the datastore. With VMFS you are dealing with block-level storage, whereas with NFS you are dealing with a share from a NAS that already has a filesystem on it. Each has its own pros and cons, and specific abilities that can be used and worked with.
There have been a few different versions of VMFS released since its inception, including VMFS2, VMFS3, and VMFS5. Note, though, that as of ESXi 5 you can no longer read or write VMFS2 datastores, and you can't create VMFS3 datastores in ESXi 6, although you can still read and write to them.
VMFS5 provides many enhancements over its predecessors, including the following:
Datastores can be local or shared. They are made up of the actual files, directories, and so on, but they also contain mapping information for all of these objects, called metadata. Metadata is changed frequently when certain operations take place.
In a shared storage environment, when multiple hosts access the same VMFS datastore, specific locking mechanisms are used. One of the biggest advantages of VMFS5 is ATS, or Atomic Test and Set, better known as hardware-assisted locking, which supports discrete locking per disk sector. Contrast this with normal Windows volume locking, where a single server locks the whole volume for its use, preventing some of the cooler features that VMFS allows for.
Occasionally you will have a datastore that still uses a combination of ATS and SCSI reservations. One of the issues with this is time: when metadata operations occur, the whole storage device is locked rather than just the disk sectors involved. Only when the operation has completed can other operations continue. As you can imagine, if enough of these occur you can start creating disk contention, and your VM performance might suffer.
You can use the CLI to show which locking mechanism a VMFS datastore is using. At a CLI prompt, type the following:
esxcli storage vmfs lockmode list
You can also specify a server by adding --server=<servername>
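Putting that together, here is a hedged sketch (the server name and datastore label are placeholders, and the lockmode set syntax assumes ESXi 6.x):

```shell
# Show the locking mode (ATS, ATS+SCSI, or SCSI) for each VMFS datastore
esxcli storage vmfs lockmode list

# The same query run remotely against a specific host (placeholder name)
esxcli --server=esxi01.lab.local storage vmfs lockmode list

# If your array fully supports ATS, you can switch a datastore to ATS-only
esxcli storage vmfs lockmode set --ats --volume-label=Datastore1
```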
You can view VMFS and NFS properties by doing the following:
This will show you a number of different properties that may be useful for you.
Now let’s cover NFS a bit. NFS is just a bit different than VMFS. Instead of accessing block storage directly, the host uses an NFS client to access, over TCP/IP, an NFS volume that the NFS server exports as a share. VMware now supports NFS versions 3 and 4.1. The ESXi hosts can mount and use the volume for their needs. Most features are supported on NFS volumes, including:
Create/Rename/Delete/Unmount a VMFS Datastore
There are a number of ways to do these things; here is one of them.
Renaming a datastore is as simple as right-clicking on the datastore you wish to rename and then clicking Rename. Deleting and unmounting work the same way. Beware that deleting will destroy the datastore and everything on it, while unmounting just makes it inaccessible.
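If you prefer the CLI, unmounting can also be done with esxcli; the datastore label below is a placeholder:

```shell
# List mounted filesystems and their labels
esxcli storage filesystem list

# Unmount a VMFS datastore by label (placeholder name), then remount it
esxcli storage filesystem unmount -l Datastore1
esxcli storage filesystem mount -l Datastore1
```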
Mount/Unmount a NFS Datastore
This is almost as easy as creating a VMFS datastore, with just a few different steps. Follow the same first steps as before for creating a new VMFS datastore. When it asks for the datastore type, though, there are two more options besides VMFS: NFS and VVol.
Next, of course, you will need to fill out a few details, starting with the version of NFS you want to use.

In the next window you enter the server (NAS) IP address, the exported share folder, and what you are going to call the datastore. You can also mount the NFS share as read-only. The next screen asks which hosts are going to have access to the share.
Last screen is just a summary. Click Finish and you are done.
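The same NFS mount can also be scripted with esxcli; the IP address, share path, and datastore names below are placeholders:

```shell
# Mount an NFS v3 export as a datastore
esxcli storage nfs add --host=192.168.1.50 --share=/export/vmstore --volume-name=NFS_DS1

# NFS 4.1 uses its own command namespace
esxcli storage nfs41 add --hosts=192.168.1.50 --share=/export/vmstore --volume-name=NFS41_DS1

# Verify, and unmount when done
esxcli storage nfs list
esxcli storage nfs remove --volume-name=NFS_DS1
```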
Extend/Expand VMFS Datastores
There are two ways to make your datastore larger: you can expand the existing extent, or you can add another LUN (one not already used for a datastore) as an additional extent to create a larger datastore. To do either one, navigate to the datastore you wish to increase, right-click on it, and click Increase Datastore Capacity. If you have a datastore that can be expanded, it will show up in the next screen; if not, the screen will remain blank. Depending on your layout and your previous selections, you will have the opportunity to use another LUN or to expand the existing one.
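Before growing a datastore, it can help to see which device extents currently back it; a quick check from the CLI:

```shell
# Show the device extent(s) behind each VMFS datastore; a spanned
# datastore will list more than one device here
esxcli storage vmfs extent list
```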
Place a VMFS Datastore in Maintenance Mode
Maintenance Mode is a really cool feature, but you have to have a Datastore Cluster for it to work. If you right-click a normal datastore, the option to put it in maintenance mode is greyed out. Once you have created a datastore cluster and have the disks inside it, you can right-click the datastore, click Maintenance Mode, and then click Enter Maintenance Mode.
Identify available Raw Device Mapping (RDM) Solutions
Raw Device Mapping provides a mechanism for a virtual machine to have direct access to a LUN on a physical storage system. The way this works is that when you create an RDM, it creates a mapping file that contains metadata for managing and redirecting disk access to the physical device. Lifting a picture from the official PDF to represent it pictorially.
There are a few situations where you might need an RDM.
There are two types of RDM: physical compatibility and virtual compatibility. Virtual compatibility mode allows an RDM to act exactly like a virtual disk file, including the use of snapshots. Physical compatibility mode allows for lower-level access if needed.
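Both modes can be created from the CLI with vmkfstools; the device ID and file paths below are hypothetical:

```shell
# Virtual compatibility mode RDM (-r): behaves like a virtual disk, snapshots work
vmkfstools -r /vmfs/devices/disks/naa.60a98000572d54724a34655733506751 \
  /vmfs/volumes/Datastore1/MyVM/rdm-virtual.vmdk

# Physical compatibility (pass-through) mode RDM (-z): lower-level access
vmkfstools -z /vmfs/devices/disks/naa.60a98000572d54724a34655733506751 \
  /vmfs/volumes/Datastore1/MyVM/rdm-physical.vmdk
```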
Select the Preferred Path for a VMFS Datastore
This is relatively easy to do, but it can only be done under the Fixed PSP policy. Click the datastore you want to modify in the navigation pane, then click Manage, then Settings, then Connectivity and Multi-pathing.
Then click on Edit Multi-pathing.
Now you can choose your Preferred Path.
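The same change can be made with esxcli under the Fixed PSP; the device ID and path runtime name are placeholders:

```shell
# Check the current preferred path for a device
esxcli storage nmp psp fixed deviceconfig get -d naa.60a98000572d54724a34655733506751

# Set a new preferred path by its runtime name
esxcli storage nmp psp fixed deviceconfig set \
  -d naa.60a98000572d54724a34655733506751 -p vmhba2:C0:T1:L4
```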
Enable/Disable vStorage API for Array Integration (VAAI)
VAAI, or hardware acceleration, is enabled by default on your host. If for some reason you want to disable it, browse to your host in the Navigator, then click Manage, then Settings, and under System click Advanced System Settings. Change the value of any of the following three settings to 0: DataMover.HardwareAcceleratedMove, DataMover.HardwareAcceleratedInit, or VMFS3.HardwareAcceleratedLocking.
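The same settings can be toggled from the CLI with esxcli (set them back to 1 to re-enable):

```shell
# Disable the three VAAI primitives (full copy, block zeroing, ATS locking)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0

# Verify a value
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
```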
Disable a path to a VMFS Datastore
To disable a path to a datastore, navigate to the datastore you are interested in again, then click Manage, then Settings, then Connectivity and Multi-pathing. Scroll down under the Multipathing Details and you will see Paths. Click the path you want to disable and then click Disable.
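You can also disable a path from the CLI; the device ID and path runtime name below are placeholders:

```shell
# List all paths to a given device to find the runtime name
esxcli storage core path list -d naa.60a98000572d54724a34655733506751

# Disable (and later re-enable) a specific path
esxcli storage core path set -p vmhba2:C0:T1:L4 --state off
esxcli storage core path set -p vmhba2:C0:T1:L4 --state on
```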
Determine Use Cases for Multiple VMFS/NFS Datastores
There are a number of reasons to have more than one LUN. Most SAN arrays will adjust queues and caching on a per-LUN basis, and having too many VMs on a single LUN could overload the IO to the disks behind it. Also, when you create an HA cluster, it typically wants at least two datastores to maintain heartbeats to. All of these are valid reasons for creating more than a single LUN.
And this is me signing off again, till the next time.