It’s been a while since I’ve done one of these. I did one for the VCP 6.0 and kind of miss it. I’ve decided to take a little different approach this time. I’m going to write it up completely as a single document and then slowly leak it out on my blog, but also have the full guide available for people to use if they want. I’m not sure how long this will stay useful, since there is a new version looming on the horizon for VMware, but it will be a bit before they update the cert.
I’m also changing which certification I’m writing for. I originally did one for the delta; this time it will be the full exam. There shouldn’t be an issue using this for the delta, however. The certification, 2V0-21.19, is for vSphere 6.7 and is a 70-question exam. You are required to pass with a score of no less than 300, and you are given 115 minutes to take it. This gives you a little over a minute and a half per question. Study well, and if you don’t know something, don’t agonize over it. Mark it and come back. It is very possible a later question will jog your memory or give you possible hints to the answer.
You will need to venture outside and interact with real people to take this test. No sitting at home in your PJs, unfortunately. You will need to register for the test on Pearson VUE’s website here.
Standard disclaimer: I am sure I don’t cover 100% of the topics needed on the exam, as much as I might try. Make sure you use other guides and do your own research to help out. In other words, you can’t hold me liable if you fail.
The first part starts with installation requirements. There are two core components that make up vSphere: ESXi and vCenter Server. There are several requirements for ESXi and for vCenter Server. I’ll cover them here one component at a time to make them easier to understand.
vSphere ESXi Server
The ESXi server is the server that does most of the work. It is where you install virtual machines (VMs), and it provides the resources all of your VMs need to run. The documentation also talks about virtual appliances. Virtual appliances are nothing more than preconfigured VMs, usually running some variant of Linux.
There is an order to the installation of vSphere, and the ESXi server is installed first. There are a number of requirements for installation. Some of them I will generalize, as otherwise this would be a study textbook and not a guide.
You can use UEFI boot mode with vSphere 6.7+ or just regular BIOS mode. Once you have installed ESXi, you should not change the mode from one to the other in the BIOS, or the host won’t boot and you may need to reinstall. The actual message you might encounter in that case is “Not a VMware boot bank”.
VMware requires a boot device with a minimum of 1 GB of storage. When booting from a local disk, 5.2 GB is needed to allow creation of the scratch partition and the VMFS (VMware File System) volume. If you don’t have enough space, or you aren’t using a local drive, the scratch partition is placed on a ramdisk, entirely in RAM. This is not persistent through reboots of the physical machine, and ESXi will keep nagging you with a message until you do provide a persistent location for it. It actually is a good thing to have, as any dump files (output from ESXi describing what went wrong when a crash occurs) are stored there.
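If you’d rather not dig through the UI to fix that nag, the scratch location is just the ScratchConfig.ConfiguredScratchLocation advanced setting on the host, and you can set it through the API. Here is a minimal pyvmomi sketch of one way to do it; the host name, credentials, and datastore path are placeholders, and the change only takes effect after a reboot.

```python
# Sketch: set a persistent scratch location on an ESXi host with pyvmomi.
# The host address, credentials, and datastore path are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use valid certs in production
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="VMware1!", sslContext=ctx)

# Connected directly to the host, so the inventory contains exactly one HostSystem
view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

# ScratchConfig.ConfiguredScratchLocation is the advanced setting behind the UI option
host.configManager.advancedOption.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="ScratchConfig.ConfiguredScratchLocation",
                           value="/vmfs/volumes/datastore1/.locker-esxi01")])

Disconnect(si)  # the new location is used after the next reboot
```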
You can Auto Deploy a host as well – this is when your hosts have no local disks at all and ESXi is loaded over the network instead of being installed locally. If you do use this method, you don’t need to have a separate LUN or shared disk set aside for each host’s scratch; you can share a single LUN across multiple hosts.
Actual installation of the ESXi software is straightforward. You can perform an interactive, scripted, or Auto Deploy installation. The latter requires a bit of preparation and a number of other components: you will need a TFTP server set up and changes made to your DHCP server to allow hosts to boot from the network. There is more that goes into Auto Deploy, but I won’t cover it here as the cert exam shouldn’t go too far in depth. For an interactive installation, you can create a customized ISO if you require specific drivers that aren’t included on the standard VMware CD.
vSphere vCenter Server
The vCenter Server component of vSphere allows you to manage and aggregate your server hardware and resources. vCenter is where a lot of the magic lies. Using vCenter Server you can migrate running VMs between hosts and so much more. VMware makes available the vCenter Server Appliance, or VCSA. This is a preconfigured Linux-based VM that is deployed into your environment. There are two main groups of services that run on the appliance: vCenter Server and the Platform Services Controller (PSC). You can run both of them together in what is known as an “embedded” installation, or you can separate the Platform Services Controller for larger environments. While you can install vCenter on Windows as well, VMware will no longer support that model in the next major release of vSphere.
There are a few software components that make up the vCenter Server Appliance. They include:
While past versions of the vCenter Server Appliance were a bit less powerful, since 6.0 they have been considerably more robust. This one is no exception, scaling to 2,000 hosts and 35,000 VMs.
If you do decide to separate the services, it is good to know which services are included with which component. They are:
If you go with a distributed model, you need to install the PSC first, since that machine houses the authentication services. If there is more than one PSC, you need to set them up one at a time before you create the vCenter Server(s). Multiple vCenter Servers can then be set up at the same time.
The installation process consists of two stages for the VCSA when using the GUI installer, and one when using the CLI. For the GUI installation, the first stage deploys the actual appliance. The second guides you through the configuration and starts up its services.
If using the CLI to deploy, you run a command against a JSON file that has all the values needed to configure the vCenter Server. The CLI installer grabs the values inside the JSON file and generates a CLI command that utilizes the VMware OVF Tool. The OVF Tool is what actually deploys the appliance and applies the configuration.
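To give you an idea of what that JSON looks like, here is a heavily trimmed sketch of an embedded deployment. Treat it as illustrative only: the exact keys vary between builds, so start from the sample templates that ship with the installer (in the vcsa-cli-installer/templates folder). The host names, network values, and passwords below are made-up placeholders.

```json
{
  "new_vcsa": {
    "esxi": {
      "hostname": "esxi01.lab.local",
      "username": "root",
      "password": "VMware1!",
      "deployment_network": "VM Network",
      "datastore": "datastore1"
    },
    "appliance": {
      "deployment_option": "small",
      "name": "vcsa01",
      "thin_disk_mode": true
    },
    "network": {
      "ip_family": "ipv4",
      "mode": "static",
      "ip": "192.168.1.20",
      "prefix": "24",
      "gateway": "192.168.1.1",
      "dns_servers": ["192.168.1.2"],
      "system_name": "vcsa01.lab.local"
    },
    "os": { "password": "VMware1!", "ssh_enable": true },
    "sso": { "password": "VMware1!", "domain_name": "vsphere.local" }
  }
}
```

You then point the installer at the file with something along the lines of vcsa-deploy install --accept-eula my_embedded_vcsa.json (check the exact switches for your build).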
Hardware Requirements vary depending on the deployment configuration. Here are a few tables to help guide you:
Embedded vCenter with PSC
Environment | vCPUs | Memory |
Tiny (up to 10 hosts or 100 VMs) | 2 | 10 GB |
Small (up to 100 hosts or 1,000 VMs) | 4 | 16 GB |
Medium (up to 400 hosts or 4,000 VMs) | 8 | 24 GB |
Large (up to 1,000 hosts or 10,000 VMs) | 16 | 32 GB |
X-Large (up to 2,000 hosts or 35,000 VMs) | 24 | 48 GB |
If you are deploying an external PSC appliance, you need 2 vCPUs, 4 GB of RAM, and 60 GB of storage for each.
Environment | Default Storage Size | Large Storage Size | X-Large Storage Size |
Tiny (up to 10 hosts or 100 VMs) | 250 GB | 775 GB | 1650 GB |
Small (up to 100 hosts or 1,000 VMs) | 290 GB | 820 GB | 1700 GB |
Medium (up to 400 hosts or 4,000 VMs) | 425 GB | 925 GB | 1805 GB |
Large (up to 1,000 hosts or 10,000 VMs) | 640 GB | 990 GB | 1870 GB |
X-Large (up to 2,000 hosts or 35,000 VMs) | 980 GB | 1030 GB | 1910 GB |
Both the vCenter Server appliance and the PSC appliance must be deployed on an ESXi 6.0 host or later.
Make sure that DNS is working and the name you choose for your vCenter Server Appliance is resolvable before you start installation.
Installation happens from a client machine, which has its own requirements. If using Windows, you can use Windows 7 through 10 or Windows Server 2012 through 2016 (x64). Linux users can use SUSE 12 or Ubuntu 14.04. On the Mac, OS X 10.9 through 10.11 and macOS Sierra are all supported.
Installation on Microsoft Windows
This may be covered on the test, but I can’t imagine too many questions, since it is being deprecated. That being said, the vCPU and memory requirements are the same as the appliance. Storage sizes are different. They are:
Default Folder | Embedded | vCenter | PSC |
Program Files | 6 GB | 6 GB | 1 GB |
ProgramData | 8 GB | 8 GB | 2 GB |
System folder (to cache the MSI installer) | 3 GB | 3 GB | 1 GB |
As far as operating systems go, it requires a minimum of Microsoft Windows Server 2008 SP2 x64. For databases, you can use the built-in PostgreSQL for up to 20 hosts and 200 VMs. Otherwise you will need Oracle or Microsoft SQL Server.
vCenter High Availability is a mechanism that protects your vCenter Server against host and hardware failures. It also helps reduce downtime associated with patching your vCenter Server; that part is straight from the Availability guide. Honestly, I’m not sure about that last one. If you are upgrading an embedded installation, your vCenter might be unavailable for a bit, but not very long (unless there is a failure), and if it’s distributed, you have other PSCs and vCenter Servers to take up the load. So I’m not sure that benefit really works for me in that scenario. Perhaps someone might enlighten me later and I’m just not thinking it all the way through. Either way…
vCenter Server High Availability uses three VCSA nodes: two full VCSA nodes and a witness node. One VCSA node is active and one is passive. They are connected by a vCenter HA network that is created when you set this up. This network is used to replicate data between the nodes and to provide connectivity to the witness node. Requirements are:
vSphere supports multiple types of storage. I will go over the two main types: local and networked storage.
Local Storage
Local storage is storage connected directly to the server. This can include a Direct Attached Storage (DAS) enclosure that is connected to an external SAS card, or storage in the server itself. ESXi supports SCSI, IDE, SATA, USB, SAS, flash, and NVMe devices. You cannot use IDE/ATA or USB devices to store virtual machines; any of the other types can host VMs. The problem with local storage is that the server is a single point of failure, or SPOF. If the server fails, no other server can access the VMs. There is a special configuration that does allow sharing local storage, however, and that is vSAN. vSAN requires flash drives for the cache tier and either flash or regular spinning disks for the capacity tier. These are aggregated across servers and collected into a single datastore. VMs are duplicated across servers, so if one server goes down, access is still retained and the VMs can still be started and accessed.
Network Storage
Network storage consists of dedicated enclosures that have controllers running a specialized OS. There are several types, but they share some things in common: they use a high-speed network to share the storage, and they allow multiple hosts to read and write to the storage concurrently. You connect to a given LUN through only one protocol, but you can use multiple protocols on a host for different LUNs.
Fibre Channel, or FC, is a specialized type of network storage. FC uses specific adapters that allow your server to access it, known as Fibre Channel Host Bus Adapters, or HBAs. Fibre Channel typically uses fiber-optic cabling to transport its signal, but occasionally copper is used. Another type of Fibre Channel can connect over a regular Ethernet LAN; it is known as Fibre Channel over Ethernet, or FCoE.
iSCSI is another storage type supported by vSphere. It uses regular Ethernet to transport data. Several types of adapters are available to communicate with the storage device. You can use a hardware iSCSI adapter or a software one. If you use a hardware adapter, the server offloads the SCSI and possibly the network processing. There are dependent and independent hardware adapters: the first still needs to use the ESXi host’s networking, while independent hardware adapters offload both the iSCSI and the network processing to the card. A software iSCSI adapter uses a standard Ethernet adapter, and all the processing takes place in the host’s CPU.
VMware also supports a newer type of adapter known as iSER, or iSCSI Extensions for RDMA. This allows ESXi to use the RDMA protocol instead of TCP/IP to transport iSCSI commands, and it is much faster.
Finally, vSphere also supports the NFS 3 and 4.1 protocols for file-based storage. Unlike the rest of the storage mentioned above, this is presented to the host as a file share instead of block-level raw disks. Here is a small table on networked storage for easier perusal.
Technology | Protocol | Transfer | Interface |
Fibre Channel | FC/SCSI | Block access | FC HBA |
Fibre Channel over Ethernet (FCoE) | FCoE/SCSI | Block access | Converged network adapter (hardware FCoE) or NIC with FCoE support (software FCoE) |
iSCSI | IP/SCSI | Block access | iSCSI HBA (hardware iSCSI) or network adapter (software iSCSI) |
NAS | IP/NFS | File level | Network adapter |
NIOC = Network I/O Control
SIOC = Storage I/O Control
Network I/O Control allows you to determine and shape bandwidth for your vSphere networks. It works in conjunction with network resource pools to let you allocate bandwidth to specific types of traffic. You enable NIOC on a vSphere Distributed Switch and then set shares according to your needs in the configuration of the VDS. This is a feature requiring Enterprise Plus licensing or higher. Here is what it looks like in the UI.
Storage I/O Control allows cluster-wide storage I/O prioritization. You can control the amount of storage I/O allocated to virtual machines so that important virtual machines get preference over less important ones. This is accomplished by enabling SIOC on the datastore and then setting shares and an upper IOPS limit per VM. SIOC is enabled by default on SDRS clusters. Here is what the screen looks like to enable it.
There are several tools you can use to make managing your inventory easier. vSphere allows you to use multiple types of folders to hold your vCenter inventory. Folders can also be used to assign permissions and set alarms on objects. You can put multiple objects inside a folder, but only one type of object per folder. For example, if you had VMs inside a folder, you wouldn’t be able to add a host to it.
vApps are another way to manage objects. They can be used to manage other attributes as well: you can assign resources and even a startup order with vApps.
You can use tags and categories to better organize your inventory and make it searchable. You create them from the main menu, under the item called Tags & Custom Attributes.
You can create categories such as “Operating Systems” and then tags such as “Windows 2012” and others. This sort of thing makes your VMs easier to manage and search for. You can then see the tags on the summary page of the VM, as shown here.
Tags can be used for rules on VMs too. You can see this (although a bit branded) by reading a blog post I wrote for Rubrik here.
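If you ever need to create categories and tags in bulk, the vSphere Automation REST API can do it. Here is a rough Python sketch using the requests library against the 6.x-era /rest endpoints; the vCenter address, credentials, and the category/tag names are placeholders, and you should double-check the endpoint paths against the API reference for your build.

```python
# Sketch: create a tag category and a tag through the vSphere Automation REST API.
# vCenter address, credentials, and names are placeholders; endpoints are the 6.x /rest style.
import requests

VC = "https://vcsa01.lab.local"
s = requests.Session()
s.verify = False  # lab only; use proper certificates in production

# Log in and store the API session token
resp = s.post(f"{VC}/rest/com/vmware/cis/session",
              auth=("administrator@vsphere.local", "VMware1!"))
s.headers["vmware-api-session-id"] = resp.json()["value"]

# Create a category, e.g. "Operating Systems"
category = {"create_spec": {"name": "Operating Systems",
                            "description": "Guest OS family",
                            "cardinality": "SINGLE",
                            "associable_types": []}}
cat_id = s.post(f"{VC}/rest/com/vmware/cis/tagging/category", json=category).json()["value"]

# Create a tag in that category, e.g. "Windows 2012"
tag = {"create_spec": {"name": "Windows 2012",
                       "description": "Windows Server 2012 guests",
                       "category_id": cat_id}}
tag_id = s.post(f"{VC}/rest/com/vmware/cis/tagging/tag", json=tag).json()["value"]
print("Created tag", tag_id, "in category", cat_id)
```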
HA is a feature designed for VM resilience; the other two, DRS and SDRS, are for managing resources. HA stands for High Availability. HA works by pooling all the hosts and VMs into a cluster. Hosts are monitored, and in the event of a failure, VMs are restarted on another host.
DRS stands for Distributed Resource Scheduler. This is also a feature used on a host cluster. DRS is a vSphere feature that will relocate VMs, or make recommendations on host placement, based on current load.
Finally, SDRS is Storage DRS, or Distributed Resource Scheduling for storage. This is enabled on a datastore cluster and, just like DRS, will relocate the virtual disks of a VM or make recommendations based on space usage and I/O load.
You can adjust whether DRS/SDRS takes actions automatically or just makes recommendations.
The official description of a resource pool is a logical abstraction for flexible management of resources. My unofficial description is a construct inside vSphere that allows you to partition and control resources for specific VMs. Resource pools partition memory and CPU resources.
You start with the root resource pool. This is the pool of resources that exists at the host or cluster level. You don’t see it, but it’s there. You then create resource pools under it that cordon off resources. It’s also possible to nest resource pools. For example, if you had a company, and inside that company you had departments, you could partition resources first for the company and then for the departments. This works as a hierarchy. When you create a child resource pool from a parent, you are further dividing up the parent’s resources, unless you allow the child to draw more from further up the hierarchy.
Why use resource pools? You can delegate control of resources to other people. There is isolation between pools, so the resources of one don’t affect another. You can use resource pools to delegate permissions and access to VMs. Resource pools are abstracted from the hosts’ resources, so you can add and remove hosts without having to make changes to resource allocations.
You can identify resource pools by their icon.
When you create a resource pool, you have a number of options you will need to make decisions on.
Shares – Shares can be any arbitrary number you make up. All the shares from all the sibling resource pools are added up to a total, and each pool’s portion of that total is its portion of the parent (root) pool. For example, if you have two pools that each have 8,000 shares, there is a total of 16,000 shares and each resource pool makes up half of the total, or 8,000/16,000. There are also default options available in the form of Low, Normal, and High, which equal 1,000, 2,000, and 4,000 shares respectively.
Reservations – This is a guaranteed allocation of CPU or memory resources you are giving to that pool. The default is 0. Reserved resources are held by that pool regardless of whether there are VMs inside it or not.
Expandable Reservation – This is a check box that allows the pool to “borrow” resources from its parent resource pool. If this is a top-level pool, then it will borrow from the root pool.
Limits – These specify the upper limit of CPU or memory resources a resource pool can use. When I taught VMware’s courses, the guidance was not to use limits unless there is a definite reason or need for them. While shares only come into play when there is contention (VMs fighting over resources), limits create a hard stop for the VMs even if plenty of resources are free. Usually there is no reason to limit how much a VM can use when there is no contention.
In past exams, there were questions asking you to calculate resources given a number of resource pools. Make sure you go over how to do that; there is a small worked example below.
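To that end, here is a tiny standalone Python sketch of the share math; the pool names and numbers are made up. It splits a parent pool’s CPU capacity across its child pools in proportion to their shares, which is exactly the arithmetic those questions expect.

```python
# Toy example: CPU entitlement under full contention, based purely on shares.
# Pool names and numbers are made up for illustration.
total_cpu_mhz = 20000  # capacity of the parent (root) pool

pools = {
    "Production": 8000,  # shares
    "Test":       4000,
    "Dev":        2000,
}

total_shares = sum(pools.values())
for name, shares in pools.items():
    entitlement = total_cpu_mhz * shares / total_shares
    print(f"{name}: {shares} shares -> {shares / total_shares:.0%} of the parent, "
          f"about {entitlement:.0f} MHz under contention")
```

Remember that shares only matter when there is contention; reservations and limits are absolute values that apply regardless.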
VDS and VSS are the networking constructs in vSphere. VDS is the vSphere Distributed Switch and VSS is the vSphere Standard Switch.
The Standard Switch is the base switch. It is what is installed by default when ESXi is deployed. It has only a few features and requires you to configure a switch on every host. As you can imagine, this can get tedious, and it is difficult to keep the configuration exactly the same on every host, which is what you need to do in order for VMs to seamlessly move across hosts. You could create a host profile template to make sure they stay the same, but then you lose the dynamic nature of switches.
Standard switches create a link between physical NICs and virtual NICs. You can name them essentially whatever you want, and you can assign VLAN IDs. You can shape traffic, but only outbound. Here is a picture I lifted from the official documentation as a pictorial representation of a VSS.
A VDS, on the other hand, adds a management plane to your networking. Why is this important? It allows you to control all your hosts’ networking through one UI. This does require vCenter and a certain level of licensing: Enterprise Plus or higher, unless you buy vSAN licensing. Essentially you are still adding a switch to every host, just a little bit fancier one that can do more things and that you only have to change once.
There are different versions of VDS you can create, based on the vSphere version they were introduced with. Each version has its own features; a higher version retains all the features of the lower ones and adds to them. Some of those features include Network I/O Control (NIOC), which allows you to shape your bandwidth both incoming and outgoing. The VDS also includes a rollback ability, so that if you make a change and the switch loses connectivity, it will revert the change automatically.
Here is a screenshot of me making a new VDS and some of the features that each version adds:
Here is a small table showing the differences between the switches.
Feature | vSphere Standard Switch | vSphere Distributed Switch |
VLAN Segmentation | Yes | Yes |
802.1q tagging | Yes | Yes |
NIC Teaming | Yes | Yes |
Outbound traffic shaping | Yes | Yes |
Inbound traffic shaping | No | Yes |
VM port blocking | No | Yes |
Private VLANs | No | Yes (3 Types – Promiscuous, Community, Isolated) |
Load Based Teaming | No | Yes |
Network vMotion | No | Yes |
NetFlow | No | Yes |
Port Mirroring | No | Yes |
LACP support | No | Yes |
Backup and restore network configuration | No | Yes |
Link Layer Discovery Protocol | No | Yes |
NIOC | No | Yes |
A vSphere cluster is a group of ESXi host machines. When they are grouped together, vSphere aggregates all of the resources of each host and treats them like a single pool. There are a number of features and capabilities you only get with clusters. Here is a screenshot of what you have available to you. I will now go over them.
Under Services you can see DRS and vSphere Availability (HA). You also see vSAN on the list, as vSAN requires a cluster as well. We’ve already covered HA and DRS a bit but there are more features in each.
DRS
DRS Automation – This option lets vSphere make VM placement decisions, or just recommendations for placement. I trust it with Fully Automated, as you can see in the window above. There are a few situations here and there where you might not want to, but 90% of the time I would say trust it. One of the small use cases where you might turn it off is something like vCD deployments, but you could also just turn down the sensitivity instead. You have the following configuration options:
Automation
Additional Options
Power Management
vSphere Availability (HA)
There are a number of options to configure here. Most defaults are decent if you don’t have a specific use case. Let’s go through them.
Clusters allow for more options than I’ve already listed. You can set up affinity and anti-affinity rules. These are rules set up to keep VMs on certain hosts, or away from others. You might want a specific VM running on a certain host due to licensing, or because of a specific piece of hardware only that host has. Anti-affinity rules might be set up for something like domain controllers. You wouldn’t place them on the same host, for availability reasons, so you would set up an anti-affinity rule so that they would always be on different hosts.
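These cluster settings can also be driven through the API rather than the UI. Below is a rough pyvmomi sketch, with placeholder vCenter, cluster, and VM names, that enables DRS in fully automated mode, turns on HA, and adds an anti-affinity rule to keep two domain controllers apart. It is one way to do it under those assumptions, not the only way.

```python
# Sketch: enable DRS + HA and add an anti-affinity rule on a cluster with pyvmomi.
# vCenter address, credentials, cluster name, and VM names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcsa01.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.content

cluster = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")
dc01 = find_by_name(content, vim.VirtualMachine, "dc01")
dc02 = find_by_name(content, vim.VirtualMachine, "dc02")

# Keep the two domain controllers on different hosts
anti_affinity = vim.cluster.AntiAffinityRuleSpec(
    name="Separate-Domain-Controllers", enabled=True, vm=[dc01, dc02])

spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True, defaultVmBehavior="fullyAutomated"),
    dasConfig=vim.cluster.DasConfigInfo(enabled=True, hostMonitoring="enabled"),
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=anti_affinity)],
)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
print("Cluster reconfigure task started:", task.info.key)
Disconnect(si)
```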
EVC Mode is also a cool option enabled by clusters. EVC, or Enhanced vMotion Compatibility, allows you to mix hosts of different generations and still migrate VMs between them. Different generations of processors have different features and options on them. EVC masks the newer ones so there is a level feature set across the cluster. This means you might not receive all the benefits of the newer processors, though. A lot of newer processors are more efficient and therefore run at lower clock speeds; if you mask off those efficiencies, you are just left with the lower clock speeds. Be mindful of that when you use it. You can also enable EVC on a per-VM basis, which makes it more useful.
A VM is nothing more than files and software, with the hardware emulated. It makes sense, then, to understand the files that make up a VM. Here is a picture, lifted from VMware’s book, depicting files you might see in a VM’s folder.
Now for an explanation of those files.
There are several ways to move VMs around in your environment. vMotion and Storage vMotion are two types of migration. The first thing I would do when I taught this was ask: what do you really need to move in order to move a running VM? The main piece of what makes up a running VM is its memory; CPU resources are only used briefly. When you perform a vMotion, what you are really doing is just moving active memory to a different host. The new host then starts working on tasks with its own CPUs. All the pointers in the files that originally pointed to the first host have to be changed as well. So how does this work?
Storage vMotion is moving the VM’s files to another datastore. Let’s go through the steps.
This is slightly different from the vMotion process, as it only needs one pass to copy all the files, thanks to the mirror driver.
There is one other type of migration, called Cross-Host vSphere vMotion or Enhanced vMotion, depending on who you ask. This is a combination of vMotion and Storage vMotion at the same time. It is also notable because it allows you to migrate a VM that is sitting on local storage.
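If you’re curious what that looks like outside the UI, here is a rough pyvmomi sketch of a combined compute-plus-storage migration; the vCenter address, credentials, VM, host, and datastore names are placeholders. Dropping the datastore from the spec gives you a plain vMotion, and dropping the host gives you a Storage vMotion.

```python
# Sketch: combined host + datastore migration (vMotion + Storage vMotion) with pyvmomi.
# vCenter address, credentials, and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcsa01.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.content

vm = find_by_name(content, vim.VirtualMachine, "app01")
dest_host = find_by_name(content, vim.HostSystem, "esxi02.lab.local")
dest_ds = find_by_name(content, vim.Datastore, "datastore2")

spec = vim.vm.RelocateSpec(
    host=dest_host,
    pool=dest_host.parent.resourcePool,  # root resource pool of the destination host/cluster
    datastore=dest_ds,
)
task = vm.RelocateVM_Task(spec=spec)
print("Migration task started:", task.info.key)
Disconnect(si)
```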
There are limitations on vMotion and Storage vMotion. You need to be using the same type of CPU (Intel or AMD) and the same generation, unless you are using EVC. You should also make sure you don’t have any hardware attached that the new host can’t support, such as CD-ROM drives. vMotion will usually perform checks before you initiate it and let you know if there are any issues. You can migrate up to 4 VMs at the same time per host on a 1 Gbps network, or 8 VMs per host on a 10 Gbps network. 128 concurrent vMotions is the limit per VMFS datastore.