Hello again. My 2019 VCP Study Guide was well received, so, to help the community further, I decided to embark on another exam study guide with vSphere 7. This guide is exciting for me to write due to the many new things I’ll get to learn myself, and I look forward to learning with everyone.
I am writing this guide pretty much how I talk and teach in real life, with a bit of Grammarly on the back end to make sure I don’t go completely off the rails. You may also find the formatting a little weird. This is because I plan on taking these posts and binding them into a single guide at the end of this blog series. I will try to finish a full section per blog post unless it gets too large. I don’t have the attention span to read huge technical blogs in one sitting, and I find most people learn better with smaller chunks of information at a time. (I wrote this before I saw the first section.)
In these endeavors, I personally always start with the Exam Prep guide, which can be found on VMware’s website here. The official code for this exam is 2V0-21.20, and the cost is $250.00. There are a total of 70 questions with a duration of 130 minutes. The passing score, as always, is 300 on a scale of 100-500. The exam questions are presented in single-choice and multiple-choice formats. You can now take these exams online, in the comfort of your own home. A webcam is required; you need to pan the room with it at the beginning of the session, and it needs to stay on the whole time.
The exam itself focuses on the following topics:
Each of these topics is covered in the class materials for the Install, Configure, Manage or Optimize and Scale classes, or in supplemental papers from VMware on the web. Let’s begin with the first topic.
A vSphere implementation or deployment has two main parts: the ESXi server and vCenter Server.
The first is the hypervisor itself, the ESXi server. The ESXi host is the piece of the solution that actually runs virtual machines and other components of the solution (such as NSX kernel modules). It provides the compute, memory, and in some cases, storage resources a company runs on. There are requirements a server needs to meet to run ESXi. They are:
vSphere 7.0 can be installed in UEFI BIOS mode or regular old BIOS mode. If using UEFI, you have a wider variety of drives you can boot from. Once you install using one of those modes (UEFI or legacy), it is not advisable to change it afterward. If you do, you may be required to reinstall. The error you might receive is “Not a VMware boot bank.”
One significant change in vSphere 7.0 is system storage requirements. ESXi 7.0 system storage volumes can now occupy up to 138 GB of space. A VMFS datastore is only created if there is an additional 4 GB of space. If a “local” disk isn’t found, ESXi operates in a degraded mode where the scratch partition is placed on a RAM disk, that is, entirely in RAM. This is not persistent through reboots of the physical machine, and ESXi displays an unhappy message until you specify a location for the scratch partition.
That being said, you CAN install ESXi 7 on a USB device as small as 8 GB. You should, if at all possible, use a larger flash device. Why? ESXi uses the additional space for an expanded core dump file, and it uses the extra memory cells to prolong the life of the media. So try to use a 32 GB or larger flash device.
With the increased usage of flash media, VMware saw fit to talk about it in the install guide. In this case, it specifically calls out M.2 and other non-USB low-end flash media. There are many types of flash media on the market with different purposes: mixed-use, high-performance, and more. The use case should determine the type of drive you buy. VMware recommends you don’t use low-end flash media for datastores because VMs cause a high level of wear quickly, possibly causing the drives to fail prematurely.
While the exam guide doesn’t ask me to call this out, I thought it would be good to show a picture of how the OS disk layout differs from the previous version of ESXi. You should know that once you upgrade a boot drive from the previous version, you can’t roll back.
The ESXi host has the resources and runs the virtual machines. In anything larger than a few hosts, management becomes an issue. vCenter Server allows you to manage and aggregate all your server hardware and resources. But vCenter Server allows you to do so much more. Using vCenter Server, you can keep tabs on performance and licensing, and update software. You can also do advanced tasks such as moving virtual machines around your environment. Now that you realize you MUST have one, let’s talk about what it is and what you need.
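To make “central management” a little more concrete, here is a minimal Python (pyVmomi) sketch that connects to a vCenter and lists every ESXi host it manages. The hostname and credentials are placeholders for your own environment, and the unverified SSL context is for labs only.

```python
# Minimal sketch: one connection to vCenter can enumerate every host it manages.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; use real certificates in production
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for every ESXi host vCenter knows about
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    print(host.name, host.summary.overallStatus)

Disconnect(si)
```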
vCenter is deployed on an ESXi host, so you have to have one of those running first. It is deployed to the ESXi host using its included installer, not the way you would deploy an OVA. The appliance itself has been updated from previous versions. It now contains the following:
But wait… there used to be a vCenter Server and a Platform Services Controller? You are correct. Going forward, for simplicity and a cleaner design, VMware has combined all services into a single VM. So what services are actually on this machine now? I’m glad you asked.
Now that we have covered the components, let’s talk deployment. You can install vCenter Server using either the GUI or the CLI. If using the GUI installer, there are two stages. The first stage deploys the appliance files to the ESXi host. The second stage configures the appliance using the parameters you feed into it. The hardware requirements have changed from the previous version as well. Here is a table showing the changes in green.
Topology is a lot simpler to talk about going forward because there is only a flat topology. There are no separate vCenter Server and Platform Services Controller roles anymore; everything is consolidated into one machine. If you are running a previous version and have broken vCenter Server out into those roles, don’t despair! VMware has created tools that allow you to consolidate them back. There are a few things to add to that.
First, Enhanced Link Mode. This is where you can log into one vCenter and manage up to 15 total vCenter instances in a single Single Sign-On domain. This is where the flat topology comes in. Enhanced Link Mode is set up during the installation of vCenter. Once you exceed the limits of a vCenter, you install a new one and link it. There is also vCenter Server High Availability. Later on in this guide, we cover how it’s configured. For now, here is a quick overview of what it is.
vCenter High Availability is a mechanism that protects your vCenter Server against host and hardware failures. It also helps reduce downtime associated with patching your vCenter Server. It does this by using three VMs: two full VCSA nodes and a witness node. One VCSA node is active and one passive. They are connected by a vCenter HA network, which is created when you set this up. This network is used to replicate data between the nodes and for connectivity to the witness node.
For a quick look at vCenter limits compared to the previous version:
The section I wrote in the previous guide still covers this well, so I am using that.
Local Storage
Local storage is storage connected directly to the server. This includes a Direct Attached Storage (DAS) enclosure that connects to an external SAS card or storage in the server itself. ESXi supports SCSI, IDE, SATA, USB, SAS, flash, and NVMe devices. You cannot use IDE/ATA or USB devices to store virtual machines; any of the other types can host VMs. The problem with local storage is that the server is a single point of failure, or SPOF. If the server fails, no other server can access the VM. There is a unique configuration that does allow sharing local storage, however, and that is vSAN. vSAN requires flash drives for cache and either flash or regular spinning disks for capacity drives. These are aggregated across servers and collected into a single datastore or drive. VMs are duplicated across servers, so if one goes down, access is still retained, and the VM can still be started and accessed.
Network Storage
Network storage consists of dedicated enclosures with controllers that run a specialized OS. There are several types, but they share some things in common. They use a high-speed network to share the storage, and they allow multiple hosts to read and write to the storage concurrently. You connect to a single LUN through only one protocol, but you can use multiple protocols on a host for different LUNs.
Fibre Channel, or FC, is a specialized type of network storage. FC uses specific adapters that allow your server to access it, known as Fibre Channel Host Bus Adapters, or HBAs. Fibre Channel typically uses fiber-optic (glass) cables to transport its signal, but occasionally uses copper. Another type of Fibre Channel can connect over a regular LAN; it is known as Fibre Channel over Ethernet, or FCoE.
iSCSI is another storage type supported by vSphere. It uses regular Ethernet to transport data. Several types of adapters are available to communicate with the storage device. You can use a hardware iSCSI adapter or a software one. If you use a hardware adapter, the server offloads the SCSI and possibly the network processing. There are dependent hardware and independent hardware adapters. The first still needs to use the ESXi host’s networking; independent hardware adapters can offload both the iSCSI and network processing. A software iSCSI adapter uses a standard Ethernet adapter, and all the processing takes place in the host’s CPU.
VMware supports a new type of adapter known as iSER, or iSCSI Extensions for RDMA. It allows ESXi to use the RDMA protocol instead of TCP/IP to transport iSCSI commands and is much faster.
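Tying the adapter types together, here is a hedged pyVmomi sketch that lists a host’s storage adapters and flags the iSCSI ones. It assumes an existing connection (“si”) like the one in the earlier snippet; everything else is a placeholder.

```python
# Hedged sketch: enumerate each host's storage adapters and call out iSCSI HBAs.
from pyVmomi import vim

content = si.RetrieveContent()   # "si" from an earlier SmartConnect() call
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba):
            label = "iSCSI HBA (software, dependent, or independent hardware)"
        else:
            label = type(hba).__name__   # e.g. Fibre Channel or block HBA
        print(host.name, hba.device, hba.model, label)
```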
Finally, vSphere also supports the NFS 3 and 4.1 protocols for file-based storage. This type of storage is presented as a share to the host instead of block-level raw disks. Here is a small table on networked storage for more leisurely perusal.
Technology | Protocol | Transfer | Interface |
Fibre Channel | FC / SCSI | Block access | FC HBA |
Fibre Channel over Ethernet (FCoE) | FCoE / SCSI | Block access | Converged network adapter (hardware FCoE) or NIC with FCoE support (software FCoE) |
iSCSI | IP / SCSI | Block access | iSCSI HBA (hardware iSCSI) or network adapter (software iSCSI) |
NAS | IP / NFS | File level | Network adapter |
vSphere supports several different types of datastores. Some of them have features tied to particular versions, which you should know. Here are the types:
This is the first time I’ve seen this covered in an objective. I like that some of the objectives are covering more in-depth material. It’s hard to convey the importance of these APIs without describing them and what they do a bit, so I will explain what they are and then why they are essential.
Especially with some of the technology VMware offers (vSAN), these APIs are undoubtedly helpful for sysadmins and your infrastructure. Being able to determine storage health and properly apply a customer’s requirements to a VM is essential for the business.
Storage Policies are a mechanism by which you can assign storage characteristics to a specific VM. Let me explain. Say you have a critical VM, and you want to make sure it sits on a datastore that is backed up every 4 hours. Using Storage Policies, you can assign that requirement to the VM and ensure that the only datastores it can use are ones that satisfy it. Or say you need to limit a VM to a specific performance level; you can do that via Storage Policies too. You can create policies based on the capabilities of your storage array, or you can even create them using tags. To learn even more, you can read about it in VMware’s documentation here.
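As a rough illustration (not an official recipe), here is a hedged pyVmomi sketch of attaching an existing storage policy to a VM by its profile ID. The VM object and the profile ID are placeholders; in practice, you would pull the ID from the SPBM API or copy it from the UI.

```python
# Hedged sketch: point a VM's home files at an existing storage policy by ID.
from pyVmomi import vim

spec = vim.vm.ConfigSpec()
# The profile ID below is a made-up placeholder, not a real policy.
spec.vmProfile = [vim.vm.DefinedProfileSpec(profileId="aa000000-placeholder-id")]
task = vm.ReconfigVM_Task(spec=spec)   # "vm" is a vim.VirtualMachine looked up elsewhere
```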
K8s
I couldn’t find this in the materials listed, so I went hunting. For anyone wanting to read more about it, I found the info HERE.
vSphere with Kubernetes supports three types of storage.
vSAN
vSAN is converged, software-defined storage that uses local storage on all nodes and aggregates it into a single datastore. This datastore is usable by all machines in the vSAN cluster.
A minimum of 3 hosts is required; they must be part of a vSphere cluster with vSAN enabled. Each ESXi host has a minimum of 1 flash cache disk and 1 spinning or flash capacity disk. A max of 7 capacity disks can be in a single disk group, and up to 5 disk groups can exist per host.
vSAN is object-based, uses a proprietary VMware protocol to communicate over the network, and uses policies to enable the features VMs need. You can use policies to require multiple copies of data, throttle performance, or set stripe requirements.
vVols
vVols shakes storage up a bit. How so? Typically you would carve storage out into LUNs, and then you would create datastores on them. The storage administrator would be drawn into architectural meetings with the virtualization administrators to decide on storage schemas and layouts. This had to be done in advance, and it was difficult to change later if something different was needed.
Another problem was that characteristics such as speed or functionality were controlled at the datastore level. Multiple VMs are stored on the same datastore, and if they require different things, it is challenging to meet their needs. vVols helps change that. It improves granular control, allowing you to cater storage functionality to the needs of individual VMs.
vVols map virtual disks and their different pieces, such as clones, snapshots, and replicas, directly to objects (virtual volumes) on a storage array. Doing this allows vSphere to offload tasks such as cloning and snapshots to the storage array, freeing up resources on the host. Because you are creating individual volumes for each virtual disk, you can apply policies at a much more granular level, giving you better control over aspects such as performance.
vVols creates a minimum of three virtual volumes, the data-vVol (virtual disk), config-vVol (config, log, and descriptor files), and swap-vVol (swap file created for VM memory pages). It may create more if there are other features used, such as snapshots or read-cache.
vVols start with a storage container created on the storage array. The storage container is a pool of raw storage the array makes available to vSphere. Then you register the storage provider with vSphere, create datastores in vCenter, and create storage policies for them. Next, you deploy VMs to the vVols datastores, and they send data by way of Protocol Endpoints. The best picture I’ve seen comes from VMware’s Fast Track v7 course, so I’m going to lift it and use it here.
NIOC = Network I/O Control
SIOC = Storage I/O Control
Network I/O Control allows you to determine and shape bandwidth for your vSphere networks. It works in conjunction with Network Resource Pools to let you determine the bandwidth for specific types of traffic. You enable NIOC on a vSphere Distributed Switch and then set shares according to your needs in the configuration of the VDS. This is a feature requiring Enterprise Plus licensing or higher. Here is what it looks like in the UI.
Storage I/O Control allows cluster-wide storage I/O prioritization. You can control the amount of storage I/O allocated to virtual machines so that critical virtual machines get preference over less critical ones. This is accomplished by enabling SIOC on the datastore and setting shares and an upper IOPS limit per VM. SIOC is enabled by default on SDRS clusters. Here is what the screen looks like to enable it.
Instant Clone technology is not new. It was around in the vSphere 6.0 days but was called VMFork back then. But what is it? It allows you to create powered-on virtual machines from the running state of another. How? The source VM is stunned for a short period. During this time, a new delta disk is created for each virtual disk, and a checkpoint is created and transferred to the destination virtual machine. Everything is identical to the original VM. So identical that you need to customize the virtual hardware to prevent MAC address conflicts, and you must manually customize the guest OS. Instant clones are created using API calls (there’s a quick sketch of the call a couple of paragraphs down).
Going a little further in-depth, using William Lam’s and Duncan Epping’s blog posts here and here, we learn that as of vSphere 6.7, we can use vMotion, DRS, and other features with these instant clones. Transparent Page Sharing is used between the source and destination VMs. There are two ways instant clones are created. One is the Running Source VM Workflow, where a delta disk is created on the source VM for each destination VM. This workflow can cause issues as more clones are created due to the excessive number of delta disks on the source VM. The second is the Frozen Source VM Workflow. This workflow uses a single delta disk on the source VM and a single delta disk on each of the destination VMs, which is much more efficient. If you visit their blogs linked above, you can see diagrams depicting the two workflows.
Use cases (per Duncan) are VDI, Container hosts, Hadoop workers, Dev/Test, and DevOps.
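Here is the API call mentioned above, as a hedged pyVmomi sketch (this API shipped with vSphere 6.7). The source VM object and the clone name are placeholders, and a real workflow would also handle the guest customization described earlier.

```python
# Hedged sketch: the instant-clone call itself. "source_vm" must be a running VM.
from pyVmomi import vim

spec = vim.vm.InstantCloneSpec()
spec.name = "web-clone-01"                 # placeholder clone name
spec.location = vim.vm.RelocateSpec()      # defaults: same host/datastore as the source
task = source_vm.InstantClone_Task(spec=spec)
```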
vSphere’s Distributed Resource Scheduler is a tool used to keep VMs running smoothly. It does this, at a high level, by monitoring the VMs and migrating them to the hosts that allow them to run best. In vSphere 6.x, DRS ran every 5 minutes and concentrated on making sure the hosts were happy and had plenty of free resources. In vSphere 7, DRS runs every 60 seconds and is much more concentrated on VMs and their “happiness.” DRS scores each VM and, based on that, migrates it or makes recommendations, depending on what DRS is set to do. There is a bit more depth on this in objective 1.6.3.
EVC, or Enhanced vMotion Compatibility, allows you to take hosts with different processor generations and still combine them and their resources in a cluster. Different generations of processors have different feature sets and options. EVC masks the newer ones so there is a level feature set across the cluster. Setting EVC means you might not receive all the benefits of newer processors. Why? A lot of newer processors are more efficient and therefore have lower clock speeds. If you mask off their newer feature sets (in some cases, how they are faster), you are left with just the lower clock speeds. Starting with vSphere 6.7, you can enable EVC on a per-VM basis, allowing for migration to different clusters or across clouds. EVC becomes part of the VM itself. To enable per-VM EVC, the VM must be off. If cloned, the VM retains the EVC attributes.
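For a quick way to see EVC in your own environment, here is a tiny hedged pyVmomi sketch that reads the cluster’s EVC mode and a VM’s minimum required EVC mode. The “cluster” and “vm” objects are placeholder inventory lookups (as in the earlier container-view example).

```python
# Hedged sketch: read-only EVC checks. Both objects are looked up elsewhere.
print("Cluster EVC mode:", cluster.summary.currentEVCModeKey)        # e.g. "intel-skylake"
print("VM min required EVC mode:", vm.runtime.minRequiredEVCModeKey) # per-VM requirement
```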
VM “happiness” is the concept that a VM has an ideal or best-case throughput (or resource usage) and an actual throughput. If there is no contention or competition on a host for a resource, those two should match, which makes the VM’s “happiness” 100%. DRS looks at the other hosts in the cluster to determine if one can provide a better score for the VM; if so, it migrates the VM or recommends it be moved. Several costs are weighed to see if it makes sense to move it: CPU costs, memory costs, networking costs, and even migration costs. A lower score does not necessarily mean that the VM is running poorly. Why? Some of the costs taken into account include whether the host can accommodate a burst in that resource. Here is the actual equation (thanks, Niels Hagoort):
Keep in mind that the score is not a health score but an indicator of resource contention. A higher number indicates less resource contention and that the VM is receiving the resources it needs to perform.
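To make the idea a bit more concrete (this is not the actual math, see Niels Hagoort’s post for that), here is a purely illustrative Python sketch of “actual versus ideal” throughput rolling up into a score:

```python
# Purely illustrative: per-resource "goodness" is actual throughput over ideal
# throughput; here I assume the worst resource dominates the final percentage.
def resource_efficiency(actual, ideal):
    return min(actual / ideal, 1.0)

def vm_drs_score(cpu, mem, net):
    """cpu/mem/net are (actual, ideal) pairs; returns a 0-100 percentage."""
    efficiencies = [resource_efficiency(a, i) for a, i in (cpu, mem, net)]
    return round(min(efficiencies) * 100)

print(vm_drs_score(cpu=(1800, 2000), mem=(7.5, 8.0), net=(95, 100)))  # -> 90
```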
vSphere HA, or High Availability, is a feature designed for VM resilience. Hosts and VMs are monitored, and in the event of a failure, VMs are restarted on another host.
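Before we get into the options, here is a hedged pyVmomi sketch of simply turning HA on for a cluster; most of the settings discussed below hang off this same dasConfig object. The “cluster” object is a placeholder lookup.

```python
# Hedged sketch: enable vSphere HA on an existing cluster object.
from pyVmomi import vim

spec = vim.cluster.ConfigSpecEx()
spec.dasConfig = vim.cluster.DasConfigInfo(enabled=True)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```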
There are several options to configure. Most of the defaults work well unless you have a specific use case. Let’s go through them:
VDS and VSS are networking objects in vSphere. VDS stands for Virtual Distributed Switch, and VSS is Virtual Standard Switch.
The Virtual Standard Switch is the default switch. It is what the installer creates when you deploy ESXi. It has only a few features and requires you to configure a switch on every host manually. As you can imagine, this is tedious, and it’s difficult to configure them identically every time, which is what you need to do for VMs to move across hosts seamlessly. (You could create a host profile template to make sure they are the same.)
Standard Switches create a link between physical NICs and virtual NICs. You can name them essentially whatever you want, and you can assign VLAN IDs. You can shape traffic but only outbound. Here is a picture I lifted from the official documentation for a pictorial representation of a VSS.
A VDS, on the other hand, adds a management plane to your networking. Why is this important? It allows you to control all host networking through one UI. Distributed switches require vCenter and a certain level of licensing (Enterprise Plus or higher, unless you buy vSAN licensing). Essentially, you are still adding a switch to every host, just a fancier one that can do more things and that you only have to change once to change all hosts.
There are different versions of VDS you can create, based on the vSphere version in which they were introduced. Each newer version adds features; a higher version retains all the features of the lower one and adds to it. Some features include Network I/O Control (NIOC), which allows you to shape both incoming and outgoing bandwidth. The VDS also includes a rollback ability, so if you make a change and it causes a loss of connectivity, the change is reverted automatically.
Here is a screenshot of me making a new VDS and some of the features that each version adds:
Here is a small table showing the differences between the switches.
Feature | vSphere Standard Switch | vSphere Distributed Switch |
VLAN Segmentation | Yes | Yes |
802.1q tagging | Yes | Yes |
NIC Teaming | Yes | Yes |
Outbound traffic shaping | Yes | Yes |
Inbound traffic shaping | No | Yes |
VM port blocking | No | Yes |
Private VLANs | No | Yes (3 Types – Promiscuous, Community, Isolated) |
Load Based Teaming | No | Yes |
Network vMotion | No | Yes |
NetFlow | No | Yes |
Port Mirroring | No | Yes |
LACP support | No | Yes |
Backup and restore network configuration | No | Yes |
Link Layer Discovery Protocol | No | Yes |
NIOC | No | Yes |
VMkernel adapters are set up on the host so the host itself can interact with the network. Your management traffic and other host functions are taken care of by VMkernel adapters. The roles specifically are:
You should have a decent idea now of what a vSphere distributed switch is and what it can do. The next part is to show you what the pieces are and describe how to use them.
First, you need to create the vSphere distributed switch. Go to the networking tab by clicking on the globe in the HTML5 client. Then right-click on the datacenter and select Distributed Switch > New Distributed Switch
You must now give the switch a name – you should make it descriptive, so it’s easy to know what it does
Choose the version corresponding to the features you want to use.
You need to tell VMware how many uplinks per host you want to use. This is the number of physical NICs that are used by this switch. Also, select if you want to enable Network I/O Control and if you want vSphere to create a default port group for you – if so, give it a name.
Finish the wizard.
You can now look at a quick topology of the switch by clicking on the switch, then Configure and Topology.
After creating the vSphere distributed switch, hosts must be associated with it to use it. To do that, you can right-click on the vSphere distributed switch and click on Add and Manage Hosts.
You now have a screen that has the following options: Add Hosts, Manage host networking, and Remove hosts.
Since your switch is new, you need to Add hosts. Select that and on the next screen, click on New Hosts.
Select the hosts that you want to be attached to this switch and click OK and then Next again.
Now assign the physical NICs to an uplink and click Next
You can now move any VMkernel adapters over to this vSphere distributed switch if desired.
Same with VM networking
You can now complete it. And of course, you’ll notice you can make changes to all the hosts during the same process. This is one part of what makes vSphere distributed switches great.
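For completeness, the same outcome as the wizard can be scripted. Here is a hedged pyVmomi sketch that creates a distributed switch in a datacenter’s network folder; the names and uplink count are placeholders, the switch version/product details are left at their defaults, and NIOC and port groups would be separate calls.

```python
# Hedged sketch: create a distributed switch via the API.
from pyVmomi import vim

spec = vim.DistributedVirtualSwitch.CreateSpec()
spec.configSpec = vim.DistributedVirtualSwitch.ConfigSpec()
spec.configSpec.name = "DSwitch-Prod"      # placeholder switch name
spec.configSpec.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["uplink1", "uplink2"])
task = datacenter.networkFolder.CreateDVS_Task(spec)  # "datacenter" is a vim.Datacenter
```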
Networking policies are rules for how you want your virtual switches, both standard and distributed, to work. Several policies can be configured on your switches. They apply at the switch level; if needed, however, you CAN override them at the port group level. Here is a bit of information on them:
Virtual Standard Switch Policies:
vSphere Distributed Switch Policies:
One of the features you can take advantage of on a vSphere distributed switch is NIOC, or Network I/O Control. Why is this important? Using NIOC, you control your network traffic. You assign shares or priorities to specific types of traffic, and you can also set reservations and hard limits. To get to it, select the vSphere distributed switch and then, in the center pane, Configure, then Resource Allocation. Here is a picture of NIOC:
If you edit one of the data types, this is the box for that.
There are several settings to go through here. Let’s discuss them.
You can also set up a custom type of traffic with the Network Resource Pool.
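If you prefer to read those allocations from a script instead of the UI, here is a hedged pyVmomi sketch; the property names are my best reading of the vSphere API, and “dvs” is a placeholder lookup of the distributed switch.

```python
# Hedged sketch: print the NIOC shares/reservation/limit per traffic type.
for res in dvs.config.infrastructureTrafficResourceConfig:
    alloc = res.allocationInfo
    print(res.key,
          "shares:", alloc.shares.shares,
          "reservation:", alloc.reservation,
          "limit:", alloc.limit)
```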
Managing a large number of servers gets difficult and cumbersome quickly. In previous versions of vSphere, there was a tool called VUM, or vSphere Update Manager. VUM could do a limited number of things for us: upgrade and patch hosts, install and update third-party software on hosts, and upgrade virtual machine hardware and VMware Tools. This was useful but left a few important things out, such as hardware firmware and maintaining a baseline image for cluster hosts. Well, fret no more! Starting with vSphere 7, a new tool called Lifecycle Manager was introduced. Here are some of the things you can do:
Just as with VUM, you can download updates and patches from the internet, or you can download them manually for dark sites. Keep in mind that to use some of these features, you need to be running vSphere 7 on your hosts. Here is a primer for those that are new to this or those needing a refresher.
Baseline – this is a group of patches, extensions, or an upgrade. There are 3 default baselines in Lifecycle Manager: Host Security Patches, Critical Host Patches, and Non-Critical Host Patches. You cannot edit or delete these. You can create your own.
Baseline Group – a collection of non-conflicting baselines. For example, you can combine Host Security Patches, Critical Host Patches, and Non-Critical Host Patches into a single baseline group. You then attach it to an inventory object, such as a cluster or a host, and check the object for compliance. If it isn’t in compliance, remediation installs the updates. If the host can’t be rebooted right away, staging loads the software onto it first and waits to install it until a time of your choosing.
In vSphere 7, there are now Cluster baseline images. You set up an image and use that as the baseline for all ESXi 7.0 hosts in a cluster. Here is what that looks like:
In the image, you can see that you load an image of ESXi (the .zip file, not the ISO), and you can add a vendor add-on plus firmware and drivers. Components allow you to load individual VIBs (vSphere Installation Bundles) for hardware or features.
From the above, you can deduce that the new Lifecycle Manager will be a great help in managing host software and hardware.
vSAN is VMware’s in-kernel, software-defined storage solution that takes local storage and aggregates it into a single distributed datastore used by the cluster nodes. vSAN requires a cluster and hardware that has been approved and is on the vSAN hardware compatibility guide. vSAN is object-based; when you provision a VM, its pieces are broken down into specific objects. They are:
VMs are assigned storage policies, which are rules applied to the VM. Policies can cover availability, performance, or other storage characteristics that need to be assigned to the VM.
A vSAN cluster can be a “hybrid” or “all-flash” cluster. A hybrid cluster is made up of flash drives and rotational disks, whereas an all-flash cluster consists of just flash drives. Each host, or node, contributes at least one disk group to storage. Each disk group consists of 1 flash cache drive and 1-7 capacity drives, rotational or flash. A total of 5 disk groups can reside on a node, for a total of 40 disks. The cache disk in a hybrid cluster is used for read caching and write buffering (70% read, 30% write). In an all-flash cluster, the cache disk is used just for write buffering (up to 600 GB).
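Just to put some quick numbers on those limits, here is some purely illustrative Python (the drive size, node count, and the RAID-1/FTT=1 policy are my own assumptions, not requirements):

```python
# Quick arithmetic on the disk-group limits above, plus a rough usable-capacity
# estimate for a RAID-1 / FTT=1 policy (before overhead and slack space).
disk_groups = 5
cache_per_group, capacity_per_group = 1, 7
print("Max disks per node:", disk_groups * (cache_per_group + capacity_per_group))  # 40

capacity_tb_per_drive = 2          # assumed drive size
nodes = 4                          # assumed cluster size
raw_tb = nodes * disk_groups * capacity_per_group * capacity_tb_per_drive
ftt = 1                            # RAID-1 mirroring keeps ftt + 1 copies of each object
print("Raw TB:", raw_tb, "Rough usable TB:", raw_tb / (ftt + 1))
```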
vSAN clusters are limited by the vSphere maximum of 64 nodes per cluster but typically use a max of 32. You can scale up, out, or back, and vSAN supports RAID 1, 5, and 6. Different VMs can have different policies and different storage characteristics while using the same datastore.
We went over a few of them above, but let’s list vSAN’s requirements in full.
Although you typically need a minimum of 3 nodes for a vSAN cluster, 4 is better for N+1 and for taking maintenance into account. 2-node clusters also exist for smaller Remote or Branch Office (ROBO) installations.
Starting with vSphere 6.7, VMware introduced support for Trusted Platform Module (TPM) 2.0 and the host attestation model. A TPM is a little device installed in a server that serves as a cryptographic processor and can generate keys. It can also store material such as keys, certificates, and signatures. TPMs are tied to specific hardware (hence the security part), so you can’t buy a used one off eBay to install in your server. The final feature of TPMs is the one we are going to use here: determining whether a system’s integrity is intact. It does this through an act called attestation. Using UEFI and the TPM, it can determine if a server booted with authentic software.
Well, that’s all great, but in vSphere 6.7 this was view-only; there were no penalties or repercussions if the software wasn’t authentic. What’s changed?
Now, introduced in vSphere 7, we have vSphere Trust Authority. This reminds me of Microsoft’s version of this concept, Hyper-V shielded VMs. Essentially, you create a hyper-secure cluster running the Host Guardian Service, and then you have one or more guarded hosts and shielded VMs. vSphere Trust Authority is essentially the same concept.
You create a vSphere Trust Authority cluster, which should ideally be a completely separate management cluster apart from your regular hosts, although to get started it can use an existing management cluster. The hosts in it won’t be running any normal workload VMs, so they can be small machines. Once established, it has two tasks to perform:
If a host fails attestation now, vTA will withhold keys from it, preventing secure VMs from running on that host until it passes attestation. Thanks to Bob Plankers’ blog here for explaining it.
Intel’s Software Guard Extensions, or SGX, were created to meet the needs of the trusted computing industry. How so? SGX is a security extension on some modern CPUs that allows software to create private memory regions called enclaves. The data in an enclave can only be accessed by the intended program and is isolated from everything else. Typically this is used for blockchain and secure remote computing.
vSphere 7 now has a feature called vSGX, or virtual SGX. This feature allows VMs to access Intel’s technology if it’s available, and you can enable it for a VM through the HTML5 web client. For obvious reasons (the enclave memory can’t be accessed), you can’t use this feature with some of vSphere’s other features, such as vMotion, suspend and resume, or snapshots (unless you don’t snapshot the memory).
That ends the first section. Next up, we will go over VMware Products and Solutions, which is a lot lighter than this one was. Seriously, my fingers hurt.