Mike W.

Tales of a Small Business Server restore……

I know that many of you have gone through your own harrowing tales of trying to bring environments back online. I always enjoy hearing about these experiences. Why? Because these are where learning takes place. Problems surface, and solutions have to be found. While my tale doesn’t involve a tremendous amount of learning per se, I feel there are a few things I discovered along the way that may be useful for someone who has to deal with this later. So let’s begin the timeline.


The current server is a Microsoft Small Business Server 2011. This server serves primarily as a DNS/file/Exchange server. It houses about 300-400GB of Exchange data and about 700GB of user data. This machine is normally backed up using a backup product called Replibit. This product uses an onsite appliance to house the data and stage it for replication to the cloud, so theoretically you have a local backup snapshot and a remote-site backup. As backups always somehow have challenges associated with them, this seems like an appropriate amount of caution. The server itself is a Dell and is more than robust enough to handle the small business’ needs. There are other issues I would be remiss not to mention, though, like the fact that the majority of the network is on a 10/100 switch, with the single gigabit uplink being used by the SBS server.

Sometime in the wee hours of the morning on…

This was when the server was laid low. I don’t know what exactly caused it, as I haven’t performed a root cause analysis yet, and it’s unlikely to happen now. For the future, I will be recommending a new direction for the customer, as I believe there are better options out there now (Office 365, standard Windows Server).

I believe there was some sort of patch that may or may not have happened about the time the machine went down. Regardless, the server went down and did not come back up. It would not even boot in Safe Mode; it would just continually reboot as soon as Windows began to load. Alerts went off notifying us of the outage, and the immediate action taken was to promote the latest snapshot to a VM on the backup appliance. This is one of the nice features that Replibit allows. The appliance itself runs on a customized Lubuntu distro, and virtualization duties are handled by KVM. The VM started with no difficulty, and with a few tweaks to Exchange (for some reason it didn’t maintain DNS forwarding options), everything was up and running.

After 20 minutes of unsuccessfully trying to get the Dell server to start in Safe Mode, Last Known Good Configuration, or any other mode I could, I decided my energies would be better spent just working on the restore. Since the users were working fine and happy on the VM, the decision was made to push the restore to Saturday to minimize downtime and disruption.

Saturday 8:00am…….

As much as I hate to get up early on a Saturday and do anything besides drink coffee, I got up and drove to the company’s office. An announcement had been made the day before that everyone should be out of email, the network, etc. Then we proceeded to shut down the VM. Using the recovery USB, I booted into the recovery console and attempted to start a restore of the snapshot that the VM was using to run. I was promptly told “No” by the recovery window. The reason? The iSCSI target could not be created. This being the first time I had used Replibit personally, I discovered how it works: the appliance creates an iSCSI target out of the snapshot, then uses that to stream the data back to the server being recovered. Apparently, when we promoted the snapshot to a live VM, it created a delta disk with the changes from Wednesday to Saturday morning. The VM had helpfully found some bad blocks on the six-month-old 2TB Micron SSD in the backup appliance, which corrupted the snapshot delta disk. This was not what I wanted to see.

With the help of Replibit support, we attempted everything we could to start the iSCSI target. We had no luck. We then tried creating an iSCSI target from the previous snapshot. This worked. It was a problem, however, because we would lose 3.5 days of email and work. Through some black magic and a couple of small animal sacrifices, we mounted the D drive of the corrupted snapshot with the rest of the week’s data (somehow it was able to differentiate the drives inside the snapshot). I was afraid, though, that timestamps would end up screwing us with the DBs on the server. Due to the lack of any other options, we decided to press forward. The revised plan now was to restore the C drive backup from Tuesday night and then try to copy the data from the snapshot’s D drive using WinSCP. We started the restore at about 11am-ish on Saturday. We were restoring only 128GB of data, so we didn’t believe it would take that long. The restore was cooking at first, 200-350MB/min, but as time wore on, the timer kept adding hours to the estimate and the transfer rate kept dropping. Let’s fast forward.

Sunday 9:20pm

Yes… 30+ hours later for 130GB of data, and we were done with just the C drive. At this point, we were sweating bullets. The company was hoping to open as usual Monday morning, and with those sorts of restore times, it wasn’t going to happen. (I would like to send a special shout out to remote access card manufacturers, Dell’s iDRAC in this case, without which I would have been forced to stay onsite during this time, and that wouldn’t have been fun.) Back to the fun. The first thing now was to see if the restore worked and the server would come up. I was going to bring it up in Safe Mode with Networking, as the main Exchange DB was on the D drive and I didn’t want the Exchange server, or any other services that also required files on the D drive, to try to come up without it.

The server started and F8 was pressed. “Safe Mode with Networking” was selected and fingers were crossed. The startup files scrolled all the way down through Classpnp.sys and it paused. The hard drives lit up and pulsed like a Christmas tree. Five minutes later the screen flashed and “Configuring Memory” showed back up on the screen. “Fudge!” This is what happened before the restore, just slower. I rebooted, came back to the selection screen, and this time just chose “Safe Mode”. For whatever reason, the gods were smiling on us and the machine came up. The first window I opened was a command prompt, where I ran SFC /scannow. That finished with no corrupt files found (of course), so I moved on. I then re-created the D drive, as the partition table had been overwritten when the C drive was restored. I had no network access, of course, and needed it to continue with the restoration process, so I rebooted again and chose “..with Networking” again. This time it came up.

Now we moved on to the file copy. The D drive was mounted in the /tmp folder on the Linux backup appliance (just mounted, mind you, not moved there). We connected with WinSCP, chose a couple of folders, and started the copy. Those folders moved fine, so on to some larger ones… Annnnnd an error message. What was the error? The file name was too long. Between the path name and the file name, we had files that exceeded 255 characters. This was basically a 2008 R2 Windows server, so there was no real help for files that exceeded that: while the NTFS file system itself can accept a path plus filename of over 32,000 characters, the Windows shell API can’t. Well, crap. This was not going the way I wanted it to. Begin thought process here. Hmmm, Microsoft says it has a hotfix that can allow me to work around this… That doesn’t help me with the files it pseudo-moved already, though. I can’t move/delete/rename or do any useful thing to those files, whether at the command line or in Explorer. (I did discover later that I can delete files locally with names past 255 characters if I use WinSCP to do so. This does create a lock on the folder, though, so you will need to reboot before you can delete everything.) I can’t run the hotfix in Safe Mode, but I don’t really want to start Windows in normal mode. I don’t have much choice at this point, so I move the rest of the Exchange DB files over to the D drive. This will allow me to start in regular mode without worrying about Exchange. I then go home to let the server finish the copy of about 350GB. A text is sent out informing the company that the server is not done and giving the status of our work.
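Since the snapshot’s D drive was mounted on the Linux appliance, one thing that would have helped is scouting for the over-length paths from the Linux side before copying. A minimal sketch (the 255-character cutoff approximates Windows’ path-length behavior, and the /tmp scan target is a stand-in for wherever the snapshot is mounted):

```shell
# long_paths DIR: print the length and full path of every file under DIR
# whose full path exceeds 255 characters (roughly the Windows limit).
long_paths() {
  find "$1" -type f 2>/dev/null | awk 'length($0) > 255 { print length($0), $0 }'
}

# Example: scan the mounted snapshot (the mount point here is a stand-in)
long_paths /tmp
```

Running this before the first WinSCP attempt would have flagged exactly which user folders were going to blow up the copy.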

Monday morning 8am

The server is rebooted and it comes up in regular mode (BIG SIGH OF RELIEF). The hotfix files are retrieved and I try to run them. Every one, even though 2008 R2 is specifically called out, informs me that it will not work on my operating system. Well, this is turning back into a curse-inducing moment… again. Through a friend, I learn of a possible registry entry that might let us work with long file names; this doesn’t work either. Frantically culling through websites in my search for a solution, I find there are two programs that do not use the Windows API and so are not hampered by that pesky MAX_PATH limit. (I also found the SUBST command, which can map a drive letter to a deep folder to shorten the effective path. This was not feasible, though, as one user had over 50,000 files that would have needed to be handled that way.) Those programs are RoboCopy and FastCopy. FastCopy looks a little dated, I know, but as I found out, it worked really well. On to the next hurdle! These tools require a Windows SMB share to work, so we needed to mount a Samba share on the backup appliance referencing the mounted snapshot so we could get to it. This worked, and a copy was set up to test. 5 minutes in… 10 minutes in… Seems like it’s working. FastCopy averaged a little better than 1GB/min transfer speeds as well. I set it up for multiple folders and decided to leave it in peace and go to bed (it was 12am at this point).
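For anyone in the same boat, the Robocopy flavor of that copy might look like the sketch below. The share name and drive letters are placeholders, not the actual paths from that night; Robocopy ships with Server 2008 R2 and sidesteps MAX_PATH on its own.

```bat
REM Pull the snapshot's D drive contents from the Samba share on the appliance.
REM \\appliance\snapshot-d and D:\ are placeholders for your environment.
REM /E copies subfolders (including empty ones), /COPY:DAT copies data/attributes/timestamps,
REM /R:1 /W:1 keeps retries short, /MT:8 runs multithreaded, /LOG records progress.
robocopy \\appliance\snapshot-d D:\ /E /COPY:DAT /R:1 /W:1 /MT:8 /LOG:C:\restore-copy.log
```

If you want the NTFS permissions to come along too, /COPYALL is the switch to reach for, provided the account has the rights to read security information from the share.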

Tuesday morning

All files are moved over at this point. Some of them didn’t pull NTFS permissions with them for some odd reason, but no big deal; I’ll just re-create those manually. Exchange needs to be started. Eseutil to the rescue! The DBs were shut down in a dirty state. The logs are also located on the C drive. We are able to find the missing logs, merge everything back together, and get the DBs mounted. At this point, there are just a few “mop-up” things to do. There was one user who lost about 4 days of email, since she was on a lone DB by herself and it was hosted on the C drive. She wasn’t happy, but there’s not much we could do about a hardware corruption issue, unfortunately.
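For anyone hitting the same dirty-shutdown state, the general Eseutil sequence is sketched below. The database and log paths are placeholders, and E00 is just the common default log prefix; a soft recovery (/r) only succeeds if the required log files are all present and intact.

```bat
REM Check the database header; "State: Dirty Shutdown" means logs must be replayed.
eseutil /mh "D:\ExchData\Mailbox Database.edb"

REM Soft recovery: replay the transaction logs (prefix E00) against the database.
eseutil /r E00 /l "C:\ExchLogs" /d "D:\ExchData"

REM Re-check the header; it should now report "State: Clean Shutdown".
eseutil /mh "D:\ExchData\Mailbox Database.edb"
```

If required logs are truly missing, a hard repair (eseutil /p) is the last resort, since it discards any data it can’t recover.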

Lessons learned from this are as follows (this list is not all-inclusive). You should test the backup solution you are using before you need it. Some things are unfortunately beyond your control, though; corruption on the backup device’s hardware is one of those things that just seems like bad luck. You should always have a Restore Plan B, C, … however. To go along with this, realistic RPOs and RTOs should be shared with the customer to keep everyone calm. Invest in good whiskey. And MAX_PATH limits suck but can be gotten around with the programs mentioned above. Happy IT’ing to everyone!

VCIX-NV Objective 1.3 Configure and Manage Transport Zones

Covering Objective 1.3 now, we will go over the following topics:

  • Create Transport Zones according to a deployment plan
  • Configure the control plane mode for a Transport Zone
  • Add clusters to Transport Zones
  • Remove clusters from Transport Zones

So, beginning with the first point: Create Transport Zones according to a deployment plan. What is a transport zone? Simply put, a transport zone is a virtual fence around the clusters that can talk to each other over NSX. If you want a cluster to be able to talk to other clusters on NSX, they must be included in the same transport zone. It is important to note that all VMs in a cluster that is part of a transport zone will have access to that transport zone. Another thing to be careful of: while a transport zone can span multiple VDSs, you should be sure that all the clusters on a given VDS are included in the transport zone. Otherwise, with improper alignment, you may run into situations where machines can’t talk to each other.

As shown in the example above, even though the DVS Compute_DVS spans two clusters, since you add to a transport zone by cluster, it is possible to have only half of the clusters that make up that DVS in the transport zone. This leaves the hosts in Cluster A unable to talk to anyone on the NSX networks.

On to the next point: Configure the control plane mode for a Transport Zone. You can choose between three different control plane modes:

  • Multicast
  • Unicast
  • Hybrid

These modes control how BUM (Broadcast, Unknown unicast, Multicast) traffic is distributed, and more.

Multicast replication mode depends on the underlying architecture being a full multicast implementation. The VTEPs on each host join a multicast group, so when BUM traffic is sent, they will receive it. The advantage of this is that BUM traffic is only distributed to hosts that participate, possibly cutting the traffic down. The downside is that IGMP, PIM, and Layer 3 multicast routing are required at the hardware layer, adding complexity to the original design.

Unicast replication mode is everything multicast is not. More specifically, when a BUM packet is sent out, it is sent individually to every other host on the local VXLAN segment. For each remote VXLAN segment, a host is picked and designated the Unicast Tunnel End Point, or UTEP; the frame is forwarded to the UTEP, which then forwards it to all other hosts on its segment. The advantage of this is not caring about the underlying hardware at all, which is great from a decoupling-from-hardware standpoint; the downside is that it uses a lot more bandwidth.

Hybrid replication mode is exactly that: a hybrid. It is a good mix of the above. Instead of needing everything multicast mode requires, only IGMP is used. Unicast is used between VXLAN segments to avoid the need for PIM and Layer 3 multicast routing, but internally on each VXLAN segment, IGMP is used, which cuts down on the bandwidth quite a bit. With Hybrid mode, the host performing the UTEP role between segments is instead called an MTEP, or Multicast Tunnel Endpoint.

Unicast is most commonly used in smaller networks, and Hybrid in larger ones.

As far as adding and removing clusters from Transport Zones goes, you can add clusters at two different times: when you initially create the transport zone, or afterwards. If you do it afterwards, you will need to be in the Installation submenu on the navigation menu on the left side of the screen. Click on the Transport Zones tab, then click on the transport zone you wish to expand. Then click on the Add Cluster icon, which looks like three little computers with a + symbol on the left side, and select the clusters you wish to add. To remove a cluster, you need to be in the same place, but click on the Remove Clusters icon instead.
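The same add operation can also be done against the NSX-V REST API. A hedged sketch is below: the manager hostname, the vdnscope-1 transport zone ID, and the domain-c123 cluster MoRef are all placeholders, and the exact endpoint and body should be verified against the NSX for vSphere API guide for your version.

```shell
# Sketch: add (expand) a cluster into an existing transport zone via the NSX-V API.
# All identifiers below are placeholders for your environment.
curl -k -u 'admin:password' \
  -X POST \
  -H 'Content-Type: application/xml' \
  -d '<vdnScope><objectId>vdnscope-1</objectId><clusters><cluster><cluster><objectId>domain-c123</objectId></cluster></cluster></clusters></vdnScope>' \
  'https://nsxmgr.example.local/api/2.0/vdn/scopes/vdnscope-1?action=expand'
```

The matching shrink operation (removing a cluster) uses the same endpoint with ?action=shrink.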

That’s the end of section 1. Next up. Section 2. Create and Manage VMware NSX Virtual Networks.

Objective 1.2 – Prepare Host Clusters for Network Virtualization

As mentioned above, the next objective is preparing your environment for network virtualization. We will cover the following topics specified in the blueprint:

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a cluster for NSX
    • Add / Remove Hosts from cluster
  • Configure appropriate teaming parameters for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

Kicking off with preparing distributed switching for NSX… First, we need to cover a little about distributed switches. A lot of people, myself included, just use standard switches due to their simplicity. Like an unmanaged hardware switch, there isn’t much that can go wrong: it either works or it doesn’t. There are a number of things you are missing out on, however, by not using distributed switches.

Distributed Switches can:

  • Shape inbound traffic
  • Be managed through a central location (vCenter)
  • Support PVLANs (yeah, I don’t know anybody using these)
  • Support NetFlow
  • Support port mirroring
  • Support LLDP (Link Layer Discovery Protocol)
  • Support SR-IOV and 40Gb NICs
  • Support additional types of link aggregation
  • Provide port security
  • Provide NIOC (v6.x)
  • Provide multicast snooping (v6.x)
  • Provide multiple TCP/IP stacks for vMotion (v6.x)

These are the main improvements. You can see a better detailed list here – https://kb.vmware.com/s/article/1010555

The main takeaway though, if you didn’t already know, is that NSX won’t be able to do its job without distributed switches.

To prepare for NSX, you will need to make sure that all the distributed switches are created and hosts are joined to them. Setups will differ depending on the environment, and you can join hosts to multiple distributed switches if need be. Most sample setups will have you separate your compute and management hosts and keep them on separate switches. There are advantages to doing it this way, but it can add complexity; just make sure that if you do it this way, you know the reasons why and it makes sense for you. The other main thing to realize is that a minimum MTU of 1600 bytes is required. This is due to the additional overhead that VXLAN encapsulation creates.
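The arithmetic behind that 1600-byte figure is worth a quick sketch. VXLAN wraps each original frame in an outer Ethernet header (14 bytes), an outer IPv4 header (20), an outer UDP header (8), and the VXLAN header itself (8), i.e. roughly 50 bytes of overhead on top of a standard 1500-byte payload:

```shell
# VXLAN encapsulation overhead on top of a standard 1500-byte inner frame
outer_eth=14   # outer Ethernet header
outer_ip=20    # outer IPv4 header (no options)
outer_udp=8    # outer UDP header
vxlan_hdr=8    # VXLAN header
overhead=$((outer_eth + outer_ip + outer_udp + vxlan_hdr))
needed=$((1500 + overhead))
echo "overhead=${overhead} bytes, transport MTU needed=${needed}"
```

That lands at 1550, so the 1600-byte recommendation simply leaves a little headroom (an outer VLAN tag, for instance, eats into it).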

For the purposes of the test, I am going to assume they will want you to know about the MTU, and how to add and remove hosts/VMkernel ports/VMs from a distributed switch. This IS something you should probably already know if you have gone through VCP-level studies. If you don’t, feel free to reach out to me and we’ll talk, or reference one of the VMware books, Hands-on Labs, or other materials that can assist.

Next objective is preparing the cluster/s for NSX.

What are we doing when we prepare the cluster? The VMware installation bundles (VIBs) are loaded onto the hosts and installed. The number of VIBs installed depends on the version of NSX and ESXi installed. If you do need to look for them, they will be named as follows, in one of these groups:

esx-vxlan, esx-dvfilter-switch-security, esx-vsip
esx-vxlan, esx-vsip

The control and management planes are also built.

When we click on the Host Preparation tab in Installation, we are presented with clusters. Select the desired cluster, then click on Actions and Install. This will kick off the installation. Note: if you are using stateless mode (non-persistent state across reboots), you will need to manually add the VIBs to the image.

A few other housekeeping things: I’d imagine you already have things like DNS sorted, but if you didn’t before, make sure the little stuff is taken care of. If you don’t, weird issues can pop up at the worst time.

To check which VIBs are installed on your ESXi hosts, SSH into them and type the following:

esxcli software vib list | grep esx

This will, regardless of version, give you all the installed VIBs with “esx” in the name.

In order to add a new host to an already prepared cluster, do the following:

  1. Add the server as a regular host
  2. Add the host to the distributed switch that the other hosts are part of and that is used for NSX
  3. Place the host into maintenance mode
  4. Add the host to the cluster
  5. Remove the host from maintenance mode

When the host is added to the cluster, the necessary NSX VIBs will automatically be installed on it. DRS will also balance machines over to the new host.

To remove a host from a prepared cluster:

  1. Place the host in maintenance mode
  2. Remove host from the cluster
  3. Make sure the VIBs are removed, and then place the host wherever you want it.

Next is configuring the appropriate teaming policy for a given implementation. I am going to lift some information from a Livefire class I just went through for this. First, when NSX is deployed to the cluster, a VXLAN port group is created automatically. The teaming option on this should be the same across all ESXi hosts and across all clusters using that VDS. You can see the port group created for the VTEPs in my environment.

You choose the teaming option when you configure the VXLAN in the Host Preparation tab. The Teaming mode determines the number of VTEPs you can use.

  • Route based on originating port = Multi VTEP = Uplinks both active
  • Route based on MAC hash = Multi VTEP = Uplinks both active
  • LACP = Single VTEP = Flow Based
  • Route Based on IP Hash = Single VTEP = Flow based
  • Explicit failover = Single VTEP = One Active

It is recommended you use Route based on originating port. The reasoning behind this is to avoid a single point of failure: with a single VTEP, losing that uplink would essentially cripple the host and the VMs residing on it until failover occurred or it was brought back online.

Configure VXLAN Transport parameters according to deployment plan is last in this objective. This most likely covers configuring VXLAN on the Host Preparation page and then configuring a Segment ID range on the Logical Network tab.

Preparing VXLAN on the Host Preparation tab involves setting the VDS you are going to use, a VLAN ID (even if default), an MTU size, and a NIC teaming policy. One interesting thing: if your VDS is set to a lower MTU size, changing the MTU here will also change the VDS to match the VXLAN MTU. The number of VTEPs is not editable in the UI here. You can have the VTEPs assigned IPs from an IP Pool that can be set up during this step. You can go back later to change the parameters of the IP Pool, or even add IP Pools, by going to the NSX Manager, managing it, and then going to Grouping Objects.

When everything is configured it will look similar to this:

Clicking the Next button takes you to the Segment ID page. You can create one Segment ID range here; if you need more than one, you will need to do it via the API. Remember, Segment IDs essentially determine the number of Logical Switches you can create. While you can technically create more than 16 million, you are limited to 10,000 dvPortGroups in vCenter, and a much smaller subset is usually used. Here is mine. Since it’s a home lab, I’m not likely to butt up against that 10k limit any time soon.
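Where the 16 million comes from: the VXLAN Network Identifier (VNI) in the VXLAN header is 24 bits wide, and NSX starts usable segment IDs at 5000. A quick check of the math:

```shell
# The VNI field in the VXLAN header is 24 bits wide
total=$((1 << 24))        # 16777216 possible IDs
usable=$((total - 5000))  # NSX segment IDs start at 5000
echo "total=${total} usable=${usable}"
```

So the theoretical ceiling is 16,777,216 IDs, of which roughly 16.77 million are usable as segment IDs, dwarfing the 10,000 dvPortGroup limit in vCenter.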

And that’s the end of 1.2 Objective. Next up is the exciting world of Transport Zones in 1.3.

VCIX-NV Objective 1.1

So, I started this journey a while ago, let things get in the way, and here we are, trying to get back on track once again. This cert has eluded me longer than it should have.

I am going to try to do a little bit of mixed media in this blog series, just to mix it up, but also to see if it helps me a little more. Hopefully these posts will help other people, but most of all myself. Starting at the beginning, this is for Objective 1.1, which covers the following:

  • Deploy the NSX Manager virtual appliance
  • Integrate the NSX Manager with vCenter Server
    • Configure Single Sign On
    • Specify a Syslog Server
  • Implement and configure NSX Controllers
  • Exclude virtual machines from firewall protection

Starting with the first piece, deploying the NSX Manager OVA. The first thing you will need to check is the availability of resources for the manager. The manager requires 4 vCPUs, 16GB of RAM, and 60GB of disk space. This holds true all the way up to environments with 256 hosts. When the environment has 256 or more hosts or hypervisors, it is recommended to increase the vCPUs to 8 and the RAM to 24GB.

The rest of the installation of the OVA is run of the mill, the same as every other OVA deployment. Once done with that, you will need to connect the NSX Manager to a vCenter. The NSX Manager has a 1:1 relationship with vCenter, so you will only need to do this once, most of the time.

You will need to log on using admin and the password you set during setup. Once the site opens, click on the Manage vCenter Registration button to continue the installation.

Once the Registration page pulls up, you will need to enter your vCenter information to properly register it.

As you can see, I’ve already connected mine to my vCenter. Once this is done, it should inject the Networking and Security plugin so that you will be able to manage NSX. You will want to make sure both show a Connected status. You can log into the vSphere Web Client and go to Administration and then Client Plug-ins to see it there.

The next step is to set up a syslog server. This is easy, since it is right in the UI. If you are still logged in from the vCenter registration, click on Manage Appliance Settings and then General on the left side, and you will see the below:

I have set mine up to point to the Log Insight server in my environment. 514 is the standard port, and it can be over UDP or TCP, on IPv4 or IPv6. Once that is taken care of, the next piece is installing the controllers. This is done in the web client. Once in the web client, click on Networking and Security under Home. When Networking and Security opens, click on Installation on the left side.

In the center pane, at the top you will see NSX Managers, and under that, NSX Controller nodes. I have already installed two in my environment. To add another, you will need to click on the green + icon.

When you click on the green + the following will popup.

You will need to fill out all the information that has asterisks in front of it. Once you click OK, it will start to deploy; it will take a few minutes to finish. You will want to make sure you have enough resources before you start: each controller wants 4 vCPUs, 4GB of RAM, and 28GB of disk space. One cool thing to notice is that once the controllers are done deploying, they each have a little box on the side letting you know the other ones are online. Just one of the things I think is really cool about NSX: how easy they make it to keep tabs on things.

The last part we need to address now is excluding virtual machines from the firewall on each host. To do this you will need to click on the NSX Manager in the navigation pane, all the way at the bottom.

Once you click on that you will then need to click on the NSX manager instance.

Then in the middle, click on Manage. Then click on Exclusion List.

To add a virtual machine to the list, click on the green + icon, then click on the virtual machine and move it from the left pane to the right. I would show that… but I have no virtual machines in my environment yet. And that is the end of the first objective. Stay tuned for the next.

Creating vROps 6.5 Email Alerts and SMTP instances

Here is another small nuggetized video. In this video I will show how to create email alerts and an SMTP instance in vROps 6.5. Let me know if you find it useful!

vRealize 6.5 Creating Views and Reports

I’ve been working on some monitoring and logging stuff at work, so I decided to share a little bit more. Here is one of the videos I felt might help a few people. This is just a small portion of what can actually be done with vROps 6.5 (and of course 6.6), but with the basics, the sky is the limit.

Log Insight UI Walkthrough 4.3

For those who would rather watch a video, here it is; otherwise, scroll past:


Welcome to the walkthrough of the Log Insight UI

So, Log Insight is installed – what’s next? How do you use it? First, you’ll need to have an understanding of the UI and where everything is, in order to better utilize its capabilities.

After logon, Log Insight will present you with the last screen you had open, or, if this is a new installation, you should be redirected to the Dashboards page. Let’s start there.

At the top, you have the program name itself. It is clickable and acts as a refresh button; if you look at the HTML code for it, it just points back to the installation itself.

Dashboards Overview

Next you have the Dashboards button, which takes you to your dashboards. The Dashboards page is a collection of widgets. Which widgets are displayed is entirely dependent on the content packs installed. Log Insight should be connected to vSphere at this point, so at a minimum there will be the General and VMware - vSphere dashboards. I have a few more installed, since I have a Dell server with an iDRAC (their remote access card) installed, and I have a Synology in my environment.

If I click on the General item, it has a few dashboards underneath it.

I will click on the Overview item. In the widget pane in the center, you see a number of little squares. These are your widgets. They can be displayed a number of different ways: numerically, graphically, or as text if the widget is a query.

If we hover over a widget we see a small menu on the top right.

There are three items. From left to right: the first opens Interactive Analytics and shows you the widget’s data in the actual logs; the second shows information about what that widget is displaying; the third clones the widget to another dashboard so you can create a personal dashboard of widgets.

Up at the top of the widget pane there are filtering options available. These will apply to all the widgets underneath. A number of common filters are already provided but if those won’t work, you can add new ones. You can also restrict the time to a specific period for the widgets, which is handy when in a large environment with tons of logs.

Interactive Analytics Overview

Next at the top, we have the Interactive Analytics button. This page allows you to perform searches on the ingested logs. You can use expressions and additional criteria to filter the data.

There is a lot going on with this page. Starting at the top, there is a large bar chart. By default, this bar chart displays the count of all events seen over the last 5 minutes. Every log entry is seen as an “event” by Log Insight. Looking at the bar chart allows you to see the flow of logs as they are seen by Log Insight. This can be manipulated into showing other data, however; the line right below the graph allows you to change what you are looking at and how.

You can also change how it displays it since bar charts may not always work best for the data you are trying to display. You can choose between columns, lines, area, bar, pie, bubble, gauge, table, and scalar charts and setup the axis to best suit you.

Some options may be greyed out; this is because the type of data currently being displayed can’t support that particular chart. Underneath, the actual log entries are displayed.

At the top is a search bar where you can type in terms or expressions. You can then refine those even further by adding filters using the ‘+ Add Filter’ button. When you create these filters, Log Insight will help you out by autocompleting names or other data found in the logs. Once you have created a query that gives you important data, you can save it using the star button to create a favorite. This is part of the four-button toolbar displayed at the end of the search bar.

You can use the dashboard icon (second icon) to send that query to either a personal dashboard or a shared dashboard. The alarm button (third button) allows you to create an alert from the current query or manage alerts in general. The final button allows you to share the query or export the results.

The log data itself can be shown a number of ways as well.

There are Events, which show every line item as a separate event. There is Field Table, which parses all the events out into a table with headers. There are Event Types, which group like events together with a number at the beginning of the line showing how many instances of that event exist. The last item is Event Trends. This shows a comparison of events and whether each is trending and becoming more frequent, staying static, or decreasing in frequency. It shows this by color coding at the front of the line: green shows an increasing trend, red a decreasing one.

Also of note is that you can color code the events to group like items together. At the beginning of each event line you will see a little gear icon; click it to pop up a menu with more options. You can track down more events like the one you are highlighting, exclude them, or colorize event types.


The Fields pane on the right will show you a graph indicating how prevalent an item is relative to other like objects and to the overview chart.


Going back up to the top, you have two buttons left. One is “Admin,” which allows you to see your role and email and change your password. The second icon, which looks like three lines, is your administration and settings icon. This will allow you to change the settings and configuration of Log Insight and add Content Packs for products.

There is a lot more to Log Insight than I can fully explain here, and I highly recommend learning more about this powerful product from VMware’s Log Insight documentation page here: https://www.vmware.com/support/pubs/log-insight-pubs.html

A small teaser….

Experimenting a bit with how I do some of my blogging. I am still going to use this site (obviously) but thought it would be nice if I could include a bit more video and demos. As a teaser to that effect, I am putting up a few videos of demos I did for Log Insight. If you hate it or love it, then feel free to let me know. If you are ambivalent then I guess I won’t hear anything. 🙂

Creating a 2 Tier App for testing

It has been a remarkably long time since my last post, and I apologize for that. Things got in the way…Such as my own laziness, job, laziness. You get the idea.

This blog post was conceived because of the lack of posts out there on this topic. Granted, I may just be dense, but it took me a while and some help to get this working, and I used a previous blog post from another author for it. This post here http://blog.bertello.org/2015/07/building-a-basic-3-tier-application-for-your-home-lab/ was used as a template, but there were a number of problems and omissions that caused me issues. So I took it upon myself to correct those small things and repost it. In full disclosure, I did try to reach out to the blog author, but have not heard back from him yet.

To start out, a bit about my environment. I created a couple of VMs using my home lab setup of vSphere 6.5. I don’t have anything fancy in it right now, especially since NSX doesn’t run on 6.5 currently. I started the VMs out with what vSphere automatically provisioned: 1 vCPU, 2GB of RAM, and a 16GB HD. This can be reduced, of course, since I am just using the CentOS 6.8 minimal install CD and don’t believe there will be a lot of traffic for them to handle. I ran through the graphical setup and set a hostname and IP address on each of the machines. The goal, of course, is to have these machines eventually sit on separate network tiers to test out all the features available to us in NSX, such as micro-segmentation (once NSX is supported on vSphere 6.5).

I am using CentOS 6.8 (which is the latest release on 6.x as of this writing) and the main reason why is that I am more familiar with 6.x than 7. Also Linux is free and easy to deploy and doesn’t take much in the way of resources, providing a perfect OS to use. The first thing we need to do is disable the firewall. This IS a lab environment so I am not too worried about hackers etc., and I will be adding NSX firewalls on them later. To accomplish this, type the following:

service iptables save

service iptables stop

chkconfig iptables off

You will do this on both machines. We will concentrate on the database server first. This is only going to be a 2-tier app: we will have a database server and a Web/PHP/WordPress server. You can add more however you want, but this is a good start. Perhaps for a third tier you could add a proxy like the blog post above does. Personally, I was just going to put the client machine on the third tier to access the first two. But it is all up to you – it’s your world, and if you want a happy little tree in there, you put one in there. 🙂

Database Server Config

We are going to use MySQL like the original blog.

yum install -y mysql mysql-server mysql-devel

The above line will install all the needed pieces of MySQL. We now need to start the service, set it to run at startup, and go through the small interactive setup (mysql_secure_installation): creating an admin password, deciding whether we want the default test database in addition to the one we create, and whether we want to allow anonymous users and remote root login.

service mysqld start

chkconfig mysqld on


Another thing I should note is that it is much easier to copy and paste my commands than to type them; to do this I would recommend using PuTTY. We are now going to create our database and set permissions for it.

mysql -u root -p

SELECT User, Host, Password FROM mysql.user;


CREATE DATABASE wordpress;

CREATE USER wp_svc@localhost;

CREATE USER wp_svc@'%';

SET PASSWORD FOR wp_svc@localhost=PASSWORD('Password123');

GRANT ALL PRIVILEGES ON wordpress.* TO wp_svc@localhost IDENTIFIED BY 'Password123';

GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_svc'@'%' IDENTIFIED BY 'Password123';

FLUSH PRIVILEGES;



You can change the above to whatever parameters you wish; just write them down, as you will need them later. I also bound MySQL to the server’s IP address; you can do that in /etc/my.cnf if you wish. The code is below.
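The lines I added to /etc/my.cnf look something like this – a minimal sketch, where 192.168.1.10 is a stand-in for your database server’s address (bind-address belongs under the [mysqld] section):

```
[mysqld]
bind-address=192.168.1.10
```

Restart MySQL afterward with service mysqld restart so the setting takes effect.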


Obviously, you would change the IP address to the one you are using. And that’s it for the DB.

Webserver Config

The first thing we need to do on this machine is disable the firewall again. We also need to deal with SELinux since, if we don’t, our packets will never leave this machine (something I struggled with until I got the help of my good friend Roger H. to figure out – shameless plug for his blog here http://www.rhes-tech.com/ – I highly recommend you check him out, as he is a brain when it comes to Linux things). So here is the code we need:

service iptables save

service iptables stop

chkconfig iptables off

To keep SELinux from making our lives horrible, we are going to set it to Permissive mode rather than fully disabling it; in Permissive mode it just logs warnings instead of blocking traffic. Using your favorite text editor, edit the /etc/sysconfig/selinux file and change the SELINUX= line to read SELINUX=permissive. The file will look like this:

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

# enforcing – SELinux security policy is enforced.

# permissive – SELinux prints warnings instead of enforcing.

# disabled – SELinux is fully disabled.

SELINUX=permissive


# SELINUXTYPE= type of policy in use. Possible values are:

# targeted – Only targeted network daemons are protected.

# strict – Full SELinux protection.

SELINUXTYPE=targeted


# SETLOCALDEFS= Check local definition changes

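If you prefer a one-liner over hand editing, sed can flip the value. The sketch below works on a throwaway copy so nothing on your system changes; when it looks right, run the same sed against /etc/sysconfig/selinux itself (as root):

```shell
# Create a stand-in copy of the file to practice on
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.test

# Switch the mode to permissive; the SELINUXTYPE line is left untouched
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /tmp/selinux.test

grep '^SELINUX=' /tmp/selinux.test   # → SELINUX=permissive
```

A change to this file normally takes effect at the next reboot; running setenforce 0 switches the live system to Permissive immediately.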

Next we are going to install a ton of stuff.

yum install -y httpd

service httpd start

chkconfig --level 235 httpd on

The above installs the Apache web server, starts it, and sets it to start automatically at boot. Next we need to install PHP, as this is what WordPress requires to run. We will also install the supporting modules.

yum install -y php php-mysql

yum -y install php-gd php-imap php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-snmp php-soap php-tidy curl curl-devel wget

Next we will download the latest version of WordPress (as of this scribbling, 4.7), extract it, and copy it over to the web server’s www home directory. Then we will need to add the config to point back to the DB server.

wget http://wordpress.org/latest.tar.gz

tar -xzvf latest.tar.gz

cp -r wordpress/* /var/www/html

cd /var/www/html

cp wp-config-sample.php wp-config.php

Again, using your favorite text editor, open the wp-config.php file and change it as shown below. If you chose different values for your database name or username/password, you will need to use that info now.

// ** MySQL settings – You can get this info from your web host ** //

/** The name of the database for WordPress */

define('DB_NAME', 'wordpress');

/** MySQL database username */

define('DB_USER', 'wp_svc');

/** MySQL database password */

define('DB_PASSWORD', 'Password123');

/** MySQL hostname */

define('DB_HOST', '');

/** Database Charset to use in creating database tables. */

define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */

define('DB_COLLATE', '');
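If you would rather script those changes than open an editor, sed can fill in the sample file’s placeholders. A sketch, demonstrated on a stand-in copy – the 192.168.1.10 address is a made-up example for the database server; on the real box you would run the same seds against /var/www/html/wp-config.php:

```shell
# Build a stand-in copy using the placeholders from wp-config-sample.php
printf "define('DB_NAME', 'database_name_here');\ndefine('DB_USER', 'username_here');\ndefine('DB_PASSWORD', 'password_here');\ndefine('DB_HOST', 'localhost');\n" > /tmp/wp-config.php

# Swap each placeholder for the values created on the database server
sed -i -e "s/database_name_here/wordpress/" \
       -e "s/username_here/wp_svc/" \
       -e "s/password_here/Password123/" \
       -e "s/'localhost'/'192.168.1.10'/" /tmp/wp-config.php

cat /tmp/wp-config.php
```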

Once this is done, you can browse to your website to finish the WordPress install, using either the FQDN or the IP address.


When done, your site will be up and ready – CONGRATS!

Objective 8.1: Deploy ESXi Hosts Using Auto Deploy

So I have skipped a few sections dealing with troubleshooting. I may circle back around to them if I can, but I am shooting to get all the other stuff done first. Besides, I am sure if you have gotten this far, you have had at least a couple of problems and have at least a rudimentary knowledge of troubleshooting (REBOOT ALL THE THINGS!! 🙂).

Looking to try to keep the momentum going we are going to discuss the following topics:

  • Identify ESXi Auto Deploy requirements
  • Configure Auto Deploy
  • Explain PowerCLI cmdlets for Auto Deploy
  • Deploy/Manage multiple ESXi hosts using Auto Deploy

Identify ESXi Auto Deploy requirements

Generally, as sysadmins, we don’t like to do the same thing over and over again. I am not saying we are lazy (some of us are 🙂), but let’s face it, there are much cooler things to learn and do than load a system and put a config on it a couple of hundred times. So for those people, VMware has offered up Auto Deploy and Host Profiles. We will go over Host Profiles in the next objective point of 8, so don’t worry.

We will start off by explaining just what Auto Deploy does. Auto Deploy is a vSphere component that uses a PXE boot infrastructure in conjunction with vSphere host profiles to provision and customize anywhere from one to hundreds of hosts. No state is stored on the ESXi box itself; it is held by Auto Deploy. You don’t necessarily HAVE to use Host Profiles, but they make your job a lot easier once set up. You can even deploy different images, such as different versions and driver loads, to different servers based on criteria you specify.

We will now list the requirements, as it is always good to begin with those:

  1. The Hosts you are going to provision need to be setup in BIOS mode (not UEFI)
  2. If you are going to use VLANs make sure that is all setup prior and there is connectivity
  3. Verify you have enough storage for the Auto Deploy repository (Best Practice is to allow about 2GB for 4 images)
  4. DHCP server
  5. TFTP server
  6. Admin privileges to the DHCP server
  7. While not a requirement, it would be a good idea to setup a remote syslog server and ESXi Dump Collector in case things go wrong.
  8. PXE does not support IPv6, so make sure you have IPv4

How do we configure it?

Configure Auto Deploy

  1. Install the vCenter Windows app or VCSA
  2. You will need to start up the Auto Deploy service
    1. You will need to log into your Web Client
    2. Click on Administration
    3. Under System Configuration, click Services
    4. Select Auto Deploy and select Edit Startup Type
    5. On Windows the service is disabled – select Manual or Automatic to enable it
    6. On the VCSA the service is set to Manual by default, you can select Automatic to have it start unassisted
  3. Configure the TFTP server (there are many different ones to choose from)
    1. In the Web Client go to the Inventory List and select your vCenter server System
    2. Click the Manage tab, select Settings, and click Auto Deploy
    3. Click Download TFTP Boot zip to download the configuration file and unzip the file to the directory in which your TFTP server stores files.
  4. Setup your DHCP server to point to the TFTP server where you just downloaded the config files.
    1. Specify the TFTP server’s IP address in the DHCP server using Option 66 (also called next-server)
    2. Specify the boot file name, undionly.kpxe.vmw-hardwired, in DHCP option 67 (also called boot-filename)
  5. Set each host you want to provision with Auto Deploy to network boot or PXE boot.
  6. Locate the image profile you want to use and the depot in which it is located
  7. Write a rule that assigns an image profile to hosts
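Steps 4 and 5 depend on which DHCP/TFTP software you run. In a home lab, dnsmasq is a handy choice (an assumption on my part; yours may be a Windows DHCP server), since one daemon can hand out both DHCP options and serve TFTP. A sketch, with all addresses and paths made up for an example lab network:

```
# /etc/dnsmasq.conf fragment (example addresses)
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-option=66,"192.168.1.5"           # option 66: the TFTP server address
dhcp-boot=undionly.kpxe.vmw-hardwired  # option 67: the boot file name
enable-tftp
tftp-root=/var/lib/tftpboot            # where the TFTP Boot zip was unpacked
```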

Next, you are going to need to install PowerCLI to be able to create the rules that assign the image profile and, optionally, a host profile.

Explain PowerCLI cmdlets for Auto Deploy

Help is always just a command away by typing Get-Help <cmdlet>. Also remember that PowerShell isn’t case sensitive and you can tab to complete. You can also clean up output using Format-List or Format-Table. Now… the commands:

Connect-VIServer 192.x.x.x – This command will, as you might have guessed, connect you to the vCenter that you plan to use for Auto Deploy. The IP address will need to be changed to yours. This command might return a certificate error. This is normal in development environments.

Add-EsxSoftwareDepot <c:\location.zip> – This will add the software depot (and the image profiles it contains) to the PowerCLI session you are in so that you can work with them.

Get-EsxImageProfile – This will list out the image profiles that are included in the zip you are using. Usually there are a couple in there, one that includes VMware Tools and one that does not.

New-DeployRule -Name "testrule" -Item "My Profile25" -Pattern "vendor=Acme,Dell", "ipv4=" – This is a little meatier. This creates a rule named "testrule" that will use the image profile "My Profile25" and will only be applied to a system from either Acme or Dell that is using an IP address in the specified range. Double quotes are required if there are spaces; otherwise they are optional. You can specify -AllHosts instead of a pattern to just carpet bomb your machines. If you have a host profile to add, you can do so with another -Item <name of profile> and it will apply that profile to those hosts.

Add-DeployRule testrule – This adds the rule to the active rule set.

Those are all the commands you have to have. But there are some more that you might find useful with Auto Deploy. They include:

Get-DeployRule – This will get all current rules

Copy-DeployRule -DeployRule <name of rule> -ReplaceItem MyNewProfile – This will copy a rule and change the profile. You cannot edit a rule after it is added; you have to copy and replace.
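Putting the cmdlets above together, an end-to-end session looks roughly like this. Treat it as a sketch: the vCenter address, depot path, profile name, and IP range are all placeholders of mine, and it needs a live vCenter with the Auto Deploy service running:

```powershell
# Connect to the vCenter that hosts the Auto Deploy service
Connect-VIServer 192.168.1.50

# Make an offline bundle's image profiles available to this session
Add-EsxSoftwareDepot C:\depot\ESXi-offline-bundle.zip

# List the image profiles the depot contains
Get-EsxImageProfile

# Match Dell hosts in a management IP range to one of those profiles
New-DeployRule -Name "testrule" -Item "My Profile25" -Pattern "vendor=Dell", "ipv4=192.168.1.60-192.168.1.80"

# Activate the rule
Add-DeployRule testrule
```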

Deploy/Manage multiple ESXi hosts using Auto Deploy

The beauty of the above is that you can use it for multiple ESXi hosts – that is what it was really meant for. You also have the ability to load balance the TFTP servers to help distribute the load.

And that’s all I will write on this objective. Next stop, 8.2