
Intro to MongoDB (part 2)

When I left off in the last part, we were discussing MongoDB’s availability features. We will pick up next with:

Scalability – We’ve already covered replica sets under availability. For those who come from more of an infrastructure/hardware background, this is very close to something like RAID 1: you are creating multiple copies of a set of data and placing them on separate physical nodes (instead of drives). That does allow a bit of scalability, but it is not as efficient as, say, RAID 6. So what we will get into next is sharding. Sharding is a lot closer to what we hardware people would think of as RAID 6: you are breaking one overall data set into pieces and spreading those pieces across physical nodes. For refresher purposes, let’s throw in a graphic. I’ve modified it slightly, since this diagram makes more sense to me given the comparison above.

Now, if we add in shards, this is what it looks like.

I have just 3 nodes there, but you can scale much bigger. This sharding is automatic and built-in; there is no need for 3rd-party add-ins. Rebalancing is done automatically as you add or remove nodes. A bit of planning is needed as to how you want to distribute the data. Sharding is applied based on a shard key, defined by a data modeler, which describes the range of values that will be used to partition the data into chunks – much like you would use a key on a map to tell you how to use it. There are three components to a sharded cluster (a quick mongo shell sketch follows the list below).

  • Query Router (mongos) – This provides an interface between client apps and the sharded cluster.
  • Config Server – Stores the metadata for the sharded cluster. The metadata includes the location of all the chunks and the ranges that define them (each shard is broken into chunks). The query routers cache this data and use it to direct read and write operations to the correct shards. Config servers also store authentication information and manage distributed locks. They must be unique to a sharded cluster and store their information in a “config” database. Config servers can be set up as a replica set, since keeping them available is necessary for the sharded cluster.
  • Shard – This holds a subset of the data. Each shard can be deployed as a replica set.
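
To make the shard key idea a bit more concrete, here is a minimal mongo shell sketch. The database, collection, and field names are just hypothetical examples I made up:

// Run these against a mongos query router, not an individual shard
sh.enableSharding("salesdb")                                          // allow collections in this database to be sharded
sh.shardCollection("salesdb.customers", { region: 1, customerId: 1 }) // shard key on region + customerId
sh.status()                                                           // shows the shards, chunks, and how they are spread out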

There are three types of sharding strategies: Ranged, Hashed, and Zoned (a quick sketch of each follows the list).

  • Ranged Sharding – You create the shard key by giving it a range, and all documents within a range are grouped on the same shard. This approach is great for co-locating data, such as all customers within a specific region.
  • Hashed Sharding – Documents are distributed across shards more evenly using an MD5 hash of the shard key. This optimizes write performance and is ideal for ingesting streams of time-series or event data.
  • Zoned Sharding – Developers can define specific criteria for a zone to partition data as needed by the business. This allows much more precise control over where the data is physically placed. If a customer is concerned about data locality, this is a great way to enforce it. Reasons might include GDPR compliance, etc.
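
Here is a rough mongo shell sketch of what each strategy looks like in practice. The collection names, field names, and zone/shard names are hypothetical:

// Ranged: documents with nearby shard key values land in the same chunks
sh.shardCollection("salesdb.customers", { region: 1, customerId: 1 })

// Hashed: the key is hashed first, spreading writes evenly across shards
sh.shardCollection("salesdb.events", { deviceId: "hashed" })

// Zoned: assign a shard to a zone, then pin a range of shard key values to that zone
sh.addShardToZone("shard0001", "EU")
sh.updateZoneKeyRange("salesdb.customers",
                      { region: "EU", customerId: MinKey },
                      { region: "EU", customerId: MaxKey },
                      "EU")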

You can learn more by watching the MongoDB webinar on sharding here.

Data Security

MongoDB has a number of security features that can be taken advantage of to keep data safe – something that is becoming more and more important with the ever-increasing amount of personal information being kept and stored. The main features are:

  • Authentication. MongoDB offers integration with all the main external authentication methods: LDAP, Active Directory, Kerberos, and x.509 certificates. You can take this a step further and implement IP whitelisting as well.
  • RBAC. Role-Based Access Control allows granular permissions to be assigned, either to a user or to an application. Developers can also create specific views that show only the pertinent data as needed (see the sketch after this list).
  • Auditing. This needs to be configured, but auditing is available and can be output to the console, a JSON file, or a BSON file. The operations that can be audited include schema changes, replica set/sharding events, authentication and authorization, and CRUD operations (create, read, update, delete).
  • Encryption. MongoDB supports both at-rest encryption and transport encryption. Transport encryption is handled via TLS/SSL certificates, which must have a minimum key length of 128 bits. As of 4.0, TLS 1.0 is disabled if 1.1 is available. Either self-signed or CA-signed certificates can be used, and identity verification is supported for both clients and server node members. FIPS is supported, but only at the Enterprise level. Encryption at rest is handled by the storage engine encryption introduced in 3.2; the default is AES256-CBC.
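
To give a feel for the RBAC piece, here is a quick mongo shell sketch that creates a read-only user and a view exposing only a couple of fields. The user, database, and field names are all made up for illustration:

// Create a user that can only read the "reporting" database
use admin
db.createUser({
  user: "reportReader",
  pwd: "ChangeMe123",
  roles: [ { role: "read", db: "reporting" } ]
})

// Create a view that exposes only non-sensitive fields from a collection
use reporting
db.createView("customerContacts", "customers",
              [ { $project: { name: 1, email: 1, _id: 0 } } ])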

Next up we will go over a bit of what the hardware should look like.

Intro to MongoDB (part 1)

I don’t like feeling dumb. I know this is a weird way to start a blog post. I detest feeling out of my element and inadequate. As the tech world continues to inexorably advance – exponentially even – the likelihood that I will keep running into those feelings becomes greater and greater. To try to combat this, I have a number of projects in the works to learn new products. Since there is a title on this blog post and I have shortsightedly titled it with the tech that I will be attempting to learn, it would be rather anticlimactic to say what it is now. Jumping in….

What is MongoDB?

The first question is: what is MongoDB, and what makes it different from other database programs out there, say MS SQL, MySQL, or PostgreSQL? To answer that, I will need to describe a bit of the architecture. (And yes, I am learning this all as I go along. I had a fuzzy idea of what databases were and have used them myself for many things, but if you had asked me the difference between a relational and a non-relational DB, I would have had to go to Google to answer you.) The two main types of databases out there are relational and non-relational. Trying to figure out the simple difference between them was confusing. The simplest way of defining it is the relational model definition from Wikipedia: “…all data is represented in terms of tuples, grouped into relations.” The issue with this was it didn’t describe it well enough for me to understand, so I kept looking for a simpler definition. The one I found and liked (on Stack Overflow) is the following: “I know exactly how many values (attributes) each row (tuple) in my table (relation) has and now I want to exploit that fact accordingly, thoroughly, and to its extreme.” There are other differences – such as how relational databases are more difficult to scale and work with on large datasets. You also need to define all the data you will need before you create a relational DB; unstructured data or unknowns are difficult to plan for, and you may not be able to plan for them at all.

So, what is MongoDB then? Going back to our good friend Wikipedia, it is a cross-platform, document-oriented database program. It is also defined as a NoSQL database. There are a number of different types of NoSQL databases though (this is where I really start feeling out of my element and dumb). There are:

  1. Document databases – These pair each key with a complex data structure known as a document. Documents can contain many different key-value pairs, key-array pairs, or even nested documents (an example document follows this list).
  2. Graph stores – These are used to store information about networks of data, such as social connections.
  3. Key-value stores – These are the simplest NoSQL databases. Every single item in the database is stored as an attribute name (or key) together with its value.
  4. Wide-column stores – These are optimized for queries over large datasets, and store columns of data together instead of rows. One example of this type is Cassandra.
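
To make the document model a little more concrete, here is what a single (completely made-up) customer document might look like in MongoDB – key-value pairs, a key-array pair, and nested documents all in one record:

db.customers.insertOne({
  name: "Ada Lopez",
  email: "ada@example.com",
  addresses: [                                   // an array of nested documents
    { type: "home", city: "Austin", state: "TX" },
    { type: "work", city: "Dallas", state: "TX" }
  ],
  tags: [ "vip", "newsletter" ]                  // a simple key-array pair
})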

Why NoSQL?

In our speed-crazed society, we value performance. Sometimes too much. But still, performance. SQL databases were not built to scale easily or to handle the amount of data that some orgs need. To that end, NoSQL databases were built to provide superior performance and the ability to scale easily. Things like auto-sharding (distribution of data between nodes), replication without third-party software or add-ons, and easy scale-out all add up to high-performing databases.

NoSQL databases can also be built without a predefined schema. If you need to add a different type of data to a record, you don’t have to recreate the whole DB schema; you can just add that data. Dynamic schemas make for faster development and less database downtime.
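
As a quick illustration of that point (field names made up), two documents with different shapes can live in the same collection with no schema change and no downtime:

db.inventory.insertOne({ sku: "widget-1", qty: 100 })
// later on, new attributes show up – no ALTER TABLE, no rebuild
db.inventory.insertOne({ sku: "widget-2", qty: 50, color: "red", dimensions: { w: 10, h: 4 } })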

Why MongoDB?

Data Consistency Guarantees – Distributed systems occasionally get a bad rap for eventual data consistency. With MongoDB, this is tunable, down to individual queries within an app. Whether something needs to be near-instantaneous or has a more casual need for consistency, MongoDB can do it. You can even configure replica sets (more about those in a bit) so that you read from secondary replicas instead of the primary for reduced network latency.
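
Here is a rough sketch of what that per-operation tuning looks like in the mongo shell (the collection name is hypothetical):

// Wait until a majority of replica set members acknowledge the write
db.orders.insertOne({ item: "book", qty: 1 },
                    { writeConcern: { w: "majority", wtimeout: 5000 } })

// Let this read come from a nearby secondary instead of the primary
db.orders.find({ item: "book" }).readPref("secondaryPreferred")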

Support for multi-document ACID transactions as of 4.0 – So I had no idea what this meant at all and had to look it up. What it means is that if you needed to make a change to two different documents (think two different tables) at the same time, you were unable to do that atomically before 4.0. NOW you can do both at the same time. Think of a shopping cart and inventory: you want to remove the item from your inventory as the customer is buying it, and you would want those two changes to happen together. BOOM! Multi-document transaction support.
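
Here is a minimal sketch of that shopping cart idea as a multi-document transaction in the 4.0 mongo shell. The "shop" database and its collections are hypothetical, and transactions require a replica set:

const session = db.getMongo().startSession()
session.startTransaction()
try {
  const shop = session.getDatabase("shop")
  shop.inventory.updateOne({ sku: "widget-1" }, { $inc: { qty: -1 } })    // take it out of stock
  shop.orders.insertOne({ sku: "widget-1", qty: 1, status: "purchased" }) // record the sale
  session.commitTransaction()  // both changes become visible together
} catch (e) {
  session.abortTransaction()   // or neither does
  throw e
} finally {
  session.endSession()
}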

Flexibility – As mentioned above, MongoDB documents are polymorphic, meaning they can contain different data from other documents with no ill effects. There is also no need to declare anything up front, as each document is self-describing. However… there is such a thing as schema governance. If your documents MUST have certain fields in them, schema governance can step in and impose structure to make sure that data is there.
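
In practice, schema governance is a JSON Schema validator attached to a collection. A small sketch, with made-up collection and field names:

db.runCommand({
  collMod: "customers",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: [ "name", "email" ],   // these fields MUST be present
      properties: {
        name:  { bsonType: "string" },
        email: { bsonType: "string" }
      }
    }
  },
  validationAction: "error"            // reject documents that don't comply
})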

Speed – Taking what I talked about above a bit further, there are a number of ways and reasons why MongoDB is much faster. Since a single document is the place for reads and writes for an entity, pulling data usually requires only a single read operation. The query language is also much simpler, further enhancing speed. Going even further, you can build “change streams” that let you trigger actions based on configurable events.
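
And a quick sketch of a change stream watching a hypothetical orders collection for new inserts (change streams need a replica set):

const watcher = db.orders.watch([ { $match: { operationType: "insert" } } ])
while (watcher.hasNext()) {
  const change = watcher.next()
  print("New order received: " + JSON.stringify(change.fullDocument))
}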

Availability – This will be a little longer since there is a bit more meat on this one. MongoDB maintains multiple copies of data using a technology called replica sets. They are self-healing and will auto-recover and fail over as needed. The replicas can be placed in different regions, as mentioned previously, so that reads can come from a local source, increasing speed.

In order to maintain data consistency, one of the members assumes the role of primary. All others act as secondaries, and they replay the operations from the primary’s oplog. If for some reason the primary goes down, one of the secondaries is elected primary. How does it decide, you may ask? I’m glad you asked! The election is based on who has the latest data (determined by a number of parameters), who has the most connectivity with the majority of other replicas, and, optionally, user-defined priorities. This all happens quickly and is decided in seconds. When the election is done, all the other replicas start replicating from the new primary. If for some reason the old primary comes back online, it will automatically discover its role has been taken and will become a secondary. Up to 50 members can be configured per replica set.
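
For reference, here is roughly what setting up a replica set with user-defined priorities looks like in the mongo shell. The host names are made up; a priority of 0 means that member can never be elected primary:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "node1.example.local:27017", priority: 2 },  // preferred primary
    { _id: 1, host: "node2.example.local:27017", priority: 1 },
    { _id: 2, host: "node3.example.local:27017", priority: 0 }   // never becomes primary
  ]
})
rs.status()  // shows which member currently holds the primary role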

Well, that’s enough for one blog post as I’m already at about 1,200 words. The next post will continue with sharding and more MongoDB goodness.

Windows Bare Metal Recovery on Rubrik’s Andes 5.0

Rubrik Andes 5.0 is out! There are so many features that have been added and improved upon. One of the many things that has me excited is Bare Metal Recovery. While virtualization has pretty much taken over the enterprise world, there are still reasons to have physical machines. Whether a workload is unable to be virtualized or its presence on a physical machine is a business requirement, there still exists a need for physical protection. So I wanted to do a quick walkthrough to share how Rubrik has improved its ability to perform bare metal recovery (and to go through it myself!). I won’t go into how to create SLA policies or what they mean here, since there are plenty of good resources ( https://tinyurl.com/yb9k63ax). Let’s get started!

In order to create the PE boot disk, you will need to download the ADK from Microsoft. This will download the PE environment. What is needed is a boot CD that gives you a PowerShell prompt and network access. You can download the Microsoft ADK from this page: https://docs.microsoft.com/en-us/windows-hardware/get-started/adk-install or the download link here: https://go.microsoft.com/fwlink/?linkid=2026036 What you are downloading here is the setup file that will then download everything else. All the files downloaded will probably be around 7GB. I believe you should also be able to use a Windows Server install CD and go into recovery mode; I have not tested this yet, however.

Once you download the ADK and install that, you will need to download Rubrik’s PE install tool. This is a PowerShell script that will aggregate the files needed and compile them into an ISO that can then be used on a USB key etc.

Download the PE install from Rubrik’s Support site: https://www.rubrik.com/support/

(If you don’t already have one, you will need a username and password to log in)

Once downloaded, you need to unzip the files directly on your C drive.

Those two folders will be created.

Now open an Admin PowerShell prompt and run the following – you may need to adjust the beginning to account for where the .ps1 file is.

.\CreateWinPEImage.ps1 -version 10 -isopath C:\WinPEISO -utilitiespath C:\BMR

It will then begin running the ISO creation process.

At the end of this, a bootable ISO will be created in the location you specified in the PowerShell command above (C:\WinPEISO), as shown in the figure.

You will use this ISO later to boot the physical machine you are restoring.

Normally the next parts would already be done, since you are interested in the restore. I put this in though, since I was creating everything as I was writing this post and someone out there might not be aware of how to protect the machine in preparation for BMR.

After the machine has been installed with Windows and any other software needed, you should install the Rubrik Connector on the machine and add it to the Rubrik cluster. To do that, you need to log into the Rubrik cluster GUI, click on Windows Hosts, and then Add Windows Hosts, as shown here in the picture.

You are then presented with a popup window where you can download the Rubrik Connector and install it. After it is installed, click “Add”

The next step is to install the Volume Filter Driver. You can do that by clicking on the checkbox in front of the machine and then clicking on the ellipsis in the top right corner. To be clear, you don’t “need” the Volume Filter Driver for this to work, but it does a better job of keeping track of changes and should allow you to take faster incrementals. It performs the same job as RCT in Hyper-V or CBT in VMware.

Click on the checkbox in front of the machine. Then click on the ellipsis, and click on “Install VFD”

The host will need to be restarted (as you can see in the picture on the right side – it is really easy to tell if the VFD is installed from the column on the right; it may take a few minutes for Rubrik to register the fact that you rebooted). Once the host is restarted, you will need to take a snapshot of the machine. You can do this by clicking on the Windows host name itself and clicking “Take On Demand Snapshot”.

This will launch a short wizard to take the snapshot

I’m going to click on Volumes and then the ‘C’ drive. The next screen is to add the machine to an SLA protection policy. You don’t have to, but you should. This will keep it continually protected according to the SLA you choose. Click on “Finish” and watch the magic occur.

So in case you were wondering… my first error above was because the machine was not part of a domain. Some of the required permissions need the machine to be part of a domain.

Once the backup has been completed, you will see a dot on the calendar telling you that a snapshot exists on that day.

In order to restore this, click on that dot. You will then see the snapshots available, as shown. At the end of the line there is an ellipsis you can click on to take action on the snapshot.

Now, you CAN choose individual files with the “Recover Files” option, but that won’t help you perform a Bare Metal Restore. The option you are looking for is “Mount”. When you choose “Mount” you will get a new popup showing the drives available. Click on the C: drive and any others you need, then click “Next”. The next window gives you more options. You can either mount the snapshot on a host (if you just have a volume that needs to be restored) or, since we are doing a bare metal restore, click on No Host at the bottom to expose an SMB share with the VHDX file.

In order to preserve security around the share, you will need to enter who is allowed to access it. You do that by entering an IP address. ONLY the IP addresses you input will be able to access the share. You probably don’t know what IP address you need to put in here yet, so start your physical machine up with the PE CD and then use ipconfig in the open command line window to find the IP to use.

After the drive in Rubrik is mounted, you can find it by going to the Live Mount menu on the side and selecting Windows Volumes. When you hover over the name, it will give you the option of copying the SMB share to your clipboard. When you move your mouse down to the Path all you need to do is click on it to add to your Clipboard.

The bottom image is what the share will show in case you’re curious.

Since your machine has already been started with the PE ISO, the next step is to run the PowerShell command to begin the restoration process. The PowerShell command is shown to you in the window shown above, but here is an example of what you might use:

powershell -executionpolicy bypass -file \\192.168.2.52\xm5fuc\without_layout\RubrikBMR.ps1

The part in blue will be unique to that mount and needs to be changed. Once you hit enter, a flurry of activity will happen and you will see it copying the data over. Grab a coffee or a nice single malt scotch – either will do depending on the time of day. I have a video below of what happened after I hit enter. It restored 22 GB of files on the C:\ drive in about 15 min, over a 1Gb Ethernet connection and using a single virtual node. In closing, I love this feature and the additional refinements made in this version.

Don’t Backup. Go Forward.

3 Months in…..

Slightly over 3 months in now at my first role as Technical Marketing Engineer with Rubrik, Inc. and I couldn’t be happier. The job itself brings new things often enough that I don’t feel bored. And my team is amazing – I couldn’t ask for a more supportive group of people. The more I work with them, the better it gets. The breadth of knowledge and insight they bring to the table helps me immensely. As I’m sitting here at my computer on a Friday night feeling thankful, I thought I would do a quick recap of some of the projects I’ve already been working on and things I’ve done.

First month:

Started off with bootcamp and began learning about the product. Like past roles with most of the companies I’ve worked for, this was once again like drinking from a firehose. To be clear, I do not feel like I have even scratched the surface of the products, even after 3 months. Along with trying to ramp up on the product and forgetting countless names of co-workers (yeah, I’ve got an exceedingly bad memory for names and I definitely apologize to all those whose names I’ve forgotten… ever), I also attended VMworld. While there, I caught up with countless friends and started experiencing the high regard that people hold for Rubrik. I was able to meet customers who also loved the product, and some even went as far as sharing/paying for cab rides with me and of course dinners. I was able to help set up the Rubrik demo labs and felt like I was starting to contribute to the team. I also did my first-ever VMworld presentation, in conjunction with Pure Storage.

Second month:

The second month, I started warming up with presentations, with a product demo and a webinar. Both helped calm some of my jitters about presenting in front of people. I’ve always been nervous about presenting and about imposter syndrome. But I also hate that feeling, which was a major reason why I wanted to be (and was) a VMware Certified Instructor while working at Dell EMC, and a large reason for moving to this role. I’ve always been decently introverted and have worked hard to try to come out of my shell. The community has been a large part of what has made it easier for me to do so. During the month, I ended up back at HQ for team meetings and more team-building activities. This is one of the first teams that I’ve worked for that has truly worked hard at bringing their employees together and becoming family. To end the month out, I started preparing for my first VMUG presentation.

Third month:

The third month, I traveled to Phoenix, AZ for the VMUG UserCon there. I gave my first presentation to a… not packed house. It was actually better this way, in my opinion, since it allowed me to ease into this sort of thing. It felt more like a conversation, and I tried to get the attendees involved in the presentation with me instead of it just being a slideshow. The last part of the month was finished off by going back to HQ to work in the lab. I’ve always loved lab work, as it presents a clear problem or goal and you can concentrate on that instead of needing to define things first. I admit freely that I’ve confined myself in that sort of environment for too long, though, and need to work on my creative side. Which is why we are going to try to blog more.

So what’s next on the agenda? First thing, I have a vacation planned. First one in 5 years where I am actually going somewhere. Heading to a remote cabin in Colorado to spend a week. Get some creative juices flowing again and some rest. I will be visiting a few friends from VMware up there and enjoying a few libations with them. Hopefully the ideas for some blog posts will show up and I’ll begin writing those. After that, I’m still doing a ton of learning. Trying to get a lot better at my Spanish and Rubrik’s products along with the products we support (Azure, Hyper-V, AWS, etc.). I’m sure there will be more information coming out of those. To be continued….

New Beginnings

It was with a bit of regret and a small bit of fear that I turned in my 2 weeks’ notice last week. Even though I technically left Dell 2.5 yrs. ago, Dell wasn’t done with me yet and decided to buy the company I moved to. So essentially, I worked for Dell in some capacity for the last 6 yrs. During that time, I did a bit of everything from front-line phone tech to VMware Certified Instructor. I learned a ton of IT that you never really see until you work in larger environments and made some great life-long friends. I really enjoyed teaching and the feeling I may have helped my students along in their career, and because of that, I decided to get more into the education side of IT. To do this, I moved over to EMC to be a Content Developer for the Enterprise Hybrid Cloud solution (1 month after I joined EMC, Dell announced the buyout and I once again became a Dell employee). I helped develop classes for that for a while before going down the path of Lab Architect.

Shortly after I started the Lab Architect role, I was approached with the possibility of blending all the things I love into a single position, with the sweet addition of getting paid for it as well: training, talking with customers, building POCs, and blogging. I love the idea of trying to help people with the work that I do, and as I get older (ugh) I personally feel that I need to make more of a difference by trying to help people. I believe this position will allow me to do that. I greatly appreciate all the help that everyone has given me up to now and continuing. The VMware community is one of the best communities I’ve ever been a part of and, God willing, will continue to be part of for a long while.

Putting this in a different paragraph for the TL;DR crowd: I have accepted a new role as Technical Marketing Engineer for Rubrik. My last day with Dell/EMC is 7/27. I am looking forward to working with a team of people who I greatly admire and respect. I have a ton of catch-up and work to do in the coming months and pray they have the patience for me. I am extremely excited not only about the people that I get to work with but also about the product as well. Rubrik has some really cool technology that I plan on delving way deeper into, and it seems like they have an awesome vision for handling data and making it really easy to manage and control. I look forward to what’s coming….

Tales of a Small Business Server restore……

I know that many of you have gone through your own harrowing tales of trying to bring environments back online. I always enjoy hearing about these experiences. Why? Because these are where learning takes place. Problems are found and solutions have to be devised. While my tale doesn’t involve a tremendous amount of learning per se, I feel there are a few things I discovered along the way that may be useful for someone who has to deal with this later. So let’s begin the timeline.

Backstory

The current server is a Microsoft Small Business Server 2011. This server serves primarily as a DNS/file/Exchange server. It houses about 300-400GB of Exchange data and about 700GB of user data. This machine is normally backed up using a backup product called Replibit. This product uses an onsite appliance to house the data and stage it for replication to the cloud. So theoretically you will have a local backup snapshot and a remote-site backup. As backups always somehow have challenges associated with them, this seems like an appropriate amount of caution. The server itself is a Dell and is more than robust enough to handle the small business’s needs. There are other issues I would be remiss not to mention, like the fact that the majority of the network is on a 10/100 switch, with the single gigabit uplink being used by the SBS server.

Sometime in the wee hours of the morning on Wednesday….

This was when the server was laid low. Don’t know what exactly caused it, as I haven’t performed a root cause analysis yet, and it’s unlikely to happen now. For the future I will be recommending a new course direction for the customer, as I believe there are better options out there now (Office365, standard Windows Server).

I believe that there was some sort of patch that may or may not have happened around the time the machine went down. Regardless, the server went down and did not come back up. It would not even boot in Safe Mode; it would just continually reboot as soon as Windows began to load. Alerts went off notifying of the outage, and the immediate action taken was to promote the latest snapshot to a VM on the backup appliance. This is one of the nice features that Replibit allows. The appliance itself runs on a customized Lubuntu distro, and virtualization duties are handled by KVM. The VM was started with no difficulty, and with a few tweaks to Exchange (for some reason it didn’t maintain DNS forwarding options), everything was up and running.

After 20 minutes of unsuccessfully trying to get the Dell server to start in Safe Mode, Last Known Good Configuration, or any other mode I could, I decided my energies would be better spent just working on the restore. Since the users were working fine and happily on the VM, the decision was made to push the restore to Saturday to minimize downtime and disruption.

Saturday 8:00am…….

As much as I hate to get up early on a Saturday and do anything besides drink coffee, I got up and drove to the company’s office. An announcement had been made the day before that everyone should be out of email, the network, etc. We then proceeded to shut down the VM. Using the recovery USB, I booted into the recovery console and attempted to start a restore of the snapshot that the VM was using to run. I was promptly told “No” by the recovery window. The reason? The iSCSI target could not be created. This being the first time I had used Replibit personally, I discovered how it works: the appliance creates an iSCSI target out of the snapshot, then uses that to stream the data back to the server being recovered. Apparently, when we promoted the snapshot to a live VM, it created a delta disk with the changes from Wednesday to Saturday morning. The VM had helpfully found some bad blocks on the 6-month-old 2TB Micron SSD in the backup appliance, which corrupted the snapshot delta disk. This was not what I wanted to see.

With the help of Replibit support, we attempted everything we could to start the iSCSI target. We had no luck. We then tried creating an iSCSI target from the previous snapshot. This worked. It was a problem, however, because we would lose 3.5 days of email and work. Through some black magic and a couple of small animal sacrifices, we mounted the D drive of the corrupted snapshot with the rest of the week’s data (somehow it was able to differentiate the drives inside the snapshot). I was afraid, though, that timestamps would end up screwing us with the DBs on the server. Due to the lack of any other options, we decided to press forward. The revised plan now was to restore the C drive backup from Tuesday night and then try to copy the data from the D drive of the snapshot using WinSCP. We started the restore at about 11am on Saturday. We were restoring only 128GB of data, so we didn’t believe it would take that long. The restore was cooking at first, 200-350MB/min. But as time wore on… the timer kept adding hours to the estimate and the transfer rate kept dropping. Let’s fast forward.

Sunday 9:20pm

Yes… 30+ hours later for 130GB of data, and we were done with just the C drive. At this point, we were sweating bullets. The company was hoping to open as usual Monday morning, and with those sorts of restore times, it wasn’t going to happen. (I would like to send a special shout-out to remote access card manufacturers – Dell’s iDRAC in this case – without which I would have been forced to stay onsite during this time, and that wouldn’t have been fun.) Back to the fun. The first thing now was to see if the restore worked and the server would come up. I was going to bring it up in Safe Mode with Networking, as the main Exchange DB was on the D drive and I didn’t want the Exchange server to try to come up without that – or any other services that also required files on the D drive, for that matter.

The server started and F8 was pressed. “Safe Mode with Networking” was selected and fingers were crossed. The startup files scrolled all the way down through Classpnp.sys and it paused. The hard drives lit up and pulsed like a Christmas tree. 5 minutes later the screen flashed and “Configuring Memory” showed back up on the screen. “Fudge!” – this is what happened before the restore, just slower. Rebooted, came back to the selection screen, and this time just chose “Safe Mode”. For whatever reason, the gods were smiling on us and the machine came up. The first window up by my hand was a command prompt with an SFC /scannow command run. That finished with no corrupt files found (of course), so I moved on. I then re-created the D drive, as restoring the C drive had overwritten the partition table. I had no access to the network, of course, and needed that to continue with the restoration process. Rebooted again and chose “…with Networking” again. This time it came up.

Now we moved on to the file copy. The D drive was mounted in the /tmp folder (just mounted, mind you, not moved there) on the Linux backup appliance. We connected with WinSCP, chose a couple of folders, and started the copy. Those folders moved fine, so on to some larger ones… Annnnnd an error message. OK, what was the error? File name was too long. Between the path name and the file name, we had files that exceeded 255 characters. This was basically a 2008 R2 Windows server, so there was no real help for files that exceeded that: while the NTFS file system itself can accept a filename including path of over 32k characters, the Windows shell API can’t. Well, crap. This was not going the way I wanted it to. Begin thought process here. Hmmm, Windows says it has a hotfix that can allow me to work around this… This doesn’t help me with the files that it pseudo-moved already, though. I can’t move/delete/rename or do any useful thing to those files, whether in the shell or in Explorer. (I did discover later that I can delete files locally with filenames past 255 characters if I use WinSCP to do so. This does create a lock on the folder, though, so you will need to reboot before you can delete everything.) I can’t run the hotfix in Safe Mode, but I don’t really want to start Windows up in normal mode. I don’t have much choice at this point, so I move the rest of the Exchange DB files over to the D drive. This will allow me to start in regular mode without worrying about Exchange. I then go home to let the server finish the copy of about 350ish GB. A text is sent out that the server is not done, informing the company of the status of our work.

Monday morning 8am

The server is rebooted and it comes up in regular mode – BIG SIGH OF RELIEF – the hotfix files are retrieved and I try to run them. Every one, even though 2008 R2 is specifically called out, informs me that it will not work on my operating system. Well, this is turning back into a curse-inducing moment… again. Through a friend, I learn of a possible registry entry that might let us work with long file names – this doesn’t work either. Through my frantic culling of websites in my search for a solution, I find out there are two programs that do not use the Windows API and so are not hampered by that pesky MAX_PATH variable. (I did find there is a SUBST command I could use at the CLI to try to change the name manually. This is not feasible, though, as one user has over 50k files that would need to be renamed.) Those programs are RoboCopy and FastCopy. FastCopy looks a little dated, I know, but as I found out, it worked really well. On to the next hurdle! These tools require a Windows SMB share to work, so we need to create a Samba share on the backup appliance and reference the mounted snapshot so we can get to it. This works and a copy is set up to test. 5 minutes in… 10 minutes in… seems like it’s working. FastCopy is averaging a little better than 1GB/min transfer speeds as well. Set it up for multiple folders and decide to leave it in peace and go to bed (it is 12am at this point).

Tuesday morning

All files are moved over at this time. Some of them didn’t pull NTFS permissions with them for some odd reason, but no big deal, I’ll just re-create those manually. Exchange needs to be started. Eseutil to the rescue! The DBs were shut down in a dirty state. The logs are also located on the C drive. We are able to find the missing logs, merge everything back together, and get the DBs mounted. At this point, there are just a few “mop-up” things to do. There was one user who lost about 4 days of email, since she was on a lone DB by herself and it was hosted on the C drive. She wasn’t happy, but there wasn’t much we could do about a hardware corruption issue, unfortunately.

Lessons learned from this are as follows (this list is not all-inclusive). You should test the backup solution you are using before you need it. Some things are unfortunately beyond your control, though; corruption on the hardware of the backup device is one of those things that just seems like bad luck. You should always have a Restore Plan B, C, …, however. To go along with this, realistic RPOs and RTOs should be shared with the customer to keep everyone calm. Invest in good whiskey. And the MAX_PATH variable sucks, but it can be gotten around with the programs mentioned above. Happy IT’ing to everyone!

Creating a 2 Tier App for testing

It has been a remarkably long time since my last post, and I apologize for that. Things got in the way…Such as my own laziness, job, laziness. You get the idea.

This blog post was conceived because of the lack of posts out there on this. Granted, I may just be dense, but it took me a while and some help to get this working, and I used a previous blog post from another author as a guide. This post here http://blog.bertello.org/2015/07/building-a-basic-3-tier-application-for-your-home-lab/ was used as a template, but there were a number of problems and things that were left out that caused me issues. So, I took it upon myself to correct those small things and repost it. In full disclosure, I did try to reach out to the blog author, but have not heard back from him yet.

To start out with, a bit about my environment. I created a couple of VMs using my home lab setup of vSphere 6.5. I don’t have anything fancy in it right now, especially since NSX doesn’t run on 6.5 currently. I started the VMs out with what vSphere automatically provisioned for them: 1 vCPU, 2GB of RAM, and a 16GB HD. This can be reduced, of course, since I am just using the CentOS 6.8 minimal install CD and don’t believe there will be a lot of traffic for them to handle. I ran through the graphical setup and set up a hostname and IP address on each of the machines. The goal, of course, is to eventually have these machines on separate network tiers to test out all the features available to us in NSX, such as micro-segmentation (once NSX is supported on vSphere 6.5, of course).

I am using CentOS 6.8 (which is the latest release on 6.x as of this writing) and the main reason why is that I am more familiar with 6.x than 7. Also Linux is free and easy to deploy and doesn’t take much in the way of resources, providing a perfect OS to use. The first thing we need to do is disable the firewall. This IS a lab environment so I am not too worried about hackers etc., and I will be adding NSX firewalls on them later. To accomplish this, type the following:

service iptables save

service iptables stop

chkconfig iptables off

You will do this for both machines. We will concentrate on the database server first. This is only going to be a 2-tier app: we will have a database server and a Web/PHP/WordPress server. You can add more however you want, but this is a good start. Perhaps for a third tier you could add a proxy like the blog post above does. Personally, I was just going to put the client machine on it to access the first two. But it is all up to you – it’s your world, and if you want a happy little tree in there, you put one in there.

Database Server Config

We are going to use MySQL like the original blog.

yum install -y mysql mysql-server mysql-devel

The above line will install all the pieces of MySQL that we will need. We now need to start the service, set it to run at startup, and go through the small setup of creating an admin password, deciding whether we want a default database in addition to the one we create, and deciding whether we want to allow anonymous users and remote root login.

service mysqld start

chkconfig mysqld on

/usr/bin/mysql_secure_installation

Another thing I should note is that it is much easier to copy and paste my commands; to do this I would recommend using PuTTY. We are now going to create our database and set permissions for it.

mysql -u root -p

SELECT User, Host, Password FROM mysql.user;

CREATE DATABASE wordpress;

CREATE USER wp_svc@localhost;

CREATE USER wp_svc@'%';

SET PASSWORD FOR wp_svc@localhost=PASSWORD("Password123");

GRANT ALL PRIVILEGES ON wordpress.* TO wp_svc@localhost IDENTIFIED BY 'Password123';

GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_svc'@'%' IDENTIFIED BY 'Password123';

FLUSH PRIVILEGES;

Exit

You can change the above to whatever parameters you wish; just write them down, as you will need them later. I also bound MySQL to the IP address; you can do that in /etc/my.cnf if you wish. The line is below.

bind_address=192.168.1.81

Obviously, you would change the IP address to the one you are using. And that’s it for the DB.

Webserver Config

The first thing we need to do on this machine is disable the firewall again. We also need to deal with SELinux since, if we don’t, our packets will never leave this machine (something I struggled with until I finally got the help of my good friend Roger H. to figure it out – shameless plug for his blog here: http://www.rhes-tech.com/ – I highly recommend you check him out, as he is a brain when it comes to Linux things). So here is the code we need:

service iptables save

service iptables stop

chkconfig iptables off

In order to keep SELinux from making our life horrible, we are going to set it to Permissive mode. If we fully disable it, it could scream at us. Therefore, use your favorite text editor to edit the /etc/sysconfig/selinux file and change the SELINUX= line to permissive (leave SELINUXTYPE set to targeted). It will look like this:

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

# enforcing - SELinux security policy is enforced.

# permissive - SELinux prints warnings instead of enforcing.

# disabled - SELinux is fully disabled.

SELINUX=permissive

# SELINUXTYPE= type of policy in use. Possible values are:

# targeted - Only targeted network daemons are protected.

# strict - Full SELinux protection.

SELINUXTYPE=targeted

# SETLOCALDEFS= Check local definition changes

SETLOCALDEFS=0

Next we are going to install a ton of stuff.

yum install -y httpd

chkconfig --level 235 httpd on

The above installs the Apache web server and sets it to start at machine startup (if you don’t plan to reboot before testing, start it now with “service httpd start”). Next we need to install PHP, as this is what WordPress requires to run. We will also install the supporting modules.

yum install -y php php-mysql

yum -y install php-gd php-imap php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-snmp php-soap php-tidy curl curl-devel wget

Next we will download the latest version of WordPress (as of this scribbling 4.7) and we will then need to unzip it and then copy it over to the webserver www home directory. Then we will need to add the config to point back to the DB server.

wget http://wordpress.org/latest.tar.gz

tar -xzvf latest.tar.gz

cp -r wordpress/* /var/www/html

cd /var/www/html

cp wp-config-sample.php wp-config.php

Again, using your favorite text editor open the wp-config.php file and change it like below. If you chose different values for your database name and username/password you will need to use that info now.

// ** MySQL settings - You can get this info from your web host ** //

/** The name of the database for WordPress */

define('DB_NAME', 'wordpress');

/** MySQL database username */

define('DB_USER', 'wp_svc');

/** MySQL database password */

define('DB_PASSWORD', 'Password123');

/** MySQL hostname */

define('DB_HOST', '192.168.1.81');

/** Database Charset to use in creating database tables. */

define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */

define('DB_COLLATE', '');

Once this is done you can go to your website to finish the WordPress install. The address should look something like this. You can use the FQDN or IP address.

http://<WebServer-FQDN>/wp-admin/install.php

When done, your site will be up and ready and will look something like this – CONGRATS!

Blog

The first series of blog posts I am going to try to do will be Brown Bag Blogs for the VCP 6 Delta Beta. If you have any ideas or suggestions to make it better, feel free to let me know.