Dell Outlet – Alienware Aurora R7 bargain find

While I was benchmarking the ASUS ROG, I was browsing the Dell.com Outlet looking at their Memorial Day deals. Lo and behold, I found one! The system was configured as follows:

Alienware Aurora R7
i7-8700 (6c/12t)
16 GB RAM (2×8 Hynix 2666 MHz)
256 GB (PC401 Hynix NVMe 1.2)
2 TB (Toshiba 7200 RPM)
Dell Nvidia 1080 Ti 11 GB GDDR5X (says it’s an MSI)
DVD±RW (HL-DT-ST GU90N)
Intel Z370 chipset
850W 80 Plus power supply

And I got all this for the paltry sum of …. $985. Add tax and I was still under $1,100 – less than the ROG. Yes, I am aware it is not an apples-to-apples comparison since it is a laptop versus a desktop. However, I decided to benchmark it as well and compare the two. To be fair, the ASUS was very snappy and impressed me in a lot of ways, from the 120Hz screen to its general responsiveness. It was very crisp and I loved it. The lure of a 1080 Ti for under a grand was too tempting though, so the ROG will be going back to Best Buy and the Alienware will take its place as the main gaming machine. The XPS 13 9380 keeps its spot as the remote-work machine without a fight. The XPS 13 is awesome, but that’s a different post.

To begin with, we have the unboxing. The machine was packaged well, with lots of padding to keep it secure during shipping. It came with an Alienware-branded keyboard and mouse. Both were decent but nothing special. The keyboard looked like it was backlit but turned out not to be. It did have a good feel to the key action, allowing for very fast typing with a satisfying soft click.

The machine itself has plenty of ports, with both USB-C and USB-A, jacks for 7.1 sound, and all the other usual ports expected from an enthusiast machine. Inside it is pretty cramped, with the power supply actually needing to swing out in order to get to the motherboard and components below. There is room for one NVMe drive and four hard drives, although two of them will need to be 2.5″ or smaller. All the other components inside are dwarfed by the 1080 Ti. Once the machine was turned on, I couldn’t help but notice I heard nothing. The processor is liquid cooled, the fans are quiet, and with the sound deadening of the case I almost have to put my head right next to it to hear it operating normally. This may change when I start gaming on it, but it is nice to have a quiet machine in my home office – since I have 7 rack-mount servers on the other side.

As for the components, most of them are generic OEM-sourced parts rather than what you would pick out yourself when putting a system together, but they did really well during my benchmarking. The 1080 Ti is a generation old now that the 2080 Ti is out, but it is still a very capable card, as my benchmarks show. I’ll still go into the parts as much as I can, of course.

Hardware

Processor: Not much new here. Most people are well aware of Intel’s i7-8700 8th-gen Coffee Lake processor. This one runs 6 cores with Hyper-Threading for 12 logical cores, at a base frequency of 3.2 GHz with a turbo boost to 4.6 GHz. TDP is 65W, with a total of 12 MB of cache. It also has onboard graphics in the form of UHD 630.

RAM: My system came with 2×8 GB sticks running at 2666 MHz. The motherboard allows for up to 64 GB at up to 2933 MHz. The newer Aurora R8 allows faster RAM to be used.

NVMe Hynix SSD: This is a part Dell has been using in a lot of their systems. It came originally in my XPS 13 as well (before I swapped it for a Samsung 970 Pro). Some of the other ones I’ve had haven’t fared that well, but I was actually surprised by the numbers on this one. You can buy it from Amazon in bulk packaging for about $50. Add to that the fact that most manufacturers don’t post great numbers for their smaller drives, and this one pleasantly surprised me.

Video Card: There is a lot of info out there on the 1080 Ti, so I won’t go too far into it here other than to say it’s fast. 3584 CUDA cores, 11 Gbps memory speed, and a need for about 250 watts all by itself. It’s a serious card.

Chipset: Intel Z370 wasn’t the top-end chipset, but it offered overclocking features and solid support for things like RAID, Optane, and 1×16 or 2×8 PCIe slot layouts, depending on the design. The motherboard in the Alienware can support two large graphics cards and even has the power connectors dangling to tempt you every time you open the case. Unfortunately, since I don’t have the i7-8700K, the overclocking features remain locked for me.

Software: The normal complement of software is installed: a general Windows 10 Home install with all the extra trimmings that nobody ever wants, plus a McAfee install. Otherwise it’s not too heavy on junk software – just some Alienware and Dell utilities to keep the machine’s drivers and firmware up to date, along with the control software for the case lighting. A few uninstalls and I was ready to start my own installs of the benchmark software and the latest drivers (and Windows updates).

Benchmarks:

Compared to the ASUS ROG, these numbers averaged about 2x better. I started off with 3DMark’s Time Spy and finished with a score of 8846, compared with the ROG’s 4471.

Just because I wanted another comparison, I ran Sky Diver as well, even though that test isn’t specifically meant for higher-end gaming machines. I ended up with roughly 2x the performance again: 45,482 vs the ASUS’s 22,004.

The next test got to stretch the legs, so to speak, of the 1080 Ti. That test was Fire Strike, which is meant for gaming machines. I ended up with a score of 20,811.

The final 3DMark test I ran was the Time Spy 4k resolution test.

I then moved on to the PC Mark 10 benchmark where once again, the desktop made its presence known with a score of 6004 to the 4288 from the ASUS.

I also ran Cinebench R20 on it, and it scored pretty well despite being clocked a bit lower than, say, the i7-7700. I have no doubt that if it were clocked a bit higher it would kick the i7-7700 to the curb.

The multi-core and single-core scores were much higher than the ASUS’s 1745 and 361. Finally, I also wanted to run a benchmark on the PC401 Hynix NVMe drive.

Not bad for an OEM part, but it is obviously still quite a bit behind the Samsung 970 Pro.

Overall, I believe this system to be pretty awesome, and it should last me for a while. It came with a one-year onsite warranty, which gives me plenty of time to work on the new Ryzen build I want to put together over the next 6 months or so. I have a slightly different purpose in mind for the Ryzen, so the Alienware will definitely be hanging around my desk for a while, looking pretty.

New Gaming Rig – ASUS ROG Zephyrus G GA502

I am forever in search of deals when it comes to computer hardware. This is possibly one of the reasons why I have 7 Dell servers sitting in a half-rack in my house (leading to disapproving stares from the wife). When I saw this laptop on sale on Bestbuy.com ($1050), I thought, “What a really good deal!” I had just bought an XPS 13 and an XPS 15 and had no real reason to justify buying it. Fortunately, I was never one to worry about reasons and good sense when it came to good deals on fast computer hardware. This is definitely a weakness.

Normally when making a purchase of this magnitude I try to do a bit more research. But I couldn’t find any benchmarks from the usual places I check. Indeed, the only thing I could find was from a person on another forum who had purchased one and run his own benchmarks. Based on those, and the personal need to know more, I decided to buy the laptop. I further told myself that if I did thorough benchmarks and posted them, I might allow myself to keep the machine, provided it performed well enough. (Dangling that carrot.)

Going into this purchase I was a bit unsure for two reasons. Intel is currently on its 9th-gen Core processors, and its i9 parts are sitting at 8 cores / 16 threads. This machine was sporting the newer AMD Ryzen 7 3750H, a 4-core / 8-thread processor based on 12nm Zen+. I most likely wouldn’t ever purchase an i9 due to the price (no matter how much I would love such a machine), so it wouldn’t really be fair, in my opinion, to compare it to one. It would be much fairer to compare it to, say, an i7. That said, the i7s are still sitting at 6 cores / 12 threads and, on paper, would seem to be quite a bit faster. The second reason is that we are on the cusp of the new Ryzen 3rd-gen Zen 2 chips coming out, and I am really excited about what they will be able to do with the 7nm process. I decided to buy it anyway.

Be aware, there is no bias in this review: I do not work for any computer vendor, and I bought this computer with my own money. That being said, I would not be averse to accepting computers for testing purposes (hint, hint Dell, Lenovo, HP, etc.). So, enough background. Let’s get into it.

Unboxing

The machine is packaged well enough. The box is relatively spartan inside: a few sheets of paper consisting of the warranty information and other bits of info, the laptop, and the power brick. The machine itself is beautifully simple. There are no weird lines, no extravagant lights – just a small logo on the back that lights up red. Overall, I am a big fan of the aesthetics. If I were nitpicking, the material picks up a lot of finger grease, and there is no webcam. The ports are distributed on both sides, with the back reserved for getting rid of hot air. The right side has 2 USB-A 3.0 ports and a Kensington lock (and another vent). The left side has a headphone jack, a USB-A and a USB-C port, HDMI 2.0, Ethernet, and the power plug. The USB-C is not Thunderbolt and can’t provide power to the laptop (not a surprise considering the included brick is a 180W power supply). That said, I think there are plenty of ports for this price point, and while I wouldn’t mind another USB-C with Thunderbolt, I don’t really need it considering the laptop should have enough video oomph to do what I want it to do.

The system is relatively lightweight coming in at just over 4.5 lbs. The bezels on the screen have been shaved down, though not to the same extent as the XPS 15. I took a few pictures side by side to compare.

The Hardware

First, the processor. It is an AMD Ryzen 7 3750H. The ‘H’ denotes the higher-power chips, versus the ‘U’ line, which is the lower-power, longer-lasting option. The rest of the information (and the graphic below) I will pull directly from AMD’s website here.

As you can tell, there are built-in Vega graphics with a total of 10 GPU cores, and the processor is a 35W chip, which will hopefully provide decent battery life.

Next, the memory is Samsung DDR4 RAM running at 2666 MHz. In my laptop it was running in a single slot.

Graphics are a combination of the Vega 10 on-chip graphics and an Nvidia GTX 1660 Ti with Max-Q for better battery life, with 6 GB of Micron GDDR6 RAM. It’s interesting to note that when I was digging into the hardware info, I noticed the Nvidia card was running on only an x8 bus instead of PCIe x16. I guess we will see how this plays out later on. There is a way to boost the Nvidia card to a higher clock speed, so that may take up some of the slack.

For hard drive duties, ASUS decided to go with the Intel 660p NVMe M.2 512 GB drive. This is the first one I have seen come from an OEM PC maker, though I’ve seen them advertised for a while. This drive uses quad-level-cell (QLC) NAND flash instead of the previously common TLC or V-NAND (2-bit MLC) chips found on the much faster Samsung Pro NVMe cards. Generally speaking, a QLC drive provides good performance at a much lower price point than the other two options, with 2 TB drives already under $200. Where the drive slows down is mostly in longer (larger) transfers and long-term endurance. That being said, the drive is still much faster than the 5400 or even 7200 RPM rust drives commonly found in laptops.

The system comes with Realtek everything. The Wi-Fi card is a Realtek 8821CE supporting 802.11ac with a 1×1 antenna. The Bluetooth 5.0 radio is also Realtek (presumably the same card), as is the RJ45 port on the side.

The keyboard is backlit, as expected, with N-key rollover. This means each key is scanned and picked up independently when pressed – allowing you to press a lot of keys at the same time (useful for key combos in games, for example) and have each stroke picked up flawlessly. The backlight is white, non-RGB; it doesn’t have lighting zones or any of the other higher-end capabilities. That being said, I like the white, and the key press is solid and feels good when typing, with decent key travel. Again, to be nitpicky, the keys seem a little offset, which makes it a little harder to get comfortable typing, and the touchpad, while capable enough, is a bit on the small side compared to other laptops. It is also a grease magnet…

The final piece of hardware I was excited about on this laptop was the 120Hz 1080p non-touch LED panel. This is a vIPS, non-glare (no shiny glass, yayyyy!) panel capable of either a 60Hz or 120Hz refresh rate. The video card should have no trouble hitting the higher refresh rate in games as long as you aren’t trying to run crazy settings. The increased refresh rate should be a lot easier on the eyes for office tasks as well. So far, the panel is pretty smooth in regular Windows 10 transitions and I’m enjoying it immensely. There isn’t a whole lot of info (none, actually) that I have been able to find for this panel. The actual model is a Sharp LM156LF-GL, a 6-bit IPS panel. I can’t find a driver for it or any other info. The viewing angles are good and I don’t see any light bleed on the edges, which is nice.

As for the battery, I can’t find anything in the listed specs other than claimed battery life, which we all know is bogus. There will be testing on that. HWiNFO lists the manufacturer as ASUSTeK with a capacity of 75,887 mWh, or approximately a 76 Wh battery. Not bad, but not quite as hefty as, say, the 97 Wh option in my XPS 15.

The Software

Nothing out of place here. The system comes with Windows 10 (1809) and has minimal bloat – beyond what already comes with Windows. ASUS kept their own software relatively minimal, even eschewing McAfee (yayyy). There are some handy programs loaded: MyASUS, which connects your machine to ASUS’s website to keep it up to date with drivers and firmware; Sonic Studio, a sound and recording program; the Realtek audio console; GameVisual, which allows you to change screen settings for different profiles; and finally Armoury Crate. That last program lets you change settings on your computer (speed, fan acoustics, key combos, and more) for different game profiles. You can also connect to it with a mobile app on your phone so you can change things while you are doing other stuff on your computer. Overall, I like the fact that the ROG, like my XPS, seems to stay pretty clean. I just wish Microsoft would stop trying to force their extra apps on us.

Benchmarks

To start with, let me go over the setup. I have installed nothing but driver and BIOS updates and Windows 10 updates. Everything should be the latest as of this posting. No tweaks were made; everything is as it came out of the box. The profiles and GPU overclocking mentioned above were not used. Depending on the results, I may go back and try to tweak some of the settings for the NVMe drive, but I want a baseline first. I want to see how well this $1k machine can actually perform from the factory. I bought the benchmark programs (3DMark and PCMark) off Steam myself, and Cinebench will be thrown in for good measure. I can do benchmarks for anything else if anyone has requests, as I am rather new to this more detailed benchmarking thing. The card, according to ASUS’s website, is said to be about as fast as the old 1070. So that is what I am looking for as far as numbers. If those ARE the numbers I get back, I will be extremely impressed with this system for a grand.

The first score was on Time Spy. Once again, no tweaks were made to the system yet. In the graphic I compared it to some i7-7700HQ and 1070 Max-Q machines with 8 GB of RAM, and my results came out pretty favorable. The 5197 score was the highest in the database for that setup.

I then tested it on the Sky Diver 1.0 test and once again got pretty decent results considering the cost and such of this machine. Again, the 29209 score was the highest in the database.

For the PCMark 10 test I added the i7-8750H just for comparison’s sake. Once again it showed well, considering. It is also notable that the 2nd result had a Samsung Pro NVMe drive in it as well as 32 GB of RAM. (One thing to note: I went back after loading Nvidia’s newest driver, re-ran PCMark 10, and got a score of 4345.)

It is harder to compare these to other notebooks in this section because I don’t have much control over what is inside them. I am trying to keep the hardware comparable as much as it is within my control, though. Next, I moved on to Cinebench R20. Both single- and multi-core tests were run.

It lags a bit behind the 7th-gen i7, but that’s not bad considering the clock-speed difference. I then tried tweaking things a bit to see if I could get more performance out of the benchmarks. To do this, I selected the ‘Turbo’ profile in ASUS’s Armoury Crate program and re-ran the same tests. Before I even share the results, I could see a marked difference in the smoothness of the video being rendered. The fans were spinning their little hearts out, but even then it just sounded like a loud breeze going by, not a turbine spinning up. That being said, I’m sure for some this would be a bit loud and they would need headphones for gaming. As far as heat, the bottom got warm but never uncomfortable.

For the third test I loaded Nvidia’s newest driver on the machine. As soon as I did, the machine had trouble switching to the Nvidia card instead of using the Vega graphics. I tried to remedy this by making the Nvidia card the default and re-ran the tests. That seemed to work. I guess loading the Nvidia driver by itself broke the switching, but I would need to test this further.

The left column, in 3rd place, is no profile and the original driver. 1st place in the middle is the Turbo profile and older driver. 2nd place is the new driver and Turbo profile. I’m a little confused by that result, but overall a few MHz of overclock doesn’t make a tremendous difference.

I ran Time Spy again to see how that would stack up. I’m seeing the same results repeat themselves – exactly. In third place, the original drivers and no profile; 1st place, original drivers and Turbo profile; and in 2nd, the new drivers and Turbo profile. You can notice, though, that the 1660 Ti’s clock speed is the same as the original, leading me to believe that ASUS’s program may not be able to overclock it with Nvidia’s driver; yet it still performs better than the original, pointing to improvements in the driver itself.

The results bothered me a bit so I went back and reinstalled and rebooted. After I did that, I ran the Sky Diver again. This time I got the results I expected. The new driver with Turbo profile came in first.

I ran some CrystalDiskMark numbers as well on the Intel 660p just to see where we were sitting. The results look pretty close to the stated specs for the drive. This is quite a bit lower than, say, a Samsung Pro drive would be. For comparison’s sake, I have run the same test on my own Samsung 970 Pro NVMe drive in my XPS 13 9380, shown second.


There is more to speed than just benchmarks but you can see the difference in the numbers shown.

One more “benchmark.” I downloaded Fortnite since that is one of the games I will be playing on it (it’s for my son…). I ran it on Epic settings and set the framerate cap to 120 fps. During the whole game I don’t believe it fell below 65 or so, with the majority of the time running in the 80s – better than I could play it on my desktop with a GTX 1060 3GB card and an i7-2600.

Battery Life

Final test was battery life. I ran a movie on loop at 50% brightness and didn’t do anything else but wait. The movie was an MP4 file streamed from the NVMe drive. Even 50% brightness was more than adequate to watch the movie and at 120Hz the movie looked great. All said and done, the laptop was able to get 4:20 on a full battery. Not too shabby for a larger gaming machine. Of course this would drop quite a bit if gaming.

Conclusion

All said and done, so far it is a pretty sweet laptop and, for the price, a good deal. I find it interesting that they included space for another NVMe drive but not a second RAM slot. That, to me, is just weird. I will be running more games on it to fully determine whether I will keep it, and will update if anyone is interested. If I do decide to keep it, I will definitely be swapping the drives. One other thing: it was a bit difficult getting into the case. ASUS did not design this for easy access, in my opinion.

Can you upgrade and Upsize your VCSA?

While brainstorming about one of our labs, the question was raised of whether you can upsize your VCSA while upgrading to a newer version – specifically, from 6.5 U2 to 6.7 U1 (build 8815520 to 11726888). We wanted to upgrade to the latest version, but we also believed we had outgrown the original VCSA size we deployed. VMware has made this really simple. I did a quick test in my home lab, and that is what this post is based on.

To start, obtain the VCSA .iso that you are going to upgrade to. After you download it, mount it. Just like with a normal install/upgrade, run the appropriate installer; for me that is the Windows one, located under the \vcsa-ui-installer\win32 directory. installer.exe launches the following window:

We chose the Upgrade icon here. The next screen lets you know this is a 2-stage process:

  1. It will deploy a new appliance that will be your new vCenter.
  2. All of your current data and configurations will be moved over from the old VCSA to the new.

After the copy process is complete it will power off the old VCSA but not delete it. Move to the next screen and accept the License Agreement. The third screen looks like this:

Here you put in the information for the source VCSA that you will be migrating from. Once you click Connect To Source, it will ask for more info – specifically, what your source VCSA is being hosted on. This could be a single host or another vCenter.

You will be asked to accept the SSL Certificates. The next screen will ask you for where you are going to put the new appliance. This can be either a host or a vCenter instance.

Step 5 is setting up the target appliance VM. This is the new VCSA that you will be deploying – specifically, what you want to name it and what the root password will be.

Step 6 is where we can change the size of the deployment. I had a Tiny deployment previously and decided that was too small, so this time I want to go one step up to the Small size. You can see the deployment requirements listed in the table below.

Next step is configuring your network settings.

And the last screen of this stage is just to confirm all your settings. This will then deploy the appliance (during which you grab a nice glass of scotch and wait…preferably something nice like my Macallan 12yr).
Once that has finished, you are off to Part 2 of the process: moving your information over. The first screen you are presented with (after it runs checks) is Select Upgrade Data. You are given a list of the data you can move over and the approximate number of scotches you will need for the wait. (Maybe that last part is made up, but hey, you can find out anyway, amirite?)

Since the environment I am moving is relatively pristine, I don’t have much data to move. It estimated 39 minutes, but it actually took less time. You make your decision (it’s pretty straightforward what kind of data you would be interested in) and move to the next screen, which asks whether you want to join VMware’s CEIP (Customer Experience Improvement Program). The last screen before the operation kicks off is a quick summary, with a checkbox at the bottom asking you to confirm you were a decent sysadmin and backed up the source vCenter before starting this process. I personally did not on this one, but like I said, there was no data on it anyway. So we kick off the operation.

Clicking Finish gives you a notification box saying the source vCenter will be shut down once this is complete. Acknowledge that and away we go!

Once it completes successfully, you are given a prompt to log in to your new vCenter, which I have done here – and here is the brand new shiny.

I will also link the video of the process here. The video is about 15 minutes long (truncated from about 45 minutes total). Disclaimers: there are many more things you will need to think about before doing this in a production environment. Among them: will all the versions of the VMware products I have work together? You can find that out by referencing the Interoperability Matrix:

https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop
Interoperability Matrix for VMware products

You also need to make sure you can upgrade from your current version to the selected version by going to the same page above, but using the Upgrade Path tab.

Another really important thing to consider is what order you need to upgrade your products. You can find that for 6.7 here.

https://kb.vmware.com/s/article/53710
Update sequence for vSphere 6.7 and its compatible VMware products (53710)

Intro to MongoDB (part 2)

When I left off in the last part, we were discussing MongoDB’s availability features. We will pick up next with:

Scalability – We’ve gone over replica sets under availability. For those who come from more of an infrastructure/hardware background, a replica set is very close to something like RAID 1: you are creating multiple copies of a set of data and then placing them on separate physical nodes (instead of drives). This does allow a bit of scalability, but it is not as efficient as, say, RAID 6. So what we will get into next is sharding. Sharding is a lot closer to what we hardware people would think of as RAID 6: you are breaking one overall data set into pieces and spreading those across physical nodes. For refresher purposes, let’s throw in a graphic. I’ve modified it slightly; this diagram makes more sense to me given the comparison above.

Now if we add in shards this is what we look like.

I show just 3 nodes there, but you can scale much bigger. This sharding is automatic and built in – no need for 3rd-party add-ons – and rebalancing is done automatically as you add or remove nodes. A bit of planning is needed as to how you want to distribute the data. Sharding is applied based on a shard key, defined by a data modeler, which describes the range of values that will be used to partition the data into chunks – much like the key on a map tells you how to read it. There are three components to a sharded cluster.

  • Query Router (mongos) – This provides an interface between client apps and the sharded cluster.
  • Config Server – Stores the metadata for the sharded cluster. The metadata includes the location of all the sharded chunks and the ranges that define the chunks (each shard is broken into chunks). The query routers cache this data and use it to direct read and write operations to the right shard. Config servers also store authentication information and manage distributed locks. They must be unique to a sharded cluster and store their information in a “config” database. Config servers should be set up as a replica set, since keeping them available is necessary for the sharded cluster.
  • Shard – This is a subset of the data. This can be deployed as a replica set.

There are three types of sharding strategies: ranged, hashed, and zoned (a short code sketch follows the list).

  • Ranged Sharding – The shard key partitions documents by value ranges, so documents with nearby shard-key values are grouped on the same shard. This approach is great for co-locating data, such as all customers within a specific region.
  • Hashed Sharding – Documents are distributed across shards more evenly using an MD5 hash of the shard key, optimizing write performance; this is ideal for ingesting streams of time-series or event data.
  • Zoned Sharding – Developers can define specific criteria for a zone to partition data as needed by the business. This allows much more precise control over where the data is placed physically. If a customer is concerned about data locality, this is a great way to enforce it – for GDPR and similar reasons, for example.
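
For a rough idea of what this looks like from a driver, here is a minimal Python (PyMongo) sketch run against a mongos query router. The database name, collection names, fields, and host are all made up for illustration:

from pymongo import MongoClient

# Connect to a mongos query router (hypothetical address).
client = MongoClient("mongodb://mongos.example.local:27017")

# Enable sharding for a database.
client.admin.command("enableSharding", "telemetry")

# Ranged sharding: documents with nearby "region" values land on the same shard.
client.admin.command("shardCollection", "telemetry.customers", key={"region": 1})

# Hashed sharding: spreads writes evenly, good for time-series/event data.
client.admin.command("shardCollection", "telemetry.events", key={"deviceId": "hashed"})

Zoned sharding layers on top of this by tagging shards with zones and assigning shard-key ranges to those zones (the sh.addShardToZone() and sh.updateZoneKeyRange() helpers in the mongo shell).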

You can learn more by watching the MongoDB webinar on sharding here.

Data Security

MongoDB has a number of security features that can be used to keep data safe, which is becoming more and more important with the ever-increasing amount of personal information being kept and stored. The main features are listed below, followed by a small code sketch:

  • Authentication. MongoDB offers integration with all the main external methods of authentication: LDAP, AD, Kerberos, and x.509 certificates. You can take this a step further and implement IP whitelisting as well.
  • RBAC. Role-Based Access Control allows granular permissions to be assigned, either to a user or to an application. Developers can also create specific views that show only the pertinent data as needed.
  • Auditing. This needs to be configured, but auditing is offered and can be output to the console, a JSON file, or a BSON file. The types of operations that can be audited are schema changes, replica set/sharded cluster events, authentication and authorization, and CRUD operations (create, read, update, delete).
  • Encryption. MongoDB supports both at-rest and transport encryption. Transport encryption is handled with TLS/SSL certificates, which must use a minimum 128-bit key length. As of 4.0, TLS 1.0 is disabled if 1.1 is available. Either self-signed or CA-signed certs can be used, and identity verification is supported for both clients and server node members. FIPS is supported, but only at the Enterprise level. Encryption at rest is handled by the encrypted storage engine introduced in 3.2; the default cipher is AES256-CBC.
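
To make the authentication, RBAC, and encryption bullets a little more concrete, here is a small PyMongo sketch. The hosts, user names, and certificate paths are placeholders, and the exact TLS option names differ slightly between PyMongo versions:

from pymongo import MongoClient

# Connect as an existing administrator (placeholder credentials and host).
admin_client = MongoClient("mongodb://dbAdmin:secret@db01.example.local:27017/admin")

# RBAC: create an application user that can only read/write one database.
admin_client["admin"].command(
    "createUser", "orderApp",
    pwd="changeMe",
    roles=[{"role": "readWrite", "db": "orders"}],
)

# Transport encryption: connect over TLS and validate the server certificate
# against an internal CA bundle (paths are hypothetical).
app_client = MongoClient(
    "mongodb://orderApp:changeMe@db01.example.local:27017/orders",
    tls=True,
    tlsCAFile="/etc/ssl/internal-ca.pem",
)
print(app_client["orders"].list_collection_names())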

Next up we will go over a bit of what the hardware should look like.

Intro to MongoDB (part-1)

I don’t like feeling dumb. I know this is a weird way to start a blog post. I detest feeling out of my element and inadequate. As the tech world continues to advance inexorably – exponentially, even – the likelihood that I will keep running into those feelings becomes greater and greater. To try to combat this, I have a number of projects in the works for learning new products. Since this blog post has a title, and I have shortsightedly titled it with the tech that I will be attempting to learn, it would be rather anticlimactic to announce what it is now. Jumping in….

What is MongoDB?

The first question is: what is MongoDB and what makes it different from other database programs out there, say MS SQL or MySQL or PostgreSQL? To answer that, I will need to describe a bit of the architecture. (And yes, I am learning this all as I go along. I had a fuzzy idea of what databases were and have used them myself for many things, but if you had asked me the difference between a relational and non-relational DB, I would have had to go to Google to answer you.) The two main types of databases out there are relational and non-relational. Trying to figure out the simple difference between them was confusing. The relational model definition from Wikipedia says “…all data is represented in terms of tuples, grouped into relations.” The issue with this was that it didn’t describe things well enough for me to understand, so I kept looking for a simpler definition. The one I found and liked is the following (found on Stack Overflow): “I know exactly how many values (attributes) each row (tuple) in my table (relation) has and now I want to exploit that fact accordingly, thoroughly, and to its extreme.” There are other differences – such as how relational databases are more difficult to scale and to use with large datasets. You also need to define all the data you will need before you create a relational DB; unstructured data or unknowns are difficult to plan for, and you may not be able to plan for them at all.

So, what is MongoDB then? Going back to our good friend Wikipedia, it is a cross-platform, document-oriented database program. It is also classed as a NoSQL database. There are a number of different types of NoSQL databases, though (this is where I really start feeling out of my element and dumb). There are:

  1. Document databases – These pair each key with a complex data structure known as a document. Documents can contain many different key-value pairs, key-array pairs, or even nested documents (see the short example after this list).
  2. Graph stores – These are used to store information about networks of data such as social connections
  3. Key-Value stores – are the simplest NoSQL databases. Every single item in the database is stored as an attribute name (or key) together with its value.
  4. Wide column stores – are optimized for queries over large datasets, and store columns of data together instead of rows. One example of this type is Cassandra.
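
Since MongoDB is a document database, it helps to see what one of those documents actually looks like. Here is a minimal Python (PyMongo) sketch; the collection and fields are invented for illustration:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["store"]

# One self-describing document: key-value pairs, a key-array pair, and a nested document.
customer = {
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "tags": ["vip", "newsletter"],
    "address": {"street": "12 Analytical Way", "city": "London"},
}

result = db.customers.insert_one(customer)
print("Inserted with _id:", result.inserted_id)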

Why NoSQL?

In our speed-crazed society, we value performance – sometimes too much, but still, performance. SQL databases were not built to scale easily or to handle the amount of data that some orgs need. To this end, NoSQL databases were built to provide superior performance and the ability to scale easily. Things like auto-sharding (distribution of data between nodes), replication without third-party software or add-ons, and easy scale-out all add up to high-performing databases.

NoSQL databases can also be built without a predefined schema. If you need to add a different type of data to a record, you don’t have to recreate the whole DB schema, you can just add that data. Dynamic schemas make for faster development and less database downtime.

Why MongoDB?

Data Consistency Guarantees – Distributed systems occasionally get a bad rap for eventual data consistency. With MongoDB, this is tunable, down to individual queries within an app. Whether something needs to be near-instantaneous or has a more casual need for consistency, MongoDB can do it. You can even configure replica sets (more about those in a bit) so that you can read from secondary replicas instead of the primary for reduced network latency.
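
As a sketch of how tunable that is in PyMongo (hosts and names are placeholders): a write concern controls how many replica set members must acknowledge a write, while a read preference can send reads to a nearby secondary.

from pymongo import MongoClient, ReadPreference
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://node1,node2,node3/?replicaSet=rs0")

# Strong write: wait for a majority of replica set members to acknowledge.
orders = client["shop"].get_collection("orders", write_concern=WriteConcern(w="majority"))
orders.insert_one({"sku": "abc123", "qty": 2})

# Relaxed read: let a (possibly slightly stale) secondary answer, which can
# reduce latency when a secondary is closer to the client.
orders_nearby = client["shop"].get_collection(
    "orders", read_preference=ReadPreference.SECONDARY_PREFERRED
)
print(orders_nearby.count_documents({"sku": "abc123"}))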

Support for multi-document ACID transactions as of 4.0 – I had no idea what this meant at all and had to look it up. What it means is that if you need to change two different documents at the same time, you couldn’t do so atomically before 4.0; NOW you can do both in one transaction. Think of a shopping cart and inventory: you want to remove the item from inventory as the customer is buying it, and you want those two operations to happen together. BOOM! Multi-document transaction support.
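
A rough PyMongo sketch of that shopping-cart idea, assuming a replica set (transactions require one) and made-up collection and field names:

from pymongo import MongoClient

client = MongoClient("mongodb://node1,node2,node3/?replicaSet=rs0")
db = client["shop"]

# Decrement stock and record the order as one all-or-nothing unit of work.
with client.start_session() as session:
    with session.start_transaction():
        db.inventory.update_one(
            {"sku": "abc123", "qty": {"$gte": 1}},
            {"$inc": {"qty": -1}},
            session=session,
        )
        db.orders.insert_one(
            {"sku": "abc123", "customer": "ada@example.com"},
            session=session,
        )
# If anything raises inside the block, the transaction is aborted and
# neither collection sees the change.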

Flexibility – As mentioned above, MongoDB documents are polymorphic, meaning they can contain different data from other documents with no ill effects. There is also no need to declare anything up front, as each document is self-describing. However, there is such a thing as schema governance: if your documents MUST have certain fields in them, schema governance can step in and impose structure to make sure that data is there.
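
For the schema-governance piece, MongoDB (3.6 and later) can attach a JSON Schema validator to a collection so certain fields must always be present while everything else stays flexible. A minimal PyMongo sketch with made-up fields:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# Documents in "orders" may contain anything extra, but must at least
# have a string "sku" and a non-negative integer "qty".
db.create_collection("orders", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["sku", "qty"],
        "properties": {
            "sku": {"bsonType": "string"},
            "qty": {"bsonType": "int", "minimum": 0},
        },
    }
})

db.orders.insert_one({"sku": "abc123", "qty": 1, "gift_wrap": True})   # accepted
# db.orders.insert_one({"customer": "ada"})  # would be rejected by the validator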

Speed – Taking the above a bit further, there are a number of reasons why MongoDB can be much faster. Since a single document is the unit of reads and writes for an entity, pulling the data usually requires only a single read operation. The query language is also much simpler, further enhancing speed. Going even further, you can build “change streams” that allow you to trigger actions based on configurable stimuli.
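
Change streams (MongoDB 3.6 and later, replica set required) are close to a one-liner in PyMongo. A small sketch with hypothetical names:

from pymongo import MongoClient

client = MongoClient("mongodb://node1,node2,node3/?replicaSet=rs0")
orders = client["shop"]["orders"]

# Block and react to every insert on the collection as it happens.
with orders.watch([{"$match": {"operationType": "insert"}}]) as stream:
    for change in stream:
        print("New order received:", change["fullDocument"]["sku"])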

Availability – This will be a little longer since there is a bit more meat on this one. MongoDB maintains multiple copies of data using a feature called replica sets. They are self-healing and will auto-recover and fail over as needed. The replicas can be placed in different regions, as mentioned previously, so that reads can come from a local source, increasing speed.

In order to maintain data consistency, one of the members assumes the role of primary. All others act as secondaries and repeat the operations from the primary’s oplog. If for some reason the primary goes down, one of the secondaries is elected primary. How does it decide, you may ask? I’m glad you asked! It decides based on which member has the latest data (determined by a number of parameters), which has connectivity with the majority of the other replicas, and, optionally, user-defined priorities. This all happens quickly and is decided in seconds. When the election is done, all the other replicas start replicating from the new primary. If the old primary comes back online, it will automatically discover its role has been taken and will become a secondary. Up to 50 members can be configured per replica set.
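
For completeness, here is roughly what seeding a three-member replica set with user-defined priorities looks like from PyMongo. Host names are placeholders, and in practice this is usually done once from the mongo shell with rs.initiate():

from pymongo import MongoClient

# Connect to the node that will seed the replica set.
client = MongoClient("mongodb://node1.example.local:27017")

config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "node1.example.local:27017", "priority": 2},  # preferred primary
        {"_id": 1, "host": "node2.example.local:27017", "priority": 1},
        {"_id": 2, "host": "node3.example.local:27017", "priority": 1},
    ],
}
client.admin.command("replSetInitiate", config)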

Well, that’s enough for one blog post, as I’m at about 1,200 words already. The next post will continue with sharding and more MongoDB goodness.

Windows Bare Metal Recovery on Rubrik’s Andes 5.0

Rubrik Andes 5.0 is out! There are so many features that have been added and improved upon. One of the many things that has me excited is Bare Metal Recovery. While virtualization has pretty much taken over the enterprise world, there are still reasons to have physical machines. Whether a workload is unable to be virtualized or its presence on a physical machine is a business requirement, there still exists a need for physical protection. So I wanted to do a quick walkthrough to share how Rubrik has improved its ability to perform bare metal recovery (and to go through it myself!). I won’t go into how to create SLA policies or what they mean here, since there are plenty of good resources (https://tinyurl.com/yb9k63ax). Let’s get started!

In order to create the PE boot disk, you will need to download the Windows ADK from Microsoft. This will download the PE environment; what is needed is boot media that gives you a PowerShell prompt and network access. You can download the Microsoft ADK from this page: https://docs.microsoft.com/en-us/windows-hardware/get-started/adk-install or via the download link here: https://go.microsoft.com/fwlink/?linkid=2026036 What you are downloading here is the setup file that will then download everything else; all the files downloaded will probably total around 7 GB. I believe you should also be able to use a Windows Server install CD and go into recovery mode, but I have not tested this yet.

Once you download the ADK and install that, you will need to download Rubrik’s PE install tool. This is a PowerShell script that will aggregate the files needed and compile them into an ISO that can then be used on a USB key etc.

Download the PE install from Rubrik’s Support site: https://www.rubrik.com/support/

(If you don’t already have one, you will need a username and password to log in)

Once downloaded, you need to unzip the files directly to your C: drive.

Those two folders will be created.

Now open an Admin PowerShell prompt and run the following – you may need to adjust the beginning to account for where the .ps1 file is located.

.\CreateWinPEImage.ps1 -version 10 -isopath C:\WinPEISO -utilitiespath C:\BMR

It will then begin running the ISO creation process.

At the end of this, a bootable ISO will be created in the location you specified in the PowerShell command above (C:\WinPEISO).

You will use this boot media later on to boot the physical machine you are restoring.

Normally the next parts would already be done, since you are interested in the restore. I put this in though, since I was creating everything as I was writing this post and someone out there might not be aware of how to protect the machine in preparation for BMR.

After the machine has been installed with Windows and any other software needed, you should install the Rubrik Connector on it and add it to the Rubrik cluster. To do that, log into the Rubrik cluster GUI, click on Windows Hosts, and then Add Windows Hosts, as shown in the picture.

You are then presented with a popup window where you can download the Rubrik Connector and install it. After it is installed, click “Add”

The next step is to install the Volume Filter Driver. You can do that by clicking the checkbox in front of the machine and then clicking the ellipsis in the top right corner. To be clear, you don’t “need” the Volume Filter Driver for this to work, but it does a better job of keeping track of changes and should allow you to take faster incrementals. It performs the same job as RCT in Hyper-V or CBT in VMware.

Click on the checkbox in front of the machine. Then click on the ellipsis, and click on “Install VFD”

The host will need to be restarted, as you can see in the picture; it is really easy to tell whether the VFD is installed from the column on the right (it may take a few minutes for Rubrik to register the fact that you rebooted). Once the host is restarted, you will need to take a snapshot of the machine. You can do this by clicking on the Windows host name itself and clicking “Take On Demand Snapshot”.

This will launch a short wizard to take the snapshot

I’m going to click on Volumes and then the ‘C’ drive. The next screen is to add the machine to an SLA protection policy. You don’t have to, but you should. This will keep it continually protected according to the SLA you choose. Click on “Finish” and watch the magic occur.

So, in case you were wondering…my first error above was because the machine was not part of a domain. Some of the required permissions need the machine to be part of a domain.

Once the backup has been completed, you will see a dot on the calendar telling you that a snapshot exists on that day.

In order to restore this, click on that dot. You will then see the available snapshots. At the end of the line is an ellipsis you can click to take action on the snapshot.

Now you CAN choose individual files with the “Recover Files” option, but that won’t help you perform a bare metal restore. The option you are looking for is “Mount”. When you choose “Mount” you will get a new popup showing the drives available. Select the C: drive and any others you need, then click “Next”. The next window gives you more options. You can either mount the snapshot on a host (if you are just restoring a volume) or, since we are doing a bare metal restore, click “No Host” at the bottom to expose an SMB share with the VHDX file.

In order to preserve security around the share, you will need to specify who is allowed to access it. You do that by entering an IP address; ONLY the IP addresses you input will be able to access the share. You probably don’t know what IP address you need to put in here yet, so start your physical machine up with the PE boot media and then use ipconfig in the command-line window that opens to find the IP to use.

After the drive is mounted in Rubrik, you can find it by going to the Live Mount menu on the side and selecting Windows Volumes. When you hover over the name, it gives you the option of copying the SMB share path; just click the Path to add it to your clipboard.

The bottom image is what the share will show in case you’re curious.

Since your machine has already been started with the PE ISO, the next step is to run the PowerShell command that begins the restoration process. The command is shown to you in the window above, but here is an example of what you might use:

powershell -executionpolicy bypass -file \\192.168.2.52\xm5fuc\without_layout\RubrikBMR.ps1

The share path portion will be unique to that mount and will need to be changed. Once you hit Enter, a flurry of activity will happen and you will see it copying the data over. Grab a coffee or a nice single malt scotch – either will do, depending on the time of day. I have a video below of what happened after I hit Enter. It restored 22 GB of files on the C:\ drive in about 15 minutes, over a 1Gb Ethernet connection and using a single virtual node. In closing, I love this feature and the additional refinements made to it in this version.

Don’t Backup. Go Forward.

3 Months in…..

Slightly over 3 months in now at my first role as a Technical Marketing Engineer with Rubrik, Inc. and I couldn’t be happier. The job brings new things often enough that I don’t feel bored, and my team is amazing – I couldn’t ask for a more supportive group of people. The more I work with them, the better it gets. The breadth of knowledge and insight they bring to the table helps me immensely. As I’m sitting here at my computer on a Friday night feeling thankful, I thought I would do a quick recap of some of the projects I’ve been working on and things I’ve done.

First month:

Started off with bootcamp and began learning about the product. Like past roles with most of the companies I’ve worked for, this was once again like drinking from a firehose. To be clear, I do not feel like I have even scratched the surface of the products, even after 3 months. Along with trying to ramp up on the product and forgetting countless names of co-workers (yeah, I’ve got an exceedingly bad memory for names, and I definitely apologize to all those whose names I’ve forgotten…ever), I also attended VMworld. While there, I caught up with countless friends and started experiencing the high regard that people hold for Rubrik. I was able to meet customers who also loved the product, and some even went as far as sharing/paying for cab rides with me and of course dinners. I helped set up the Rubrik demo labs and felt like I was able to start contributing to the team. I also did my first-ever VMworld presentation, in conjunction with Pure Storage.

Second month:

In the second month, I started warming up with presentations: a product demo and a webinar. Both helped calm some of my jitters about presenting in front of people. I’ve always been nervous about presenting and have struggled with impostor syndrome. But I hate that feeling, and it was a major reason I wanted to be (and was) a VMware Certified Instructor while working at Dell EMC, and a large reason for moving to this role. I’ve always been decently introverted and have worked hard to try to come out of my shell. The community has been a large part of what has made it easier for me to do so. During the month, I ended up back at HQ for team meetings and more team-building activities. This is one of the first teams I’ve worked on that has truly worked hard at bringing their employees together and becoming family. To end the month out, I started preparing for my first VMUG presentation.

Third month:

The third month I traveled to Phoenix, AZ, for the UserCon there. I gave my first presentation to a…not-packed house. It was actually better that way, in my opinion, since it allowed me to ease into this sort of thing a bit more gently. It felt more like a conversation, and I tried to get the attendees involved in the presentation with me instead of it just being a slideshow. The last part of the month was finished off by going back to HQ to work in the lab. I’ve always loved lab work, as it presents a clear problem or goal and you can concentrate on that instead of needing to define things first. I admit freely that I’ve confined myself to that sort of environment for too long, though, and need to work on my creative side – which is why I am going to try to blog more.

So what’s next on the agenda? First, I have a vacation planned – the first one in 5 years where I am actually going somewhere. I’m heading to a remote cabin in Colorado to spend a week, get some creative juices flowing again, and get some rest. I will be visiting a few friends from VMware up there and enjoying a few libations with them. Hopefully the ideas for some blog posts will show up and I’ll begin writing those. After that, I’m still doing a ton of learning: trying to get a lot better at my Spanish, at Rubrik’s products, and at the products we support (Azure, Hyper-V, AWS, etc.). I’m sure there will be more information coming out of those. To be continued….

VMworld 2018 post-summary

Wow, so there was a ton of activity last week. VMworld 2018 US edition has now passed, and it was amazing. This particular one was pretty special for me, as it marked a number of firsts. While I’ve been before, this is the first time I’ve played a role other than just visiting sessions and HOLs. While that was enjoyable and a great learning experience, being able to experience the setup, breakdown, and behind-the-scenes of what goes on for a company’s booth was completely eye-opening. The sheer amount of work involved was exhausting, and the work continued after hours as well: parties, customer dinners, and planning sessions non-stop. I can’t even begin to say how much I enjoyed working with the Rubrik marketing team and being able to socialize with the great community that is always at these events. But what actually went on? I will describe some of the activities I was able to be part of, along with some of the highlights.

Saturday – I arrived mid-morning and was able to get to my hotel, through check-in, and back to the expo around 10:30-11am. This is where some of the work began for our team. I helped set up the servers and environment for the booth that would be used for demos. Other members of our team were already there and working hard before I even arrived. The expo floor looks really weird at this point, as there is not much put together – just lots of equipment and building blocks lying around. While the construction crew worked on the booth itself, we continued working on the demo environment until about 6ish (with the 2-hour time change for me, it ended up being a long day, having started around 5am CST). We were well taken care of, as most nights we had dinners already planned for us.

Sunday – We continued finishing the demo environment and worked on setting up the demo stations. The construction on the booth was nearing completion and things were really starting to take shape. As a side note, the team that worked on our booth did really well, considering I think ours was one of the best-looking and most ambitious booths there – no bias, of course. Everything was ready to go when the expo floor opened at 5pm for the Welcome Reception. The reception went well and I was able to mill around a bit, finding friends I hadn’t seen for a while. After dinner I pretty much passed out.

Monday – This was another great day: lots of check-ins throughout the day back at the booth, seeing great friends, and getting ready for that night. I also had my first-ever booth presentation, at the Pure Storage booth. It’s been a while since I’ve spoken in front of strangers in this capacity, so it was a bit unnerving. In full disclosure, even when I was an instructor at Dell, I was still a bundle of nerves. I’ve always been a bit of an introvert but am constantly working on trying to change that. What made it even more exciting was that I was allowed to raffle off a couple of VIP passes to bypass the line getting into our party later that night. The presentation went well, and I was able to present Rubrik’s tech and how we integrate with Pure to about 50 attendees.

Moving on from there we had the big party that night. Run DMC and The Roots were the main attraction. Even the DJ music leading up to it was good. Everyone had a lot of fun and we ended up with about 1500+ attendees for the party. There were large lines waiting to get in so the employee bands came in handy.

Tuesday – Recovering from the night before was a little difficult, but I was able to get up and check on the demo machines to make sure everything was running smoothly for the demos. Then I went to see more people I hadn’t seen in forever. The evening was taken up with team meetings and other fun stuff.

Wednesday – Brought an end to the solutions expo, which meant we could start packing everything up. Which we did. We ended up needing to carry some of it over to the next day, but we were able to get the majority of the equipment turned off and organized for packing. Later that night I went to what started as a LAN party but ended up as a Cards Against Humanity game. There may have been a few incidents that involved security being called.

Thursday – We finished up, and then I was able to grab a flight out at 1:50pm and made it home around 9pm. I ended up staying inside for the weekend, as I caught some sort of flu or cold bug (yay, planes and conferences), and I’m still trying to get over it as I write this. Some of the announcements I enjoyed:

Announcements:

20TH Anniversary for VMware!

Tattoos on Pat G./Sanjay P./Yanbing Li – though the permanence of some of them is questionable

vSphere 6.7 Update 1 – This brings a bunch of updates, most notably a fully featured HTML5 client and vMotion and snapshot capabilities for vGPUs.

vSphere Platinum Edition – This new licensing includes AppDefense

New versions of vRealize Operations (7.0) and Automation (7.5)

Amazon RDS on vSphere – Amazon’s managed relational database service running on VMware. This will allow companies to run RDS databases without having to worry about managing them; management can be done through a single, simple interface. You can also use it to create a hybrid setup between on-site and cloud, enabling all sorts of use cases. SQL Server, Oracle, PostgreSQL, MySQL, and MariaDB will all be supported.

VMware Cloud on AWS expansion to the Asia Pacific (Sydney) Region – This means VMware’s presence extends to all major geographies.

Lower price of entry for VMC on AWS – a 3-host minimum, plus license optimization for MS/Oracle apps. There is also a single-host SDDC to test and play around with (this was introduced a bit before VMworld). You can specify host affinity for VMs and the number of cores an application requires.

vSAN on EBS – Scale from 15 to 35 TB per host in increments of 5 TB.

Accelerated live migration – VMware HCX now allows you to migrate just about any VM from on-premises to VMC

Project Dimension – Combines VMware Cloud Foundation (on HCI) with a cloud control plane. So far this is looking like something akin to Azure Stack, where VMware takes care of the hardware and software patching for the SDDC and the customer worries about the apps at the customer site.

ESXi on 64-Bit ARM – details are still light.

These are not every single one of the announcements but the ones I most relate to.

My info was sourced from the following places and …. Being there.

https://www.vmware.com/radius/vmworld-2018-innovation/

https://www.cio.co.nz/article/645860/amazon-relational-database-service-on-vmware-launched-at-vmworld/

https://www.forbes.com/sites/patrickmoorhead/2018/09/04/aws-dell-arm-and-edge-announcements-dominate-vmworld-2018/#31ffd25536c4

New Beginnings

It was with a bit of regret and a small bit of fear that I turned in my two weeks’ notice last week. Even though I technically left Dell 2.5 years ago, Dell wasn’t done with me yet and decided to buy the company I moved to. So, essentially, I have worked for Dell in some capacity for the last 6 years. During that time, I did a bit of everything from front-line phone tech to VMware Certified Instructor. I learned a ton of IT that you never really see until you work in larger environments and made some great life-long friends. I really enjoyed teaching and the feeling that I may have helped my students along in their careers, and because of that, I decided to get more into the education side of IT. To do this, I moved over to EMC to be a Content Developer for the Enterprise Hybrid Cloud solution (1 month after I joined EMC, Dell announced the buyout and I once again became a Dell employee). I helped develop classes for that for a while before going down the path of Lab Architect.

Shortly after I started the Lab Architect role, I was approached with the possibility of blending all the things I love into a single position – training, talking with customers, building POCs, and blogging – with the sweet addition of getting paid for it as well. I love the idea of trying to help people with the work that I do, and as I get older (ugh) I feel that I need to make more of a difference by trying to help people. I believe this position will allow me to do that. I greatly appreciate all the help that everyone has given me up to now and continues to give. The VMware community is one of the best communities I’ve ever been a part of, and God willing, I will continue to be part of it for a long while.

Putting this in a separate paragraph for the TL;DR crowd: I have accepted a new role as Technical Marketing Engineer for Rubrik. My last day with Dell EMC is 7/27. I am looking forward to working with a team of people who I greatly admire and respect. I have a ton of catch-up and work to do in the coming months and pray they have patience with me. I am extremely excited not only about the people I get to work with but also about the product. Rubrik has some really cool technology that I plan on delving much deeper into, and they seem to have an awesome vision for handling data in a way that makes it easy to manage and control. I look forward to what’s coming….

VCIX-NV Objective 2.2 – Configure and Manage Layer 2 Bridging

Underneath today’s objective we have the following topics:

  • Add Layer 2 Bridging
  • Connect Layer 2 Bridging to the appropriate distributed virtual port group

So first, let’s do a little background on what we are doing and, more importantly, why. Layer 2 bridging, in this case, is an ability we have in NSX to take a workload that currently exists only in our NSX world and “bridge” it to the outside world. We could use this to reach a physical server being used as a proxy or gateway, for example. We create this bridge between a logical switch inside NSX and a VLAN on the physical network. I am going to borrow the picture from the NSX guide to try to simplify it a bit more. (Credit to VMware for the pic.)

In the above picture you have NSX VXLAN 5001 running our workload inside ESXi. We need to communicate with the physical workload labeled as such. To do that, we have an NSX Edge logical router with L2 bridging enabled on it. The Bridging tab itself will allow us to choose a logical switch and a distributed port group to be connected together. Here are the steps:

  1. If you don’t already have one, you will need to deploy a new Logical Router. To do that, you will need to go to the NSX Edges subsection of the Networking and Security screen of NSX.
  2. Click on the green + icon on the middle pane
  3. The first information you will need to fill out will be Install Type, and Name. The rest of the options we won’t go over in this walkthrough.
  4. We will need to select Logical Router as the Install Type and then type in a name.
  5. On the next screen, we will need to input a password for the Edge device.
  6. On the Configure Deployment Screen, we will need to add an actual appliance here by clicking on the green + icon.
  7. This pops up a screen for us to select where we wish to place the device’s compute and storage resources.
  8. On the Configure Interfaces screen, I’ve chosen to connect it to my management network. You don’t really need to configure an interface as the actual bridging work will be done by a different mechanism.
  9. You can click past the Default Gateway screen.
  10. Click Finish on the Ready to Complete screen and away you go.

Now the actual bridging mechanism is configured by going into the Edge itself (a rough API-based sketch follows the steps below):

  1. Double click on the Edge device you are going to use for bridging.
  2. Click on Manage, then on Bridging tabs in the center pane.
  3. To add a bridge, click on the green + arrow
  4. Give the bridge a name, select the Logical Switch you are bridging, and the VLAN port group you will be bridging to. (Just as a side note, none of the normal dvPortGroups will show up unless they have a VLAN assigned to them – something I discovered while writing this.)
  5. Once you click Ok, you will exit out to the Bridging screen again, and you will now need to publish your changes to make it work.
  6. Once published, you will have a Bridging ID
  7. You can have more than one bridge using the same Edge device, but they can’t bridge the same networks. In other words, you can’t use bridges to connect the same VXLAN to two different VLAN port groups.
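
The same configuration can also be read (or pushed) through the NSX Manager REST API. Below is a minimal Python sketch that just fetches the current bridging config from the logical router; the manager address, edge ID, and credentials are placeholders, and the endpoint path is taken from my reading of the NSX-v API guide, so verify it against the guide for your version before relying on it.

import requests

NSX_MGR = "https://nsxmgr.lab.local"   # placeholder NSX Manager
EDGE_ID = "edge-5"                     # the logical router doing the bridging

resp = requests.get(
    f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/bridging/config",
    auth=("admin", "VMware1!"),        # placeholder credentials
    verify=False,                      # lab only: self-signed certificate
)
resp.raise_for_status()
print(resp.text)                       # XML describing the configured bridges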

And that covers this objective. Stay tuned for the next objective!

Mike