TFD15 Primer: Datacore

DataCore Software is presenting at their first Tech Field Day event, which I find somewhat surprising, given that the company was founded in 1998 and their flagship product, SANsymphony, was first released in 2011. When you consider that, as well as the fact that their flagship product is a storage virtualization platform, you can likely see where I am coming from. The idea of software-based storage clearly isn't a new one. However, in the past few years, these types of solutions have gone from fringe use cases to the mainstream.

LOOKING THROUGH WINDOWS TO GET TO YOUR STORAGE

One of the characteristics that stood out to me the first time SANsymphony was brought to my attention was that it requires a Windows Server to run on. In a world where storage virtualization runs at the hypervisor level (e.g. vSAN) or on bare metal (e.g. ioFABRIC on a physical node), this seems like an odd choice. The good news is that I am sure this is a topic we will touch on.

There are definitely some potential benefits I can see to this approach, though. For example, why re-invent the operating system if you don't need to? To say that Windows is a widespread operating system is probably an understatement. Along with such a large footprint comes a large list of hardware drivers. Think of all the various HBAs, network cards, and so on that have drivers for Windows. Now take into account that a product like SANsymphony doesn't need to provide those drivers. That immediately eliminates a ton of development overhead.

Now also think about the benefit of being able to tell a customer that your product will likely work in their environment; not only will it work, but chances are they can use their existing hardware. This is where you start to see the logic of this approach. Of course, there might be other reasons we'll uncover – you'll need to tune in to find that out. And since it runs on Windows, you may also find that you don't need a dedicated storage admin.

THOUGHTS

One of the great things about software-based solutions is that if you need a feature, chances are you can write code to accomplish it. With CPUs not slowing down anytime soon, and with core counts per chip continuing to climb, CPU cycles can be cheap. Because of this, SANsymphony is a feature-rich solution. All of the features you would expect to be there are there. And once again, through the beauty of software, additional features can be added in future releases.

Because I only know a bit about DataCore, I am quite looking forward to seeing what they bring to the table. I am hoping that we'll at least touch on what led them down the Windows route, along with what challenges they have faced because of it and how they have overcome them.

Be sure to pop over to the Tech Field Day page to catch DataCore’s presentation. They will be presenting live Friday, September 29th at 10:30 AM PDT.

Disclaimer: I was invited to participate in Tech Field Day as a delegate. All of my expenses, including food, transportation, and lodging, are being covered by Gestalt IT. I did not receive any compensation to write this post, nor was I requested to write it. Anything written above was of my own accord.

VCP7-DTM Exam Experience and Study Tools

Earlier this year I decided that I wanted to renew my VCP cert via a different track (as opposed to getting a VCAP), simply because I was looking to broaden my knowledge. I don't do a lot of deeply technical work on a daily basis, so something like the VCAP might be "overkill" (for lack of a better term). It was a toss-up between the VCP-NV and the VCP-DTM. In the end, DTM won out.

Going into this, I had some Horizon experience. I have set up a couple of Horizon View deployments – one was a 5.x and the most recent one is running 7.x – so I was familiar with a lot of the concepts. What I wasn't expecting was how much material other than Horizon View is actually on the exam.

THE BLUEPRINT

One of the things that I actually like about VMware certification exams is the blueprint. It is by no means a fool-proof way to study for the exam, but it certainly is a fantastic tool for making sure you hit the key points. As mentioned above, I have been somewhat familiar with Horizon View for a while now, but this exam covers a whole bunch of other technologies. Things like VMware Mirage, VMware User Environment Manager, and VMware Identity Manager are all covered. Then you also have to consider things like Workspace ONE – there is a lot to think about!

My first step was understanding what each component was and wasn't. Understanding what each piece is supposed to accomplish is fairly key (at least to me). Once I had my head wrapped around the intended uses, the pieces really started to click.

TRAINING MATERIALS – VIDEOS

I leveraged Greg Shields' VCP7-DTM Pluralsight courses quite heavily. I watched all of the modules once through while marking up the blueprint. The beauty of these videos is that you can easily rewind a few seconds or pause when you need to. After the initial viewing, I spent some time going over my notes and figuring out what I thought were my weak areas. Another great thing about Pluralsight courses is that you can use the transcript to find the exact part you are looking for.

TRAINING MATERIALS – HANDS ON LABS

Further to the videos, I also did a few Hands-On Labs, specifically the labs on Rob Beekman's list, plus a handful of others which I, unfortunately, did not make a note of. I tend to have a hard time with hands-on labs, as I usually just follow the instructions and don't necessarily think about what I am doing. Often I need to remind myself that I am trying to learn, not just trying to make it through a guide.

One aspect that I did find very useful was the ability to use some of these labs as pre-built environments to poke around in. Remember, you don't need to follow the lab guides. If you have an area that you are weak in (e.g. VMware Identity Manager), you can find the appropriate lab, fire it up, and just poke around.

TRAINING MATERIALS – READING

I referenced Sean Massey’s excellent Horizon 7 material. Although it isn’t written out to be a VCP7 study guide, it was immensely useful in seeing how to deploy Horizon 7. In fact, I used this guide as the basis for my current Horizon 7 deployment at work. By following the steps, I was able to go from a newbie to an administrator in relatively short order.

I also referenced a lot of material from Mastering VMware Horizon 7 – Second Edition, written by Peter von Oven and Barry Coombs (a Veeam Vanguard for 2015 & 2016 🙂 ). The book covers Horizon 7, and although there is a lot more than just Horizon on the exam, having another source for detail was very welcome.

TRAINING MATERIALS – HOME LAB

Lastly, my home lab is probably one of the most useful tools I had. As great as the resources above are, I tend to learn by doing. I followed along with the Pluralsight videos a fair bit when building out the lab. For things like setting up SSL certificates or building templates, I used Sean Massey's material to get me through. I also went through the deployment of the components at least a couple of times – in fact, I think I am going to blow away the home lab and rebuild it from scratch now, as there are lots of "artifacts" kicking around. Overall, being able to get things up and running (or not) in the home lab was invaluable. Working through SSL issues or making sure DNS was behaving taught me a lot, albeit in a tedious and sometimes frustrating way.

EXAM EXPERIENCE

So, I don’t think I stated this yet, but I sat this exam twice. I am an expert procrastinator. Being true to that, I waited until three weeks prior to my VCP expiring to sit the exam (VMworld 2017 US). I made sure to cram in the days leading up to the exam, as well as on the plane ride down. Because I didn’t want to spend the whole time thinking about the exam, I booked it for the Sunday … and I failed with something like a 286. By my reckoning, I had probably two questions wrong. Oh well …

With time being a big deal, and the retake policy requiring a 7-day wait, I ended up booking the soonest available slot that worked with my schedule. That was this past Friday … and my VCP is set to expire this Monday. Not much room for error! "Luckily" I had already sat the exam once and knew my weak areas – as mentioned above, Horizon is but one portion of the exam. I'm happy to report that I passed with a 340 – not as high as I would like. But honestly, given the improvement in a short amount of time combined with my rather limited day-to-day experience, I'm happy with the pass. After all, it's not like VMware hands out anything other than a Pass or Fail.

Overall, I really wish I could sit down and review the exam with an instructor. I know that is not possible. It would just be nice to know what I got wrong, and why. I found that there were definitely a few questions that could be open to interpretation. Similarly, I found there were a few questions that seemed to throw in extra information for no apparent reason. I’m not a fan of tests that try to “trick” folks – but I suppose that is where the VCAPs come in. Those exams tend to focus more on proving what you know, and not letting you match your knowledge to the closest answer.

NEXT STEPS

So, where do I go from here? Hopefully, I won't let my VCP get so close to expiration again. More than anything, the stress that comes with the prospect of having to take another $4,500 class was the worst part. I have a pretty big backlog of personal projects to take care of as well. As for certifications, I still have the VMCE-A to polish off, and AWS has been getting my attention. That being said, I would be lying if I said the thought of going for a VCAP wasn't floating around either.

Veeam User Group Q3 2017 Recap

A big thank you to the 50+ attendees who made it out to the Q3 SouthWest Ontario Veeam User Group (SWOVUG) yesterday. As always, it is great to see returning faces as well as new folks. The one downside I personally find as the group grows is that it becomes hard to chat with everyone – not a bad problem to have, though!

We were also fortunate enough to have Paula Melvin from Veeam’s Ohio office come up for the event. In addition to that, the Guest of Honour was Vanny Vanguard – the “unofficial, official mascot of the Veeam Vanguard” (pictured above).

THANK YOU EXAGRID

I’d like to thank Marc Crespi from ExaGrid again for giving a great technical presentation. Based on some quick polling, most folks weren’t familiar with ExaGrid, but I think the vision of their product quickly came through. We had lots of great discussion around what their solution does, how it does it, and how it can be used. Veeam and ExaGrid have been long-time partners, and you can find some more information about their product here: http://www.exagrid.com/exagrid-products/supported-data-backup-applications/veeam-backup/

WHAT’S COMING IN V10

The bulk of the remaining time was spent reviewing upcoming features in Veeam Backup & Replication V10. Chris McDonald and Mario Marquez from Veeam did a bang-up job of walking us through some of the announced features. There were definitely plenty of questions and comments along the way, which is great. I'm a big believer that if we can turn these events into conversations, we'll all get more value from them.

VEEAM CDP

Continuous Data Protection (CDP) was a popular topic, and personally, it is one that I am really looking forward to. The basics are that Veeam Backup & Replication will leverage an existing (and fully supported) VMware API called vSphere APIs for I/O Filtering (VAIO). This allows for RPOs down to 15 seconds (yes, seconds) via the replication engine. Currently, replication requires a snapshot, which causes performance problems ranging from extra I/O on your storage to the VM becoming briefly unavailable due to VM stun. VAIO, on the other hand, does not use snapshots and thus has minimal impact on your infrastructure. Given that it is a VMware API, this will only be available for vSphere.
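
To make the contrast concrete, here is a rough sketch in plain Python. It is purely conceptual – it has nothing to do with the actual VAIO API or Veeam's engine – but it shows the difference between shipping changes at each snapshot and forwarding every write in-line as it happens.

```python
# Conceptual sketch only -- NOT the VAIO API or Veeam's implementation.
# It contrasts "ship changes at every snapshot" with "forward every write".

class SnapshotReplication:
    """Writes accumulate locally and only reach the replica at each snapshot."""
    def __init__(self, interval_seconds: int):
        self.interval = interval_seconds   # worst-case data loss window
        self.pending = []                  # writes made since the last snapshot
        self.replica = []

    def write(self, block: bytes) -> None:
        self.pending.append(block)         # at risk until the next snapshot ships

    def take_snapshot(self) -> None:
        # This is where the VM stun / storage I/O hit happens today.
        self.replica.extend(self.pending)
        self.pending.clear()


class InlineFilterReplication:
    """Every write is intercepted and forwarded immediately -- no snapshot."""
    def __init__(self):
        self.replica = []

    def write(self, block: bytes) -> None:
        self.replica.append(block)         # at risk only for the forwarding delay
```

With the snapshot model, the worst-case data loss equals the snapshot interval; with the in-line model it shrinks to the forwarding lag, which is how RPOs measured in seconds become practical.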

VEEAM NAS SUPPORT

NAS support was another feature that generated a lot of discussion. This one surprised me a bit, as I didn't think we would see much interest, but I was wrong on that. In a nutshell, Veeam Backup & Replication will be able to back up NFS and SMB (although it will probably end up being labeled CIFS, which is technically incorrect) file shares. One key thing to understand is that there are no agents to install on the NAS to accomplish this. We touched on the versioning feature that will be available, as well as rollbacks. The idea behind a rollback is that it only restores what changed, in contrast to a restore, which restores all of the data whether it changed or not. Where this might be a big help is in recovering from a ransomware attack, as illustrated below.
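
To picture the rollback-versus-restore difference, here is a minimal sketch in Python. It is my own illustration, not how Veeam actually implements it: a full restore rewrites every file from the backup, while a rollback compares the live share against the backup and only rewrites the files that no longer match.

```python
# Illustration only -- not Veeam's mechanism. A rollback compares the live
# share against the backup and rewrites only what changed.

def full_restore(backup: dict, share: dict) -> int:
    """Copy every file back from the backup, changed or not."""
    for path, content in backup.items():
        share[path] = content
    return len(backup)                  # number of files written

def rollback(backup: dict, share: dict) -> int:
    """Only rewrite files whose content no longer matches the backup."""
    written = 0
    for path, content in backup.items():
        if share.get(path) != content:  # e.g. encrypted by ransomware
            share[path] = content
            written += 1
    return written

backup = {"a.docx": "v1", "b.xlsx": "v1", "c.txt": "v1"}
share  = {"a.docx": "ENCRYPTED", "b.xlsx": "v1", "c.txt": "v1"}
print(full_restore(dict(backup), dict(share)))  # 3 files written
print(rollback(backup, share))                  # 1 file written
```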

SIDE CONVERSATIONS

One of the things I enjoy most is the conversations I get to have with attendees. I truly wish I had time to talk with everyone, but it is amazing how time flies. A few of the highlights were conversations around replacing USB storage, leveraging Veeam Agents (even though the workloads are virtual), and seeding Agent backup jobs. I won't go into much detail on those, but I wanted to mention them to highlight a point. Part of the reason I organize these user groups is to get a group of IT professionals together in one spot. We are all peers, and there is a lot of brain power in the room. I encourage everyone to take the opportunity to chat with others. Whether they are old acquaintances or someone new, chances are you can have a great conversation with them.

Along those same lines, if you have an interest in presenting at an upcoming meeting, by all means feel free to reach out. I like to stress that, as far as user presentations go, we don't really have much in the way of "rules". What I mean is that it could be 10 minutes, or it could be 30 minutes. Maybe you just solved a problem you can share; chances are others have run into the same issue. On the flip side, maybe you have a problem you are still trying to solve – why not use a room full of smart folks to help crack it?

As always, you are more than welcome to reach out with any questions, comments, or concerns about the SouthWest Ontario Veeam User Group.

Catch Me If You Can – September 2017 Edition

Things have definitely been busy as of late, between VMworld, studying, and yeah … work. September is not shaping up to be any different either. Whenever I have a few "events" close together, I like to toss together a quick post to let folks know where I will be. The point is that if anyone reading this will be there too, hopefully they'll let me know and we can get together for a quick "hi", if nothing else.

So, without much more banter, feel free to find me at some of the events below in September:

  • SouthWest Ontario Veeam User Group (SWOVUG) – I have this on September 12th. This time around, ExaGrid will be helping out with sponsorship. In addition to that, we'll be reviewing what we know about Veeam Backup & Replication V10. The goal is always to make these meetings as interactive as possible, and generally speaking, we run out of time before we can answer everything. A good problem to have, and hopefully one that continues.
  • Toronto VMware User Group – I'm hoping to make it into Toronto for this one. 8 AM starts are always interesting for me, as it typically means I need to be out of the house by about 6:15 AM at the latest. The nice thing about these quarterly meetings is that they tend to be much more intimate compared to the UserCon. Don't get me wrong, I love the UserCon, but I don't think I could do four of those a year.
  • Tech Field Day 15 – I am thrilled that I will be at another Tech Field Day event, this time back in Silicon Valley. The event runs from September 27th to 29th and, as per usual, is shaping up to be another great one. I could probably write a whole post just on the value (for delegates and presenting companies alike), but I'll skip that for the moment. Be sure to watch out for my "primer" posts, which I am aiming to get out prior to the event. I personally like to research the presenting companies beforehand, and writing also helps me process, so these are a good fit for me. If they help others in the process, excellent! This will be my third "full" event, in addition to the TFDx that I was recently at during VMworld 2017.

As always, feel free to reach out and say hi if you happen to see me. I always love catching up with acquaintances, new and old.

Tech Field Day Extra – Kingston Technologies

I was invited to attend a session at Tech Field Day Extra recently as part of VMworld 2017. In my case, the session was being put on by Kingston, the storage and memory folks. I wasn't sure what to expect, given that Kingston has been in the industry for years. 30 years, in fact! There are not too many tech companies that can make that claim. Over that time, they have managed to remain a private company and have grown to have distribution in 125 countries. Not to mention that they have expanded into many different market segments – just take a look at the image above. No small feat!

Kingston is likely well known to most readers, as they have numerous product lines that span many different realms. There is a gaming division for memory products, a consumer division for SSDs, and even an enterprise-focused NVMe division (Kingston Digital). This last division is where we spent most of the time.

After we touched on what NVMe is (essentially the interface that lets the OS talk directly to flash media), we started digging into the details of the newest offerings and their use cases. In particular, we looked at the DCP1000 and DCU1000 drives. The difference between the two is the form factor: the DCP1000 is delivered as a PCIe card, whereas the DCU1000 uses the newer U.2 format. The U.2 drives come in a drive-bay-compatible housing, with four drives packed into each one.

So why the two formats? PCIe add-in cards for storage are quickly approaching the "legacy" point of their life. Sure, they're fast, but they aren't the newest and greatest anymore. The PCIe solution is great for older servers that may not have U.2 bays available. The downsides of this form factor include no hot-swap capability, a tendency to cost more, and difficulty scaling – how many PCIe slots do your servers have? That's not to say you should dismiss the DCP1000: the inner guts of it are still quite impressive.

IT’S WHAT IS ON THE INSIDE THAT COUNTS

The two offerings are available in sizes of up to 4TB, but some of the "secret sauce" for their performance comes from two techniques. First up, each card/drive actually contains four physical storage drives. With this, you can do something like a RAID 0 to present the full 4TB volume, or use software RAID if that fits the use case. A side effect of this approach is density: with a 24-bay system, you can cram 96 drives into it.
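
To put rough numbers to that, here is a quick back-of-the-envelope sketch. The 1TB-per-internal-drive figure is my own assumption, chosen simply to land at the 4TB top-end capacity mentioned above.

```python
# Back-of-the-envelope only -- illustrative numbers, not a Kingston spec sheet.

drives_per_carrier = 4      # four physical NVMe drives inside each card/drive
drive_capacity_tb  = 1      # assumed, to reach the 4 TB top-end capacity

# RAID 0 stripes across all members, so presented capacity is simply the sum.
presented_tb = drives_per_carrier * drive_capacity_tb
print(f"Presented volume: {presented_tb} TB")                       # 4 TB

# Density: a 24-bay chassis full of U.2 carriers holds 24 x 4 physical drives.
bays = 24
print(f"Physical drives in chassis: {bays * drives_per_carrier}")   # 96
```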

From there, PCIe switching presents the drive to the OS and allows it to write to all four drives. Although the switching adds a bit of overhead and degrades performance slightly, in the grand scheme of things it is almost unnoticeable – your CPU would become the bottleneck before the PCIe switching ever did.

LIFE EXPECTANCY

One of the more common concerns that enterprise admins have with flash is its life expectancy. Flash wears out over time, plain and simple. Most vendors compensate for this by adding extra capacity to the unit; once a cell shows signs of wear, a new cell replaces the old one.

With that in mind, Kingston over-provisions 28% of the drive to help ensure endurance and performance. Further to this, the drive carries an endurance rating of one full drive write per day. Although you can find drives with much higher ratings, you generally see quite the uptick in price at that point. An interesting stat from the conversation: roughly 65% of all SATA SSDs deployed into servers have a requirement of one drive write per day. Based on sheer volume, I can see how selling a drive to that segment makes sense.
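
For a sense of what a one-drive-write-per-day rating works out to, here is a quick calculation. The five-year warranty period is my own assumption for illustration; the capacity and DWPD figures come from the discussion above.

```python
# Rough endurance math. DWPD (drive writes per day) converts to total bytes
# written over the warranty period. The 5-year warranty is an assumption here.

capacity_tb  = 4          # top-end capacity mentioned above
dwpd         = 1          # rated endurance: one full drive write per day
warranty_yrs = 5          # assumed for illustration

total_writes_tb = capacity_tb * dwpd * 365 * warranty_yrs
print(f"~{total_writes_tb / 1000:.1f} PB written over the warranty period")
# -> ~7.3 PB for a 4 TB drive at 1 DWPD over 5 years
```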

USE CASES

As much as everyone likes things to be fast, there isn't always a need. Currently, the cost of these drives is somewhere in the $0.85 to $0.90 per GB range – not really that "expensive" given the performance. Could you use this in your gaming rig? Sure, but chances are it will be overkill. One card is capable of yielding about one million IOPS … the storage geek in me thinks back to not that long ago, when that sort of performance would require a full rack of disks.
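
For a rough sense of scale, here is what that per-gigabyte price works out to for a top-capacity card (treating 4TB as 4,000GB to keep the math simple).

```python
# Quick cost estimate at the quoted $/GB range, using 4 TB = 4,000 GB.

capacity_gb = 4000
for price_per_gb in (0.85, 0.90):
    total = capacity_gb * price_per_gb
    print(f"${price_per_gb:.2f}/GB -> ${total:,.0f} for a 4 TB card")
# -> roughly $3,400 to $3,600
```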

Video rendering and other heavy processing tasks seem to be the most commonly identified use cases. Along with that, read-caching systems would be a good fit, especially considering the one-drive-write-per-day endurance rating. Although I did not get a chance to see it, Kingston had a demo running at their booth showing real-time rendering of 8K video.

I suppose these would offer phenomenal performance for cat pictures as well …

You can catch all of Kingston’s presentations on the Tech Field Day site.

Disclaimer: I was invited to participate in Tech Field Day Extra at VMworld 2017. Travel, accommodations, and most other costs were covered by my employer. Gestalt IT did provide lunch, which was a very yummy pasta and some lightly seasoned garlic bread (?). And coffee … Kingston did hand out one 64GB flash drive to each delegate. I was not asked to write the above; rather, it was written of my own accord.