
Veeam V9: First Impressions & 3 Awesome Features

After much anticipation, Veeam V9 was released earlier today. Normally I would hold off before installing a new upgrade; however, after using Veeam for years now, I know they have some great QA, and in this case the benefits of the new features outweigh the risk for my environment.

There are lots of places to find details about what is in Veeam 9 (including this blog), but I wanted to highlight a handful of the features that will have an immediate impact on my environment.

Veeam V9 Installer
The Veeam installer helps you not shoot yourself in the foot.

First off, let me say that the upgrade was very painless. All I had to do was start the Veeam One installer, reboot, start the Veeam V9 installer, reboot, and away I went. No dependencies, data migrations, or compliance issues – just a nice, straightforward process. One of the little things that I liked was the fact that the upgrade path is limited to the supported order. If you take a look at the image on the right, you can see that the Backup & Replication server and the Backup & Replication Console (yay, console!) are both grayed out and state that I need to upgrade another component first.

The upgrade process took about half an hour, and when I was done I started poking around for some of the new features.

First off is Direct NFS – this isn’t a backup job option that needs to be enabled per se (it is handled by your backup proxy), but rather a change to how VMs are processed. Previously when backing up from NFS, Veeam would talk to the hypervisor, which would read the blocks and pass them back to Veeam. Now Veeam talks directly to the primary storage, completely skipping the hypervisor. One less component in the mix should lead to faster read/write times to storage.
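To picture the change, here is a tiny Python sketch of the two data paths. It is purely conceptual – the hop counts and function names are my own, not anything from Veeam – but it shows why cutting the hypervisor out of the read path should help:

# Conceptual sketch only, not Veeam code. It models the old NFS path
# (proxy -> hypervisor -> datastore and back) versus Direct NFS, where the
# proxy mounts the NFS export and reads the blocks itself.

def read_via_hypervisor(blocks):
    """Old path: every block is relayed through the hypervisor."""
    hops_per_block = 2          # datastore -> hypervisor, hypervisor -> proxy
    return len(blocks) * hops_per_block

def read_direct_nfs(blocks):
    """V9 path: the proxy reads straight from the NFS datastore."""
    hops_per_block = 1          # datastore -> proxy
    return len(blocks) * hops_per_block

vm_blocks = range(10_000)       # pretend the VMDK is split into 10,000 blocks
print("hops via hypervisor:", read_via_hypervisor(vm_blocks))
print("hops via Direct NFS:", read_direct_nfs(vm_blocks))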

BitLooker – this is a great addition. Previously Veeam would back up a whole VM, essentially reading the VMDK file and backing that up. That’s great and all, but there is a lot of junk data in there. How many servers have a ‘temp’ folder filled with random stuff – do you really need to be backing that up? Similarly, do you need to back up the pagefile.sys file on all of your servers? Depending on how much RAM these VMs are allocated, some of those pagefiles can get quite large.

You can now exclude deleted files from your backups.

BitLooker allows you to exclude files and folders from your backup (e.g. exclude c:\temp). Further to that, it will allow you to exclude blocks that are marked as deleted. When a file is deleted in Windows, the space isn’t actually wiped clean – Windows just removes that file from the Master File Table, effectively forgetting about it and allowing future data to occupy the space. Unfortunately, because the data is still occupying space, it still gets backed up – BitLooker recognizes this and skips over those deleted blocks.
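Conceptually, you can think of it like the rough Python sketch below. This is not Veeam’s implementation – the block map, file map, and exclusion list are made-up stand-ins – but it shows the filtering idea: only back up blocks that are still allocated and don’t belong to an excluded file.

# Minimal sketch of the BitLooker idea, not Veeam's implementation.
# Assumption: we can see which blocks the file system still considers
# allocated (the MFT's view) and which file each allocated block belongs to.

disk_blocks = {i: f"data-{i}" for i in range(8)}       # every block still holds bytes
allocated   = {0, 1, 2, 5}                             # blocks the MFT still points at
file_map    = {2: r"c:\temp\scratch.tmp", 5: r"c:\pagefile.sys"}  # block -> owning file
excluded    = {r"c:\temp\scratch.tmp", r"c:\pagefile.sys"}        # admin-defined exclusions

def blocks_to_back_up():
    keep = []
    for block in disk_blocks:
        if block not in allocated:                     # deleted data: skip it
            continue
        if file_map.get(block) in excluded:            # excluded file/folder: skip it
            continue
        keep.append(block)
    return keep

print(blocks_to_back_up())    # -> [0, 1]; the rest is deleted or excluded data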

The last features that I see having an immediate impact in my environment are the maintenance options that were added. The first, dubbed corruption guard, adds the ability to schedule a health check on your backups. From what I gather this is almost like a checkdisk, but for your backup files. The process will scan through the backup file and look for errors. The really cool part is that if an error is found, Veeam will automatically attempt to fix the file by pulling the right storage blocks from production. It will be interesting to see what sort of overhead this adds to the job.
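In spirit, it is probably something like the Python sketch below. To be clear, this is my own guess at the mechanics, not Veeam’s code – the hashing scheme and the fetch_block_from_production() helper are invented for illustration.

# Rough sketch of a checksum-based health check, not Veeam's corruption guard.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A "backup file": block payloads plus the checksums recorded at backup time.
backup_blocks = {0: b"alpha", 1: b"brav0", 2: b"charlie"}    # block 1 has rotted on disk
recorded_hash = {0: sha256(b"alpha"), 1: sha256(b"bravo"), 2: sha256(b"charlie")}

def fetch_block_from_production(block_id):
    """Stand-in for re-reading the block from the live VM's storage."""
    return {0: b"alpha", 1: b"bravo", 2: b"charlie"}[block_id]

def health_check():
    for block_id, payload in backup_blocks.items():
        if sha256(payload) != recorded_hash[block_id]:
            print(f"block {block_id} failed its checksum, repairing from production")
            backup_blocks[block_id] = fetch_block_from_production(block_id)

health_check()    # finds and repairs block 1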

Scheduling routine maintenance.

Along with the corruption guard, there is now a feature for forever-incremental backups that will defragment and compact the backup files. But there is a little more to it than that. The process will actually go through your backups and remove ‘dead’ data – items such as deleted VMs that are past their retention point – and shuffle this stuff off into a separate archive file. I haven’t played with this yet, but I imagine that on top of trimming down your backup dataset you will also see an increase in restore performance due to the defragmentation.
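My mental model of that compact step looks roughly like the sketch below. Again, this is an assumption about the mechanics rather than Veeam’s actual on-disk format – the idea is simply to copy live blocks into a fresh file and shunt anything past retention into an archive.

# Conceptual sketch of compacting a forever-incremental backup file.
# The structure is invented for illustration; it is not Veeam's format.

backup_file = [
    {"vm": "web01",  "retained": True,  "data": b"..."},
    {"vm": "old-dc", "retained": False, "data": b"..."},   # deleted VM, past retention
    {"vm": "sql01",  "retained": True,  "data": b"..."},
]

def compact(blocks):
    """Copy live blocks into a new, contiguous file; move dead data to an archive."""
    compacted, archive = [], []
    for block in blocks:
        (compacted if block["retained"] else archive).append(block)
    return compacted, archive

new_backup, archive_file = compact(backup_file)
print(len(new_backup), "live blocks kept,", len(archive_file), "blocks archived")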

All in all I have only scratched the surface of V9 – I haven’t even looked at the replication side of things. My next steps are to let the backups run for a few days and then do some benchmarking. Right off the hop I want to see how backup times compare from v8 to v9, and after that I’ll be looking into optimization.

My initial impressions: v9 is off to a fantastic start.
