In the first episode of this two-part video series, you will learn the key questions you need to ask yourself before you start building a data backup & restoration strategy. Afterwards, I will walk you through some backup implementation techniques that you can deploy within your organization.

Whether you are running a business, managing IT infrastructure, leading a small development team, or have any setup that depends on data availability, you should find some value & ideas in this video.


Video Transcript

In a business, a lot of things that happen in the background are underappreciated, and backup is one of them. It is rarely asked for, but it is the first thing sought after when something bad happens to your data - whether it is a ransomware attack, accidental deletion, or a hardware failure.

In this episode, I’ll talk about what to consider while designing a data backup & restoration strategy. If you are in DevOps, are managing IT infrastructure, or are running a business that relies on data availability, then this video may be of some value.

••

Questions for Business/Operations

In an enterprise, a typical technology landscape has some sort of a Business Continuity and Disaster Recovery Plan. Backups are an integral component of that.

Common knowledge says you zip a folder and save it someplace safe - such as Google Drive - or, say, use Apple’s Time Machine, and you’re good. But as a business owner, to maintain Business Continuity and have a usable Disaster Recovery Plan, you first need to answer a few questions:

The first question is → What is the RPO - the “Recovery Point Objective” - of your organization? In case of a disaster, how much data loss is your organization willing to suffer - 3 days, 12 hours, or 1 hour? That will eventually determine the data backup frequency, the strategy, & the costs.

The second question is → Have you set the RTO - the “Recovery Time Objective”? That is, in case of an IT incident, how soon do you need to recover the data and get the systems running again?

Then, how many days’ worth of backups do you need? The answer might also depend on the agreements with your customers on data backup & storage requirements.

And, finally, do you need the data as a “Hot” backup - which means it is readily available for restoration - or can it be a “Cold” backup, which might mean it is written to a slower or cheaper storage medium for archival & long-term storage? Hot backups are the ones that will be put to use in case of an IT incident, and will contain the data that is essential to run the business. On the other hand, data that is kept for auditing/legal reasons can be considered a “Cold” backup. Just remember - Hot, Warm & Cold backups have different meanings depending on the context.

The answers to these questions will then be used by the IT team to build a data backup strategy, and provide you with a cost estimate.

••

Questions for IT Team

As part of an IT team that is responsible for Business Continuity and Disaster Recovery Planning, you need to get clarity from the stakeholders on the business objectives in terms of RPO, RTO, etc. You then also have to consider other parameters to define your backup strategy and plan capacity.

The first question you need to answer is → What type of backup? A full backup each time, or a full backup the first time with subsequent backups being incremental?

The next question is, how will you address the ever-growing storage requirement - do you have sufficient hardware and redundancy, or will you be using cloud storage?

Then, how much are you willing to spend? The expenses could include storage, data transfer, hardware, licensing and personnel costs.

What about backing up “databases” that are changing in real time? Or software that keeps data in “memory” rather than writing it to disk? How will you address those situations?
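
For the database case, one common approach is to let the database itself create a consistent point-in-time dump, rather than copying its live data files. Here is a minimal sketch, assuming a PostgreSQL database named appdb with placeholder paths and credentials - adapt it to whatever engine you actually run:

```bash
#!/usr/bin/env bash
# Hypothetical example: dump a PostgreSQL database to a dated, compressed file,
# so the backup is a consistent point-in-time copy rather than a copy of live
# data files. Database name, user and paths are placeholders.
set -euo pipefail

BACKUP_DIR="/var/backups/postgres"
STAMP="$(date +%Y%m%d-%H%M)"

mkdir -p "$BACKUP_DIR"

# pg_dump takes a consistent snapshot even while the database accepts writes
pg_dump --username=backup_user --format=custom appdb \
  > "$BACKUP_DIR/appdb-$STAMP.dump"

# Simple retention: keep only the newest 14 dumps
ls -1t "$BACKUP_DIR"/appdb-*.dump | tail -n +15 | xargs -r rm --
```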

The most important question would be → How will you ensure data security? Is encryption also required for data at rest - especially if the backups are transferred to a secondary location?
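
One rough sketch of covering the at-rest concern is to encrypt the archive before it ever leaves the server. The archive name and passphrase file below are placeholders, and the passphrase handling is deliberately simplified:

```bash
# Hypothetical sketch: encrypt a backup archive before shipping it offsite.
tar -czf /tmp/app-backup.tar.gz /srv/app/data

# Symmetric encryption with GPG; the passphrase is read from a root-only file
gpg --batch --pinentry-mode loopback --symmetric --cipher-algo AES256 \
    --passphrase-file /root/.backup-passphrase \
    --output /tmp/app-backup.tar.gz.gpg /tmp/app-backup.tar.gz

rm /tmp/app-backup.tar.gz   # keep only the encrypted copy for transfer
```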

Finally, how will you determine whether you can rely on the backups you’ve created? Do you have a sandbox environment where you can restore and test the integrity of the backups you are creating?
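
Even a simple restore drill can be scripted. The sketch below assumes the backup job also writes a .sha256 checksum file next to each archive; the paths are placeholders:

```bash
#!/usr/bin/env bash
# Hypothetical restore drill: unpack the newest archive into a scratch area
# and verify it against the checksum recorded at backup time.
set -euo pipefail

LATEST="$(ls -1t /var/backups/files/*.tar.gz | head -n 1)"
SANDBOX="$(mktemp -d /tmp/restore-test.XXXXXX)"

# Integrity check against the stored checksum
sha256sum --check "${LATEST}.sha256"

tar -xzf "$LATEST" -C "$SANDBOX"
echo "Restored $(find "$SANDBOX" -type f | wc -l) files into $SANDBOX for inspection"
```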

When you devise a data backup strategy, you first have to identify your audience and get a sense of how important the data is to them. Realise that unless it is just files - such as PowerPoint presentations or Excel spreadsheets - they may not always know where all their data is kept. If they’re using a CRM application, it is probably maintaining its data in a different location; if they have a database or Docker containers, it might be difficult to identify the actual path where the data resides on the system. It is your duty to guide them.

••

Backup Techniques

For individual users, it’s best to have at least some backup solution in place - Apple macOS has Time Machine, Microsoft Windows has its built-in Backup, and Linux has a few equivalents such as Deja Dup & Cronopete. These backups can be moved to a remote location or to a portable hard drive periodically.

But in an organization that has multiple servers running alongside user machines, there is a need for a more centralized and robust backup mechanism.

For that, the easiest next step is to move towards Full & Incremental backups. The backups are done either manually or automatically at predefined intervals. They are easy to implement, especially in smaller organizations, though they require monitoring, and the IT team needs to work with the business team to identify the frequency and schedule of backups and to set expectations on restoration and data availability.
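
As an illustration, here is a minimal sketch of such a scheme using GNU tar’s incremental mode driven by cron - a weekly full backup plus daily incrementals. The script name, paths and schedule are assumptions you would adapt to your own setup:

```bash
#!/usr/bin/env bash
# Hypothetical backup.sh: weekly full backup plus daily incrementals using
# GNU tar's --listed-incremental snapshot file. Paths are placeholders.
# Example cron entries (assumed):
#   0 1 * * 0   root  /usr/local/bin/backup.sh full
#   0 1 * * 1-6 root  /usr/local/bin/backup.sh incremental
set -euo pipefail

MODE="${1:-incremental}"        # "full" or "incremental"
SRC="/srv/data"
DEST="/var/backups/files"
SNAR="$DEST/state.snar"         # tar's record of what has already been backed up
STAMP="$(date +%Y%m%d)"

mkdir -p "$DEST"

# Deleting the snapshot file makes the next run a full backup
if [ "$MODE" = "full" ]; then
  rm -f "$SNAR"
fi

tar --create --gzip \
    --listed-incremental="$SNAR" \
    --file="$DEST/backup-$MODE-$STAMP.tar.gz" \
    "$SRC"
```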

There are off-the-shelf solutions that help you do that. They are expensive, can scale enterprise-wide, and take the logistical load off your shoulders. They can do backup, scheduling & recovery at an enterprise level, including special scenarios such as database servers, Microsoft Exchange or Active Directory. For large organizations this works out quite well, since there is software support and usable graphical interfaces to manage the backups & scheduling. If you are enterprising enough, you could do the same with free and open-source tools too. The only disadvantage is that, unlike the mostly pre-defined methodology offered by enterprise backup solutions, you’ll be defining the backup strategy end-to-end, and you’ll be on your own in identifying and addressing the points of failure.

The gold standard for taking backups is CDP - Continuous Data Protection. In a CDP-type setup, every “save” is backed up, effectively creating multiple versions of a single file. Instead of synchronizing file-level differences, it saves the underlying “block-level” or byte-level differences - that means if a few bytes of a 100 MB file are modified, then only those modified bytes are synchronized, not the entire file, thus saving storage and bandwidth. Since it is near real time - ignoring the network transfer delays, of course - it can theoretically offer an RPO - a Recovery Point Objective - of zero. That is, in case of a major IT incident, the chances of data loss are zero. Practical implementations, though, shift between Continuous & Near Continuous.
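
True CDP needs dedicated tooling, but you can get a feel for delta-level synchronization with rsync, which compares the source against an existing copy at the destination and sends only the changed portions of large files. A rough illustration, with placeholder host and paths:

```bash
# Not real CDP, but an illustration of delta-level transfer: rsync sends only
# the changed blocks of a large file to an existing copy on the backup host.
rsync --archive --inplace --stats \
      /srv/data/bigfile.db backup-host:/backups/bigfile.db
# In the --stats output, "Literal data" (bytes actually sent) will be far
# smaller than "Total file size" when only a few blocks have changed.
```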

For cloud environments such as AWS, and for virtual machines - such as VMware, Xen or KVM - there are options for creating “snapshots” which back up the entire setup periodically. You may utilise them too.
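
For example, on AWS a periodic EBS snapshot can be triggered with the AWS CLI - the volume ID and tag values below are placeholders for your own environment:

```bash
# Hypothetical example: snapshot an AWS EBS volume with the AWS CLI.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "nightly snapshot $(date +%Y-%m-%d)" \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=purpose,Value=backup}]'
```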

I work with a number of small and medium enterprises, and I have noticed that a daily incremental backup is good enough for a majority of use cases. Especially for software development teams or DevOps teams who are managing a select set of servers, having control over backups provides peace of mind, and a good night’s sleep.

Since I manage a large set of Linux servers, I always gravitate towards rsync, which allows me to create differential backups without any prohibitive costs and gives me more control over my setup. In the next episode, I will talk about some open-source or free tools, including rsync, that you can smartly stitch together to create an effective data backup strategy, as well as prepare an environment where you can test your ability to restore data. Stay tuned!
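
Until then, here is a small taste of the rsync approach - dated backups where unchanged files are hard-linked against the previous run, so each day looks like a full copy while only changed files consume new space. The paths below are placeholders:

```bash
# Hypothetical dated rsync backups with hard-linking against the previous run.
TODAY="$(date +%Y-%m-%d)"
rsync --archive --delete \
      --link-dest=/backups/latest \
      /srv/data/ "/backups/$TODAY/"
ln -sfn "/backups/$TODAY" /backups/latest   # point "latest" at the new backup
```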

••