
What, where, when and how to Backup

posted in Product ● 15 Jan 2015

The following article discusses data backup, focusing on what data should be protected as well as when, where and how it should be protected. Its purpose is to provoke thought about how organisations should best go about backing up data, outlining the advantages and disadvantages of the various approaches that can be taken, using real-world examples.

We all know that we should back up our data; that much is clear. Organisations, regardless of industry, create, alter and utilise data daily, in an ever-increasing number of ways and on a larger scale than ever before. Data is the lifeblood of organisations: it represents countless hours of productivity, transactions, collaborations and more. Choosing what to protect should therefore be relatively straightforward. Surely we want to back up everything? The answer, however, very much depends on your approach to recovering from any given incident of data loss and on whether your data holds varying degrees of importance within your organisation.

Granular File and Folder Backup

For example, if you suffered the total loss of a file server, how would you go about recovering it? You might choose to replace the hardware (if physical) or create a new virtual machine (if virtual) and then restore the files and folders after rebuilding the base operating system (either from physical media or a system image). If this approach were taken, you would likely back up the user data and nothing else. In this example, we can see the “how” determines the “what” and, in doing so, focuses our backup strategy on what is important within our given scenario.
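To make the idea concrete, here is a minimal sketch of file-and-folder backup using only Python's standard library. The function name and the timestamped directory layout are illustrative assumptions, not any particular vendor's implementation:

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def backup_folders(sources, backup_root):
    """Copy each source folder into a timestamped backup set under backup_root."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest_root = Path(backup_root) / stamp
    for src in map(Path, sources):
        # Mirror the folder and its metadata under the new backup set.
        shutil.copytree(src, dest_root / src.name, dirs_exist_ok=True)
    return dest_root
```

Each run produces a separate, timestamped backup set, so individual files can later be restored from any point at which a backup was taken — exactly the granular recovery this approach is suited to.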

If we examine another scenario, where the total loss of an application server has occurred, you could follow a similar approach to that outlined above. This is fine if the rebuild time is considered acceptable, but what if it is not? The acceptable time taken to recover data is commonly referred to as the recovery time objective (RTO) and is a measure used by organisations to determine worst-case recovery windows. Viewed another way, it is a measure of how long an organisation can afford to be without access to its data or without “business as usual” capability. The ability to satisfy recovery time objectives will largely depend on the backup approach deployed. In scenarios where a total loss of any given server has occurred, the aforementioned backup approach will likely result in a more cumbersome and possibly longer recovery process. For this reason, some would argue that the above approach, whilst suited to granular recovery of data, is not well suited to complete server recovery.
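The RTO comparison above can be sketched as simple arithmetic. The step names and durations below are hypothetical estimates, invented purely to illustrate how a rebuild-and-restore plan is tested against an RTO:

```python
def meets_rto(recovery_steps, rto_minutes):
    """Sum the estimated duration of each recovery step and compare it to the RTO."""
    total_minutes = sum(recovery_steps.values())
    return total_minutes <= rto_minutes, total_minutes

# Hypothetical estimates (in minutes) for rebuilding a lost server from scratch.
rebuild_and_restore = {
    "replace hardware / create VM": 120,
    "reinstall base operating system": 90,
    "restore files and folders": 240,
}
ok, total = meets_rto(rebuild_and_restore, rto_minutes=240)
# 450 minutes of estimated work against a 240-minute RTO: the objective is missed.
```

When the summed rebuild time exceeds the RTO, as here, that is the signal to consider a faster recovery approach such as the image-based backup discussed next.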

Image-based Backup

In such cases, an alternative approach can be taken involving the use of image backups of devices, facilitating faster full system recoveries. The recovery is faster because all of the required data is contained in the image, which can be “spun up” when and where required. Depending on whether the device is physical or virtual, you may see this approach referred to as Bare Metal Recovery (BMR, for physical recovery) or Virtual Disaster Recovery (VDR, for virtualised recovery).

Application-level Backup

In addition to file and folder backup and image-based backup for full systems, you may wish to consider application-level backup for frequent backup of critical applications, as this provides the ability to perform point-in-time recovery of the given application. Many backup technologies feature “plugins” that effectively enable native backup for specific applications such as Microsoft SQL, Exchange or SharePoint. Using granular file and folder backup and/or application backup in conjunction with image-level backup provides the best of both worlds, as it allows for the most appropriate recovery option in any given scenario.
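The “best of both worlds” pairing can be expressed as a simple decision rule. The scenario names and option labels below are illustrative assumptions, chosen to mirror the three recovery levels described in this article:

```python
def choose_recovery(scenario):
    """Map a data-loss scenario to the most appropriate recovery option (illustrative)."""
    options = {
        "deleted_file": "granular file and folder restore",
        "corrupt_database": "application-level point-in-time restore",
        "total_server_loss": "image-level recovery (BMR or VDR)",
    }
    try:
        return options[scenario]
    except KeyError:
        raise ValueError(f"unknown scenario: {scenario}") from None
```

A solution that supports all three levels lets an operator pick the lightest-weight option that satisfies the recovery time objective for each incident, rather than forcing a full image restore for a single deleted file.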

The ability to perform this optimal approach will largely depend on the backup solution implemented. Ideally, the solution should be simple, highly secure and automated, whilst supporting the three levels of recovery (files and folders, native application support and image level protection). It should also have the ability to facilitate the recovery of data to any location in order to support the need to invoke disaster recovery.

The table below summarises the recovery options and the ideal use cases.

Backup approach                 | Ideal use case
Granular file and folder backup | Recovering individual files and folders
Image-based backup (BMR/VDR)    | Full recovery of a physical or virtual server
Application-level backup        | Point-in-time recovery of critical applications
