The following article discusses data backup, focusing on what data should be protected as well as when, where and how it should be protected. Its purpose is to provoke thought about how organisations should best go about backing up data, outlining the advantages and disadvantages of the various approaches that can be taken, using real-world examples.
We all know that we should back up our data; that much is clear. Organisations, regardless of industry, create, alter and utilise data daily, in an ever-increasing number of ways and on a larger scale than ever before. Data is the lifeblood of organisations. It represents countless hours of productivity, transactions, collaborations and more. Choosing what to protect should therefore be relatively straightforward, then? Surely we want to back up everything? The answer, however, very much depends on your approach to recovering from any given incident of data loss, and on whether your data holds varying degrees of importance within your organisation.
Granular File and Folder Backup
For example, if you suffered the total loss of a file server, how would you go about recovering it? You may choose to replace the hardware (if physical) or create a new virtual machine (if virtual) and then restore the files and folders after having rebuilt the base operating system (either from physical media or a system image). If this approach were taken, you would likely back up the user data and nothing else. In this example, we can see that the "how" determines the "what", and in doing so focuses our backup strategy on what is important within our given scenario.
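The granular file-and-folder approach above can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation: the source directories and destination path are hypothetical, and a real solution would add scheduling, retention and verification.

```python
import tarfile
import time
from pathlib import Path

def backup_user_data(source_dirs, dest_dir):
    """Create a timestamped .tar.gz archive of the given directories.

    A minimal sketch of granular file-and-folder backup: only user
    data is captured, not the operating system or applications.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"user-data-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for d in source_dirs:
            # arcname keeps paths relative, so restores land predictably
            tar.add(d, arcname=Path(d).name)
    return archive
```

Restoring from such an archive recovers the data, but only after the server itself has been rebuilt by other means.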
If we examine another scenario, where the total loss of an application server has occurred, you could follow a similar approach to that outlined above. This is fine if the rebuild time is considered acceptable, but what if it is not? The acceptable time taken to recover data is commonly referred to as your recovery time objective (RTO) and is a measure used by organisations to determine worst-case recovery windows. Viewed another way, it is a measure of how long an organisation can afford to be without access to its data or without "business as usual" capability. The ability to satisfy recovery time objectives will largely depend on the backup approach being deployed. In scenarios where a total loss of any given server has occurred, it is likely that the aforementioned backup approach will result in a more cumbersome and possibly longer recovery process. For this reason, some would argue that the above approach, whilst suited to granular recovery of data, is not well suited to complete server recovery.
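The RTO comparison can be made concrete with a simple calculation. The step durations below are purely hypothetical figures for illustration; in practice they would come from timed recovery tests.

```python
from datetime import timedelta

def recovery_time(steps, rto):
    """Sum the serial steps of a recovery plan and compare against the RTO.

    Returns (total_time, within_rto). Durations are illustrative only.
    """
    total = sum(steps.values(), timedelta())
    return total, total <= rto

# Rebuild-then-restore: the steps run one after another, so times add up.
rebuild_approach = {
    "provision hardware/VM": timedelta(hours=2),
    "reinstall OS and patches": timedelta(hours=3),
    "reinstall application": timedelta(hours=2),
    "restore data": timedelta(hours=1),
}
total, ok = recovery_time(rebuild_approach, rto=timedelta(hours=4))
# total comes to 8 hours, so a 4-hour RTO is missed
```

An 8-hour rebuild against a 4-hour RTO makes the case for a faster, image-based recovery path.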
In such cases, an alternative approach can be taken involving the use of image backups of devices, facilitating faster full system recoveries. The recovery is faster because all of the required data is contained in the image, which can be “spun up” when and where required. Depending on whether the device is physical or virtual, you may see this approach referred to as Bare Metal Recovery (BMR, for physical recovery) or Virtual Disaster Recovery (VDR, for virtualised recovery).
Application-Level Backup
In addition to file and folder backup and image-based backup for full systems, you may wish to consider application-level backup for frequent backup of critical applications, as this will provide the ability to perform point-in-time recovery of the given application. Many backup technologies feature "plugins" that effectively enable native backup for specific applications such as Microsoft SQL Server, Exchange or SharePoint. Using granular file and folder backup and/or application backup in conjunction with image-level backup provides the best of both worlds, as it allows for the most appropriate recovery option in any given scenario.
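The essence of point-in-time recovery is choosing the most recent backup taken at or before the moment you want to roll back to. A minimal sketch, with hypothetical backup timestamps:

```python
from bisect import bisect_right
from datetime import datetime

def latest_backup_before(backup_times, target):
    """Return the most recent backup taken at or before `target`,
    or None if no such backup exists.

    `backup_times` must be sorted in ascending order.
    """
    i = bisect_right(backup_times, target)
    return backup_times[i - 1] if i else None

# Hypothetical six-hourly application backups on a given day
backups = [datetime(2022, 4, 28, h) for h in (0, 6, 12, 18)]
chosen = latest_backup_before(backups, datetime(2022, 4, 28, 14, 30))
# chosen is the 12:00 backup, the latest one before 14:30
```

The more frequently the application is backed up, the smaller the gap between the chosen point and the moment of failure, i.e. the less data is lost.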
The ability to perform this optimal approach will largely depend on the backup solution implemented. Ideally, the solution should be simple, highly secure and automated, whilst supporting the three levels of recovery (files and folders, native application support and image level protection). It should also have the ability to facilitate the recovery of data to any location in order to support the need to invoke disaster recovery.
The table below summarises the recovery options and the ideal use cases.

Recovery option                      Ideal use case
Granular file and folder backup      Recovery of individual files or folders
Application-level backup             Point-in-time recovery of critical applications
Image-level backup (BMR/VDR)         Full recovery of physical or virtual servers