The Covid-19 pandemic has forced many managed service providers to seek faster, easier and more scalable ways to manage their customers’ data.
When backup processes are highly complex and require manual intervention, the opportunity for things to go wrong increases dramatically.
For a real-world example of this, we needn’t look very far. On 31 January 2017, GitLab (GL) suffered a data loss incident when a systems administrator accidentally deleted a live database. The database was around 300GB in size, and by the time the deletion was cancelled, only around 4.5GB of data remained.
GitLab had taken the decision just months before to move away from its reliance on cloud storage and to instead build and operate its own storage platform. The stated reason for the decision was that the move would “make GL more efficient, consistent, and reliable as we will have more ownership of the entire infrastructure”.
GL had five separate backup and replication tools in place to protect against data loss on its storage platform, so when the process began to restore the accidentally deleted data, a simple recovery seemed likely. After a short while, however, it was established that there was only one potentially viable backup of the data, taken some six hours earlier.
As the problem was discovered and steps were being taken to resolve it and restore data, GL updated an official incident report.
Part of this report reads:
“So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place.”
In total, 5,037 projects were lost: 4,613 regular projects, 74 forks and 350 imports, along with user data for 707 people and in the region of 5,000 comments.
The data was deleted by accident, and GL must have believed that, with five separate data protection solutions in place, its data was protected and could be restored if needed.
The real failure, then, came about because of a complex backup provision that required a great deal of management and human intervention. It seems unlikely that a robust business continuity plan or disaster recovery plan had been put in place, and if one had, it certainly had not been tested for reliability. In fact, even the simple step of testing the validity of backups appears to have been overlooked.
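The lesson about untested backups can be made concrete. As a minimal sketch (not Redstor's or GitLab's actual tooling; the paths, threshold and function names are illustrative assumptions), a scheduled check like the following would have flagged a missing, empty, stale or corrupt backup long before a restore was ever needed:

```python
# Hypothetical sketch of an automated backup-validity check.
# Assumes backups are plain files on disk; all names are illustrative.
import hashlib
import os
import time

MAX_BACKUP_AGE_SECS = 6 * 60 * 60  # alert if the newest backup is over 6 hours old


def sha256(path):
    """Stream a file through SHA-256 so large backups don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_backup(source_path, backup_path):
    """Return a list of problems found; an empty list means the backup passed."""
    problems = []
    if not os.path.exists(backup_path):
        return ["backup file is missing"]
    if os.path.getsize(backup_path) == 0:
        problems.append("backup file is empty")
    if time.time() - os.path.getmtime(backup_path) > MAX_BACKUP_AGE_SECS:
        problems.append("backup is stale")
    if sha256(source_path) != sha256(backup_path):
        problems.append("backup does not match source")
    return problems
```

Run on a schedule and wired into alerting, even a simple check of this kind turns "we assume the backups work" into a verified fact, which is exactly the gap the GL incident exposed.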
Having detailed recovery plans may seem onerous for a small organisation or a start-up. While they do take a little time to do properly, they are necessary. For a company with an established customer base of thousands and around $25m of investment as of 2016, they are absolutely vital.
Redstor’s automated cloud backup service protects data on all systems, regardless of where they reside. Importantly, the service requires minimal management and provides detailed success and failure reports, enabling management by exception.
The centralised reporting functionality within Redstor’s backup technology allows organisations to audit and control their backed-up data with minimal management overhead, while the ability to back up and restore data from a range of sources, including virtualised environments, enables management via a single pane of glass.
Redstor develops the technology and owns the intellectual property that underpins its award-winning backup service. Further to this, Redstor owns and manages the hardware and infrastructure that the service runs on and provides the support for the service. In short, if something were to go wrong, there would be no reliance on anyone outside Redstor to fix the issue.
With Redstor’s InstantData feature, data can be accessed instantly via a range of recovery options. If disaster strikes, data can also be delivered to site or to a disaster recovery location via hardware of your choice, ready for access via the recovery console.
By supporting backup of the entirety of an organisation’s data, across virtual and physical machines, laptops, desktops, servers and cloud applications, Redstor can prevent data loss and help organisations meet industry regulations and compliance requirements around data security.