Cloud Backup



Simplify your backup provision with a single, automated, offsite backup solution - protecting data regardless of where it resides.


posted in Backup & Recovery, Cyber-Security ● 17 Feb 2017

When backup processes are highly complex and require manual intervention, the opportunity for things to go wrong increases dramatically.

For a real-world example of this, we needn’t look very far. On 31 January 2017, GitLab (GL) suffered a data loss incident when a systems administrator accidentally deleted a live database. The database was around 300GB in size, and by the time the deletion was cancelled, only 4.5GB of data remained.

GitLab had taken the decision just months before to move away from its reliance on cloud storage and to instead build and operate its own storage platform. The stated reason for the decision was that the move would “make GL more efficient, consistent, and reliable as we will have more ownership of the entire infrastructure”.

GL had five separate backup and replication tools in place to protect against data loss on its storage platform, so when the process began to restore the accidentally deleted data, a simple recovery seemed likely. After a short while, however, it was established that there was only one potentially viable backup of the data, which had been taken six hours earlier.

As the problem was discovered and steps were being taken to resolve it and restore data, GL updated an official incident report. Part of this report reads:

“So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place.”

In total, 5,037 projects were lost: 4,613 regular projects, 74 forks and 350 imports, along with user data for 707 people and in the region of 5,000 comments.

Data was deleted by accident, and GL must have believed that, with five separate data protection solutions in place, its data was protected and could be restored if needed.

The real failure, then, came about because of a complex backup provision that required a lot of management and human intervention. It seems unlikely that a robust business continuity plan or disaster recovery plan had been put in place, and if one had, it certainly hadn’t been tested for reliability. In fact, even testing the validity of backups appears to have been overlooked.
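Testing the validity of backups is the kind of check that can be automated rather than left to memory. As a minimal illustrative sketch (not GitLab’s or Redstor’s actual tooling), a scheduled job might restore a backup into a scratch directory and compare checksums against the live data, flagging any file that is missing or differs:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_restore(source_dir: Path, restored_dir: Path) -> list:
    """Compare every file under source_dir with its counterpart in
    restored_dir; return the relative paths that are missing or differ."""
    failures = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file() or sha256_of(src) != sha256_of(restored):
            failures.append(str(rel))
    return failures
```

A test restore that returns a non-empty failure list is a backup that would not have saved you; running a check like this on a schedule turns “we have five backup tools” into “we have a backup we know restores”.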

Having detailed recovery plans may seem onerous for a small organisation or a start-up business. While they do take a little time to do properly, they are necessary. For a company with an established customer base of thousands and investment of around $25m in 2016, they are absolutely vital.

Redstor’s automated cloud backup service protects data on all systems, regardless of where they reside. Importantly, the service requires minimal management and provides detailed success and failure reports, enabling management by exception.
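Management by exception means acting only on the jobs that report a problem, rather than reading every success line. As a hypothetical sketch of the idea (the field names below are invented for illustration and are not Redstor’s actual report schema), a report feed can be reduced to just the items needing attention:

```python
def exceptions_only(reports):
    """Management by exception: keep only the jobs that did not succeed."""
    return [r for r in reports if r["status"] != "success"]


# Illustrative report records; a real feed would come from the backup service.
jobs = [
    {"host": "web-01", "job": "nightly", "status": "success"},
    {"host": "db-01", "job": "nightly", "status": "failed"},
    {"host": "app-02", "job": "nightly", "status": "success"},
]
```

Filtering `jobs` this way leaves only the `db-01` failure, so an administrator reviews one record instead of the whole report.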

The centralized reporting functionality within Redstor Backup technology allows organisations to audit and control their backed up data with minimal management overhead, while the ability to backup and restore data from a range of sources including virtualized environments enables management via a single pane of glass.

Redstor develops the technology and owns the intellectual property that underpins its award-winning backup service. Further to this, Redstor owns and manages the hardware and infrastructure that the service runs on and provides the support for the service. In short, if something were to go wrong, there would be no reliance on anyone outside of Redstor to fix the issue.

With Redstor’s InstantData feature, data can be accessed instantly via a range of recovery options:

  • Instant Data Temporary – gives you access to your backup data as if it were a local drive in Windows Explorer.
  • Instant Data Permanent – gives you immediate access to recovered files on your local drive, presenting files instantly and pulling back their component blocks on demand.
  • FSR – automates the recovery of your physical or virtual servers as virtual disks in Hyper-V or VMware format, taking all the complexity out of a full system restore.

If disaster strikes, data can also be delivered to site or to a disaster recovery location via hardware of your choice, ready for access via the recovery console.

By backing up the entirety of an organisation’s data, including virtual and physical machines, laptops, desktops, servers and cloud applications, Redstor can prevent data loss and help meet industry regulations and compliance requirements around data security.
