Data tiering refers to the process of assigning values (tiers) to different data sets, depending on their stage in the data lifecycle. Once tiered, data can be marked for clean-up, deletion or archiving in line with internal policies.
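As a simple illustration, a tiering rule might assign data to a hot, warm or cold tier based on how long ago it was last accessed. The sketch below uses hypothetical thresholds; in practice the values would come from an organization's own internal policies:

```python
from datetime import timedelta

# Hypothetical policy thresholds -- real values would come from internal policy.
HOT_MAX = timedelta(days=30)    # actively used data stays on primary storage
WARM_MAX = timedelta(days=365)  # infrequently used data moves to cheaper storage

def tier_for(age_since_last_access: timedelta) -> str:
    """Assign a lifecycle tier based on time since the data was last accessed."""
    if age_since_last_access <= HOT_MAX:
        return "hot"
    if age_since_last_access <= WARM_MAX:
        return "warm"
    return "cold"  # candidate for archiving, deletion or clean-up
```

For example, `tier_for(timedelta(days=10))` returns `"hot"`, while data untouched for over a year falls into the `"cold"` tier and can be flagged for archiving.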
While implementing data tiering practices, organizations may also gain a better understanding of what data they have and where structured and unstructured data resides within an IT environment. Estimates vary, but unstructured data is thought to make up as much as 80% of an organization's data; being able to draw more value from this data can increase customer insight and revenue. Being unable to locate data can also expose an organization to liability in the event of an audit.
As mentioned, data volumes continue to grow, and the challenges associated with this can be costly to organizations of all sizes. As data grows, organizations must ensure systems are in place to manage and track new data through its lifecycle. Primary storage must remain available, and systems such as backup and disaster recovery also need to scale, both commercially and technically, to cope.
Provisioning primary storage increases capital expenditure, and with companies such as Microsoft indicating that UK costs could rise by up to 22% in 2017, this is a concerning outlook for any organization predicting growth with little handle on how, or why, its data is likely to grow.
With costs increasing, making efficiency savings is a logical step, yet resources are often used unnecessarily. A study by Dell states that nearly 80% of data goes unused after the first 90 days of its creation; this data will more than likely remain on expensive primary storage arrays, backed up to the cloud and replicated to a DR site, for years to come.
Data tiering, and subsequently archiving or purging data from primary systems, frees up space; at the very least, it allows an organization to use its current storage infrastructure for longer, or removes the need to spend unplanned CAPEX budget on upgrades. It also allows policies around the data lifecycle to be fulfilled, meaning that backup and retention policies can be more easily adhered to. By reducing the volume of data held on primary storage platforms, systems can perform better, and hot or active data benefits from the faster speeds those systems offer.
Using an automated tiering solution allows system administrators to set policies and rules that determine which tier data should sit in. An automated system saves IT staff the time of tiering data manually, freeing them to concentrate on more business-critical tasks.
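As a rough sketch of how such policy-driven automation might work, the example below scans a directory tree, applies age-based rules, and recommends an action for each file. The rules, thresholds and action names are all illustrative assumptions, not any specific product's behaviour:

```python
import time
from pathlib import Path

# Illustrative policy: (max age in days, recommended action), checked in order.
RULES = [
    (90, "keep on primary"),  # under 90 days since last modified: hot data
    (365, "archive"),         # 90-365 days: move to low-cost storage
]
DEFAULT_ACTION = "review for deletion"  # older than all rules above

def recommend_actions(root: str) -> dict[str, str]:
    """Walk a directory tree and map each file path to a policy action."""
    now = time.time()
    actions = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        age_days = (now - path.stat().st_mtime) / 86400
        for max_age, action in RULES:
            if age_days <= max_age:
                actions[str(path)] = action
                break
        else:  # no rule matched: file is older than every threshold
            actions[str(path)] = DEFAULT_ACTION
    return actions
```

A real tiering product would act on these recommendations automatically (moving or deleting data), but the core idea is the same: administrator-defined rules evaluated against file metadata, with no manual classification required.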
With data tiered, organizations can look to archive data and make use of cost-effective public cloud storage or low-cost on-premise storage, e.g. dense disk arrays.
Data tiering has direct financial benefits for an organization too. By freeing up space on high-performance primary storage, capital expenditure is optimized and data processes can fully benefit. Migrating data to low-cost on-premise storage or cloud storage platforms will also reduce operational expenses.