Data tiering refers to the process of assigning values (tiers) to different data sets, depending on their stage in the data lifecycle. Once tiered, data can be marked for clean-up, deletion or archiving in line with internal policies.
While implementing data tiering practices, organisations may also gain a further understanding of what data they hold and where structured and unstructured data resides within an IT environment. Figures vary, but unstructured data has been estimated to make up as much as 80% of some organisations' data; drawing more value from this data can increase customer insight and revenue. Conversely, not being able to locate data can expose an organisation to liability in the event of an audit.
As mentioned, data continues to grow and the challenges associated with this can be costly to organisations of all sizes. As data grows, organisations must ensure systems are in place to manage and track new data through its lifecycle. Primary storage provisions need to be available and systems such as backup and disaster recovery also need to be able to scale commercially and technically to cope.
Provisioning primary storage leads to an increase in capital expenditure, and with companies such as Microsoft indicating that UK prices could rise by up to 22% in 2017, this is a concerning outlook for any organisation predicting growth with little handle on how, or why, its data is likely to grow.
With costs increasing, making efficiency savings is a logical step, yet resources are currently being used unnecessarily. A study by Dell found that nearly 80% of data goes unused after the first 90 days of its creation; this data will more than likely sit on expensive primary storage arrays, backed up to the cloud and replicated to a DR site, for years to come.
Data tiering, and subsequently archiving or purging data from primary systems, frees up space immediately; if nothing else, it allows an organisation to use its current storage infrastructure for longer, or negates the need to spend unplanned CAPEX on upgrades. It also helps fulfil policies around the lifecycle of data, meaning that backup and retention policies can be more easily adhered to. With less data held on primary storage platforms, those systems can perform better, and hot or active data benefits from their higher speed and power.
An automated tiering solution allows system administrators to set policies and rules that determine which tier each data set should sit in. By automating this process rather than classifying data manually, IT staff save time and can concentrate on more business-critical tasks.
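To make the idea of policy-driven tiering concrete, here is a minimal sketch of an age-based tiering rule in Python. The tier names and the 30/90-day thresholds are illustrative assumptions, not taken from any specific product; real solutions typically weigh access frequency, data type and business rules as well.

```python
from datetime import datetime, timedelta

# Hypothetical tier thresholds -- names and cut-offs are illustrative only.
TIER_RULES = [
    ("hot", timedelta(days=30)),    # accessed within the last 30 days
    ("warm", timedelta(days=90)),   # accessed within the last 90 days
]
COLD_TIER = "cold"                  # anything older: archive/purge candidate


def assign_tier(last_accessed, now=None):
    """Return the tier a data set belongs in, based on its last access time."""
    now = now or datetime.now()
    age = now - last_accessed
    for tier, limit in TIER_RULES:
        if age <= limit:
            return tier
    return COLD_TIER
```

For example, a file last touched 60 days ago would land in the "warm" tier, while one untouched for a year would be flagged "cold" and so become a candidate for archiving to cheaper storage.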
With data tiered, organisations can archive it to cost-effective public cloud or low-cost on-premise storage, e.g. dense disk storage.
Data tiering has direct financial benefits for an organisation too. By freeing up space on high-performance primary storage, capital expenditure is optimised and data processes can benefit fully. Migrating data to low-cost on-premise storage or cloud storage platforms also reduces operational expenses.
Watch our product demos to find out more about our solution.