Internationally, 2.5 quintillion bytes of data are produced every day. It’s hard to wrap your head around that figure, right? And how about this: 90% of the world’s data was generated in the last two years. When people started buzzing about “big data” in the early 2000s, they were only seeing the tip of the iceberg.
Today, however, the term “big data” is so pedestrian that it’s difficult to imagine data which isn’t big. Small data does exist, by the way, and its definition, according to Wikipedia, is “… data that is small enough in size for human comprehension”. It makes one ponder the irony: we produce data sets so large and complex that we, as humans, struggle to comprehend them.
The exponential growth of data generation, in both how much is produced and how long it is retained, can be attributed to the rise in users and activity on social networks, e-commerce platforms, and other applications that rely on gathering and storing personal information. It is a by-product of users’ digital interaction. In 2016, Facebook had an estimated 1.6 billion daily users, with approximately 243,000 images uploaded every minute. That is a lot of data (usernames, passwords, photos, “likes” per photo, etc.) to compute and store.
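To put those figures in perspective, here is a quick back-of-envelope calculation using only the numbers quoted above (the rates themselves are the article's; the conversion factors are standard):

```python
# Scale check for the figures quoted above.
UPLOADS_PER_MINUTE = 243_000        # Facebook image uploads per minute (per the article)
MINUTES_PER_DAY = 60 * 24

uploads_per_day = UPLOADS_PER_MINUTE * MINUTES_PER_DAY
print(f"{uploads_per_day:,} images uploaded per day")  # 349,920,000

BYTES_PER_DAY = 2.5e18              # "2.5 quintillion bytes" produced daily
EXABYTE = 1e18                      # 1 exabyte = 10^18 bytes
print(f"{BYTES_PER_DAY / EXABYTE} exabytes of data per day")  # 2.5
```

At roughly 350 million images a day from a single application, it is easy to see how daily global output reaches the exabyte scale.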
This velocity has driven the development of a whole ecosystem of tools to better accommodate, interpret and comprehend big data (an effort which, ironically, also produces more data). The software, hardware and skills required to manage data at this scale are quite phenomenal if you think about it. Where does it all go?
Cloud computing is what led to “big data” losing its initial appeal and distinctness as a term. Scalable cloud storage and data growth developed in parallel: the bigger the cloud, the bigger the data. The cloud also made it possible to store all data in repositories known as data lakes, or to provide real-time access to data, which encouraged interaction and kept data from becoming stagnant. These are the kinds of big data considerations that have influenced the evolution of cloud backup and storage and will continue to do so for the foreseeable future.
That leaves us with big data. And yes, it is still a thing. Data, and the collection and analysis thereof, will continue to grow, and we will need to keep accommodating that growth. Perhaps in the year 2032, when our successors look back on our understanding of big data, it will seem as though we were only aware of the tip of the iceberg today.