Who isn’t looking for faster cloud data migrations? What about smarter cloud data tiering?
At Komprise, our mission is smarter, faster unstructured data management: managing data across any file and object storage without proprietary interfaces, with no stubs and no agents, using open standards, and delivering native data access everywhere. We want our customers to be able to analyze, mobilize, and manage unstructured data at any scale across their storage and clouds, without lock-in.
So why do enterprise IT organizations, facing massive data volume growth and shrinking budgets, assume their only option is to buy more storage when they reach capacity? Consider this:
- 70% of data in most enterprise organizations is cold data and has not been accessed in months, yet this data sits on expensive storage and consumes the same backup resources as hot data.
- 60% of the storage budget is not really spent on storage. It’s spent on secondary copies of data for data protection – backups, backup software licenses, replication, and disaster recovery. (IDC)
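To see how these two statistics compound, here is a back-of-the-envelope model in Python. It is a minimal sketch: the capacity, price, and protection multiplier are hypothetical assumptions, chosen only so that data protection lands at roughly 60% of total spend, in line with the IDC figure above.

```python
# Hypothetical illustration of where a storage budget goes when cold data
# stays on primary storage. None of these figures come from IDC or Komprise.

capacity_tb = 1000             # total primary capacity in TB (assumed)
cold_fraction = 0.70           # ~70% of data is cold (per the stat above)
primary_cost_tb_month = 30.0   # $/TB/month for primary storage (assumed)
protection_multiplier = 1.5    # backup, replication, and DR overhead relative
                               # to primary spend (assumed; yields a ~60% share)

primary_spend = capacity_tb * primary_cost_tb_month
protection_spend = primary_spend * protection_multiplier
total_spend = primary_spend + protection_spend

print(f"Primary storage: ${primary_spend:,.0f}/month")
print(f"Data protection: ${protection_spend:,.0f}/month "
      f"({protection_spend / total_spend:.0%} of total)")
# Cold data consumes the same backup resources as hot data, so ~70% of the
# whole bill is attributable to data nobody has touched in months.
print(f"Attributable to cold data: ${total_spend * cold_fraction:,.0f}/month")
```

On these assumed numbers, roughly $52,500 of a $75,000 monthly bill is carrying data that has not been accessed in months, and that gap is exactly what tiering targets.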
But isn’t cloud storage the answer?
Consider this:
- 50% of the 175 zettabytes of data worldwide in 2025 will be stored in public cloud environments. (IDC)
- 80% of businesses will overspend their cloud infrastructure budgets due to a lack of cloud cost optimization. (Gartner)
So costs continue to rise, backup times lengthen, disaster recovery remains unreliable, and it is increasingly difficult to get maximum value from flash and cloud storage. Meanwhile, you’re trying to move file data workloads to the cloud faster to save money and keep your IT infrastructure and operations agile and efficient.
5 Ways to Get Enterprise File Data Workloads to the Cloud Smarter and Faster
Here are five ways to get smarter and faster as you embark on your cloud journey for file and object data:
- Establish a Cold Data Management Strategy. Because the bulk of data is cold, finding and tiering it can save millions by offloading data from expensive storage and backups. (Read the Pfizer case study)
- Tier without Tears. Data tiering has been around for years, but it has typically been confined to a single storage vendor’s platform and highly proprietary. Tiering needs to be frictionless, with no disruption to users and applications, and it must work across multi-vendor storage without proprietary lock-in. Even after data is moved, users and applications need to access it exactly as they did before the move. And migrating data to another platform, or using it with third-party applications, should not force unnecessary rehydration, which erodes the savings. (Read the Cloud Data Tiering Choices white paper)
- Don’t Be a Block Head. How tiering is done significantly affects both your actual savings and your options for accessing cold data. Cold data can be tiered at the block level or at the file level, and the differences matter. Storage vendors are now using block-level tiering to move data out of the file server and into an object or cloud tier, but all file access must still go through the original file server. The moved blocks cannot be accessed directly from their new location, such as the cloud, because they are meaningless without all the other data blocks and the file’s context and attributes (its metadata). Storage tiering solutions, also known as pools, use block-based tiering, so be sure you understand the limitations and lock-in. Block-based cloud tiering also incurs significant egress and data transfer costs due to unnecessary rehydration. (Read the Block versus File Tiering white paper)
- Choose Open Standards. File-level tiering fully preserves file access at each tier by keeping the metadata and file attributes with the file no matter where it lives, even on object storage and in the cloud. (Read Why Standards-Based File-Level Tiering Matters)
- Quantify the Data Storage Cost Savings. They say measure what matters, right? So establish the right metrics and share the results (see an example of an IT health check in this Komprise customer video). Some benefits of file-based cloud tiering include:
- Maximizes space savings by eliminating the need to rehydrate data for common operations such as data access, backups, and migration, which significantly shrinks your storage footprint.
- Provides up to 3x greater savings because it not only reduces cold files on the primary tier, it also shrinks the backup and DR footprints, all without rehydration (a back-of-the-envelope sketch of this arithmetic follows this list).
- Should be storage-agnostic, which puts you in control of your data without lock-in to either the storage vendor or the data management solution itself. (Read Quantifying the Business Value of Komprise.)
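To make the “up to 3x” arithmetic concrete, here is a minimal sketch under stated assumptions: a hypothetical 1 PB primary tier, 70% cold, and three footprints of the same data (primary, backup, and DR), as described above.

```python
# Hypothetical sketch of why file-level tiering can yield up to 3x the savings
# of block-level tiering. Capacities and copy counts are illustrative only.

capacity_tb = 1000      # primary capacity in TB (assumed)
cold_fraction = 0.70    # share of data that is cold
footprints = 3          # the same data exists on primary, backup, and DR

tiered_tb = capacity_tb * cold_fraction

# Block-level tiering frees primary capacity, but backups and DR still run
# through the original file server and rehydrate the blocks, so only the
# primary footprint shrinks.
block_level_savings_tb = tiered_tb

# File-level tiering keeps files whole and self-describing, so the backup
# and DR footprints shrink too, with no rehydration.
file_level_savings_tb = tiered_tb * footprints

print(f"Cold data tiered:     {tiered_tb:,.0f} TB")
print(f"Block-level savings:  {block_level_savings_tb:,.0f} TB of footprint")
print(f"File-level savings:   {file_level_savings_tb:,.0f} TB of footprint "
      f"({file_level_savings_tb / block_level_savings_tb:.0f}x)")
```

The 3x here falls directly out of the three footprints; actual savings depend on how many copies your backup and DR processes keep and for how long.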
The Benefits of Open, Transparent File Tiering for Cloud Data Management
Our paper, “Why Standards-Based File Tiering Matters,” includes a table that will help you understand the impact of open, transparent file-level tiering for Intelligent Data Management.
In my next post I’ll review some recommendations for optimizing cloud data costs and accelerating cloud data migrations.
In the meantime, be sure to read our white paper, Cloud Tiering: Storage-Based vs. Gateways vs. File-Based – Which is Better and Why?, and learn more about the easy, fast, no-lock-in path to the cloud for file and object data.