Why Cloud Native Data Access Matters


Unlock the Potential of Unstructured Data

Cloud transformation is top of mind for our customers, and as a result they are focused on hybrid cloud storage. When it comes to migrating data to the cloud, customers face a choice: they can work with their existing data storage vendors and adopt those vendors' cloud file storage options, or they can choose a different path.

As Komprise co-founder and COO Krishna Subramanian points out in our most recent Smart Data Migration webinar, this choice is pivotal when enterprise IT organizations take a data-centric instead of a storage-centric approach to cloud migrations.

Before You Migrate Data to the Cloud, Remember:
  • The cloud is not just a cheap storage locker;
  • The cloud is an active platform with tremendous compute power;
  • Users want to run new kinds of analytics on unstructured data.

The bottom line: Don’t limit the potential of your data by locking it into a proprietary format. Cloud native data access is essential to unleash the potential of the cloud. Cloud native is a way to move data to the cloud without lock-in, meaning your data is no longer tied to the file system from which it was originally served.

By 2025, Gartner estimates that more than 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021.

In the webinar, Krishna reviews the importance of cloud native data access and maximizing the potential of your data. She summarizes it this way:

Maximize Data Efficiency

Leverage the full potential of the cloud by using all tiers of cloud storage for your data. For example, there are significant cost differences between Amazon FSx and Amazon S3 Glacier Instant Retrieval. When you move your data to the cloud and are considering cloud tiering, ensure you can take advantage of all the efficiencies the cloud has to offer by moving data to lower-cost storage as it ages.
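To make the age-based tiering idea concrete, here is a minimal sketch of an S3 lifecycle rule that transitions objects to cheaper storage classes as they age. The bucket prefix and rule name are hypothetical examples, not something from the webinar; the storage class names (STANDARD_IA, GLACIER_IR) are AWS's own.

```python
# Sketch of an S3 lifecycle configuration that tiers data as it ages.
# The prefix "projects/" and rule ID are hypothetical illustrations.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-cold-data",
            "Filter": {"Prefix": "projects/"},
            "Status": "Enabled",
            "Transitions": [
                # After 90 days, move to the infrequent-access tier
                {"Days": 90, "StorageClass": "STANDARD_IA"},
                # After a year, move to Glacier Instant Retrieval
                {"Days": 365, "StorageClass": "GLACIER_IR"},
            ],
        }
    ]
}

# Applying it with boto3 would look like this (needs AWS credentials,
# so it is left commented out here):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_config
# )
```

Because the data lands in the cloud in native object form, a plain lifecycle policy like this can manage its cost over time without any involvement from the original file storage layer.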

Maximize Data Access

When you move data in cloud native format, users should be able to access the data not only as a file, but also as a native object—which is necessary for leveraging cloud-native analytics and other services. Access to your data should not have to go through your file storage layer, as this incurs licensing fees and requires you to provision adequate capacity.
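One way to picture dual file and object access: in cloud native format, a file's directory path maps directly onto an object key, so the same bytes can be reached through a file share or addressed natively by object APIs. A minimal sketch of that mapping, where the share mount point and bucket name are hypothetical examples:

```python
# Sketch: map a POSIX file path on a share to its cloud-native object address.
# The share root "/mnt/share" and bucket "example-bucket" are hypothetical.
from pathlib import PurePosixPath

def file_path_to_object_key(path: str, share_root: str = "/mnt/share") -> str:
    """Strip the file-share mount point; the remaining path is the object key."""
    return str(PurePosixPath(path).relative_to(share_root))

def object_uri(bucket: str, key: str) -> str:
    """Build the native object URI for the same data."""
    return f"s3://{bucket}/{key}"

key = file_path_to_object_key("/mnt/share/projects/2023/report.pdf")
print(object_uri("example-bucket", key))
# -> s3://example-bucket/projects/2023/report.pdf
```

Because the object key is just the file path, analytics services that operate on objects can address the data directly, with no round trip through the file storage layer.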

Maximize Data Services

Make it easy for your users to search and find the data they need and send it to data lakes and analytics services, most of which operate at the object layer. Cloud-native access ensures that your data can leverage these services.

Why Storage-Centric Tiering is an Issue

So why not take a storage-centric approach to cloud tiering? Storage-based tiering goes all the way back to the days of hierarchical storage management, when a storage environment could contain different tiers or platters: SSDs, SATA drives, and so on. The storage operating system could move blocks of data between tiers transparently to users.

Now many of these storage vendors are saying they can move blocks to the cloud and treat cloud object storage as a platter inside their storage OS. While this may be good for some use cases, such as snapshots, the data is not cloud native. You’re tiering blocks of data to a cloud location where they remain in a proprietary format, meaning the data has limited use in the cloud other than as an archive.


Cloud file storage is great for storing files, but it’s not your data management platform. Use the full power of the cloud with an independent data management platform. The rest of the webinar reviews the Komprise architecture and the Global File Index, and dives into a demonstration that highlights the potential of cloud native data. I’ve embedded it below for easy viewing.

Read the white paper: Block versus File Tiering


Getting Started with Komprise:

Contact | Data Assessment