From Data Storage to Hybrid Data Lifecycle Management

This article was adapted from its original version on Blocks & Files.

On-premises and public cloud storage vendors can add data lifecycle management features to their products – but that only confirms the need for a global, supplier-agnostic data lifecycle management capability, said Komprise CEO Kumar Goswami in an interview with storage publication Blocks & Files. Below are highlights from the interview:

Blocks & Files: Komprise analyzes your file and object data regardless of where it lives. It can transparently tier and migrate data based on your policies to maximize cost savings without affecting user access. Can Komprise’s offering provide a way to optimize cloud storage costs between the public clouds?

Kumar Goswami: Komprise enables customers to test different scenarios by simply changing the policies for data movement and visualizing the cost impacts of these choices. By running such what-if analysis, customers can choose between different destinations and then make their decision. Komprise also supports cloud-to-cloud data migrations. Komprise gives customers the analytics to make informed decisions for data lifecycle management and data mobility to implement their choices.
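
To make the what-if idea concrete, here is a minimal, purely hypothetical sketch of the kind of policy-versus-cost comparison described above. The tier names, per-GB prices, data volume, and cold-data fractions are invented assumptions for illustration only – not Komprise’s actual cost model or any vendor’s pricing.

```python
# Hypothetical what-if comparison of data-tiering policies.
# All tier prices, data volumes, and cold-data fractions below are
# illustrative assumptions, not real cloud pricing or Komprise's model.

# Assumed $/GB-month for a few example destination tiers.
TIER_PRICE_PER_GB = {
    "primary_nas": 0.10,
    "cloud_object_standard": 0.023,
    "cloud_object_archive": 0.004,
}

def monthly_cost(total_gb: float, cold_fraction: float, cold_tier: str) -> float:
    """Cost of keeping hot data on primary NAS and tiering cold data elsewhere."""
    hot_gb = total_gb * (1 - cold_fraction)
    cold_gb = total_gb * cold_fraction
    return (hot_gb * TIER_PRICE_PER_GB["primary_nas"]
            + cold_gb * TIER_PRICE_PER_GB[cold_tier])

if __name__ == "__main__":
    total_gb = 500_000  # assume 500 TB of file and object data
    # Two example policies: tier data cold for >12 months vs >6 months,
    # assumed to make 50% vs 70% of the data eligible for tiering.
    for label, cold_fraction in [(">12 months cold", 0.5), (">6 months cold", 0.7)]:
        for tier in ("cloud_object_standard", "cloud_object_archive"):
            cost = monthly_cost(total_gb, cold_fraction, tier)
            print(f"{label:>16} -> {tier:<24} ${cost:,.0f}/month")
```

Running a sketch like this for each candidate destination is the spirit of the what-if analysis: change the policy, see the cost impact, then decide where the data should go.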

Blocks & Files: How would you view AWS’s S3 Intelligent Tiering and the AWS Storage Gateway, which integrates the cloud with on-premises, offline AWS-compatible storage? What does Komprise offer that is better than this?

Goswami: AWS has a rich variety of file and object storage tiers to meet the various demands of data, with significant cost and performance differences across them. Most customers have a lot of rarely-accessed cold data. They can leverage highly cost-efficient tiers like Glacier Instant Retrieval that are 40 percent to 60 percent cheaper.

On AWS, you cannot directly access the tiered data from the lower tier without rehydration, nor can you set different policies for different data, because the service automatically manages data within itself. AWS S3 Intelligent Tiering is useful if you have small amounts of S3 data, no file data, no need for analytics-based data management or policy-based automation, and unpredictable access patterns.
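
For context on what AWS-native, policy-based lifecycle automation looks like outside of Intelligent Tiering, here is a minimal sketch of a single S3 lifecycle rule that transitions objects to Glacier Instant Retrieval after a fixed age. The bucket name, prefix, and 180-day threshold are hypothetical examples, and this AWS-side rule is separate from the analytics-driven tiering Komprise provides.

```python
# Minimal sketch of an AWS-native S3 lifecycle rule (not Komprise's tiering).
# The bucket name, prefix, and 180-day threshold are hypothetical examples.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-cold-data-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-cold-objects-to-glacier-ir",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},  # hypothetical prefix
                "Transitions": [
                    {
                        "Days": 180,
                        "StorageClass": "GLACIER_IR",  # Glacier Instant Retrieval
                    }
                ],
            }
        ]
    },
)
```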

Komprise delivers data lifecycle management across AWS FSx, FSxN, EFS, S3 tiers and Glacier tiers. It preserves file-object duality so you can transparently access the data as a file from the original source and as an object from the destination tiers – without rehydration – using our patented Transparent Move Technology (TMT). Customers with hundreds of terabytes to petabytes of data want the flexibility, native access and analytics-driven automation that Komprise delivers.

[Figure: Komprise Transparent Move Technology (TMT) cloud tiering]
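
As a purely conceptual illustration of the file-object duality described above, the sketch below reads the same (hypothetical) dataset once as a file through the original NAS path and once as a native cloud object. The mount path, bucket, and key are invented examples, not Komprise APIs.

```python
# Conceptual sketch of file-object duality: the same tiered data remains
# readable as a file from the original share and as a native object in the
# cloud destination. The path, bucket, and key below are hypothetical.
import boto3

# 1) File view: users and applications keep reading through the original
#    NAS path as if the data had never moved.
with open("/mnt/projects/reports/q4_summary.csv", "rb") as f:  # hypothetical mount
    file_bytes = f.read()

# 2) Object view: cloud-native services read the same data directly as an
#    object in the destination tier, without rehydrating it to the source.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-tiered-data", Key="projects/reports/q4_summary.csv")
object_bytes = obj["Body"].read()

print(len(file_bytes), len(object_bytes))
```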

Blocks & Files: Azure’s File Sync Tiering stores only frequently accessed (hot) files on your local server. Infrequently accessed (cool) files are split into namespace (file and folder structure) and file content. The namespace is stored locally and the file content is stored in an Azure file share in the cloud. How does this compare to Komprise’s technology?

Goswami: Hybrid cloud storage gateway solutions like Azure File Sync Tiering are useful if you want to replace your existing NAS with a hybrid cloud storage appliance. Komprise is complementary to these solutions and transparently tiers data from any NAS – including NetApp, Dell, and Windows Servers – to Azure. This means you can still see and use the tiered data as if they were local files, and you can access the data as native objects in the cloud without having to move to a new storage system.

Blocks & Files: NetApp’s BlueXP provides a Cloud Tiering Service that can automatically detect infrequently used data and move it seamlessly to AWS S3, Azure Blob or Google Cloud Storage – it is a multi-cloud capability. When data is needed for performant use, it is automatically shifted back to the performance tier on-prem. How does Komprise position its offering compared to BlueXP?

Goswami: NetApp BlueXP provides an integrated console to manage NetApp arrays across on-premises, hybrid cloud and cloud environments. So it’s a good integration of NetApp consoles to manage NetApp environments, but it does not tier other NAS data.

Also, NetApp’s tiering is proprietary to ONTAP and is block-based, not file-based. Block-based tiering is good for storage-intrinsic elements like snapshots because they are internal to the system. However, for file data it causes expensive egress, limits data access, and creates rehydration costs plus lock-in. To see the differences between NetApp block-based tiering and Komprise file tiering, please read our paper on tiering choices.
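
To make the rehydration and egress concern concrete, here is a small back-of-the-envelope sketch. The per-GB rates and recall volume are assumed figures for illustration only, not NetApp’s, AWS’s, or any cloud’s actual pricing, and the function below is not anyone’s product API.

```python
# Hypothetical back-of-the-envelope comparison of recalling tiered data.
# All per-GB rates and the recall volume are assumed for illustration only.

EGRESS_PER_GB = 0.09             # assumed cloud egress rate, $/GB
API_AND_RETRIEVAL_PER_GB = 0.01  # assumed retrieval overhead, $/GB

def recall_cost(recalled_gb: float, rehydrate_to_primary: bool) -> float:
    """Illustrative cost of getting tiered data back for use.

    With block-based tiering, blocks must be rehydrated through the primary
    system before use, so the recalled volume incurs egress plus retrieval.
    With file-based tiering that preserves native object access, an
    application in the same cloud region could read the object directly,
    avoiding the egress portion (assumed zero here for same-region access).
    """
    retrieval = recalled_gb * API_AND_RETRIEVAL_PER_GB
    egress = recalled_gb * EGRESS_PER_GB if rehydrate_to_primary else 0.0
    return retrieval + egress

if __name__ == "__main__":
    recalled_gb = 20_000  # assume 20 TB of cold data needed for a project
    print(f"Rehydrate to on-prem primary: ${recall_cost(recalled_gb, True):,.0f}")
    print(f"Read natively in-cloud:       ${recall_cost(recalled_gb, False):,.0f}")
```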

Storage vendors stand to gain by keeping customers in their most expensive tiers. They are recognizing the demand and starting to offer some data management. But the storage vendor business model still depends on driving revenue from the storage operating system; they offer features for their own storage stack and tie the customer’s data into their proprietary operating system.

Blocks & Files: There are many suppliers offering products and services in the file and object data management space. Do you think there will be a consolidation phase as the cloud file services suppliers (CTERA, Lucid Link, Nasuni, Panzura), data migrators (Datadobi, Data Dynamics), metadata-based file services suppliers (Hammerspace), ILM suppliers such as yourselves, and filesystem and services suppliers (Dell, NetApp, Qumulo, WekaIO) reach a stage where there is overlapping functionality?

Goswami: As data growth continues, customers will always need places to store the data and ways to manage the data. Both of these needs are distinct and getting more urgent as the scale of data gets larger and more complex with the edge, AI and data analytics. The market across these is large: data management is already estimated at about an $18B market.

Typically for such big markets, you will see multiple solutions targeting different parts of the puzzle. Our focus is storage-agnostic, analytics-driven data management. This helps customers cut costs and realize value from their file and object data no matter where it lives. We see our focus as broader than ILM. It is unstructured data management, which will broaden even further to data services.

Blocks & Files: If you think that such consolidation is possible, how will Komprise’s strategy develop to ensure its future?

Goswami: Komprise is focused on being the fastest, easiest and most flexible unstructured data management solution to serve our customers’ needs. To do this effectively, we continue to innovate our solution and we partner with the storage and cloud ecosystem. We have built, and will continue to build, the necessary relationships to offer our customers the best value proposition.

My cofounders and I have built two prior businesses, and have learned that focusing on the customer value proposition and continually improving what you can deliver is the best way to build a business. We are barely scratching the surface of unstructured data management and its potential. Think about the edge. Think about AI and ML. Think about all the different possibilities that no single storage vendor will be able to deliver. We are focused on creating an unstructured data management solution that solves our customers’ pain points around cost and value today while bridging them seamlessly into the future.

Getting Started with Komprise:

Contact | Komprise Blog