• A
    • Adaptive Data Management

      What is adaptive unstructured data management?

      As data footprints continue to grow, businesses are struggling to manage petabytes of data, often consisting of billions of files. Managing at this scale requires intelligent automation that learns and adapts to your environment.

      Data management needs to happen continuously in the background without interfering with active use of storage or the network by users and applications. Unstructured data management is an ongoing function, much like housekeeping for data: just as you would not want your housekeeper clearing dishes while your family is eating at the dinner table, data management needs to run non-intrusively in the background.

      To do this, an adaptive data management solution is needed – one that knows when your file system and network are in active use and throttles itself back, and then speeds back up when resources are available. An adaptive data management system learns from your usage patterns and adapts to the environment.

      In The 10 Principles of Komprise Intelligent Data Management, adaptive data management is summarized this way:

      Komprise throttles back as needed when your data storage or network are in active use, so you never have to monitor or schedule when Komprise runs.


    • AI Compute

      The computing ability required for machines to learn from big data and experience, adjust to new inputs, and perform human-like tasks. Komprise cuts the data preparation time for AI projects by creating virtual data lakes with its Deep Analytics feature.

      AI compute refers to the computational resources required for artificial intelligence systems to perform tasks, such as processing data, training machine learning models, and making predictions. These resources can be provided by various hardware and software platforms, including GPUs, TPUs, cloud computing, and edge computing devices. The amount of AI compute needed depends on the complexity of the AI system and the amount of data being processed.

      Unstructured data management solutions and unstructured data workflows are increasingly being used not only to deliver greater data storage cost savings but also to deliver the right data to the right destination at the right time.

      What is unstructured data in AI?

      AI needs unstructured data – are you ready?


    • Amazon (AWS) S3 Intelligent Tiering

      S3 Intelligent Tiering is an Amazon storage class aimed at data with unknown or unpredictable data access patterns. See our S3 Intelligent Tiering glossary entry for further information.

      Learn more about AWS cloud tiering, cloud data migration and the Komprise AWS partnership.


    • Amazon FSx

      What is Amazon FSx?

      Amazon FSx is a fully managed AWS service that provides high-performance file systems in the cloud.

      Customers can choose between four file systems:

      • NetApp ONTAP
      • OpenZFS
      • Windows File Server
      • Lustre

      Komprise supports Amazon FSx for NetApp ONTAP with a focus on Smart Data Migration. As an Advanced AWS partner, the Komprise cloud data migration solution is able to “right place” data to reduce costs and increase data value.

      Read the AWS partner press release and blog post Komprise and AWS FSx for NetApp ONTAP.


      For more information on Komprise File Data Migration to the Cloud be sure to check out our Path to the Cloud section of the website and download the Smart Data Migration for AWS white paper.


    • Amazon Glacier (AWS Glacier)


      What is Amazon S3 Glacier (AWS Glacier)?

      Amazon S3 Glacier, also known as AWS Glacier, is a class of cloud storage available through Amazon Web Services (AWS).  Amazon S3 Glacier is a lower-cost storage tier designed for use with data archiving and long-term backup services on the public cloud infrastructure.

      Amazon S3 Glacier was created to house data that doesn’t need to be accessed frequently or quickly. This makes it ideal for use as a cold storage service, hence the inspiration for its name.

      Amazon S3 Glacier retrieval times range from a few minutes to a few hours with three different speed options available: Expedited (1-5 minutes), Standard (3-5 hours), and Bulk (5-12 hours).

      Amazon S3 Glacier Deep Archive offers 12-48-hour retrieval times. The faster retrieval options are significantly more expensive, so having your data organized into the correct tier within AWS cloud storage is an important aspect of keeping storage costs down.
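
      For illustration, the following is a minimal sketch (using the boto3 SDK, with a hypothetical bucket name and object key) of requesting a restore from a Glacier storage class; the Tier parameter selects among the Expedited, Standard, and Bulk retrieval options described above.

      import boto3

      s3 = boto3.client("s3")

      # Request a temporary copy of an archived object for 7 days using the
      # low-cost Bulk tier (5-12 hours); "Expedited" or "Standard" trade
      # higher retrieval fees for faster access.
      s3.restore_object(
          Bucket="example-archive-bucket",   # hypothetical bucket name
          Key="projects/2021/results.csv",   # hypothetical object key
          RestoreRequest={
              "Days": 7,
              "GlacierJobParameters": {"Tier": "Bulk"},
          },
      )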

      Other Glacier features:
      • The ability to store an unlimited number of objects and data
      • Data stored in S3 Glacier is dispersed across multiple geographically separated Availability Zones within the AWS region
      • An average annual durability of 99.999999999%
      • Checksum uploads to validate data authenticity
      • REST-based web service
      • Vault, Archive, and Job data models
      • Limit of 1,000 vaults per AWS account

      Main Applications for Amazon S3 Glacier Storage

      There are several scenarios where Glacier is an ideal solution for companies needing a large volume of cloud storage.

      1. Huge data sets. Many companies that perform trend or scientific analysis need a huge amount of storage to be able to house their training, input, and output data for future use.
      2. Replacing legacy storage infrastructure. With the many advantages that cloud-based storage environments have over traditional storage infrastructure, many corporations are opting to use AWS storage to get more out of their data storage systems. AWS Glacier is often used as a replacement for long term tape archives.
      3. Healthcare facilities’ patient data. Patient data needs to be kept for regulatory or compliance requirements. Glacier and Glacier Deep Archive are ideal archiving platforms to keep data that will hardly need to be accessed.
      4. Cold data with long retention times. Finance, research, genomics, electronic design automation, and media and entertainment are some examples of industries where cold data and inactive projects may need to be retained for long periods of time even though they are not actively used. AWS Glacier storage classes are a good fit for these types of data. The project data will need to be recalled before it is actively used again to minimize retrieval delays and costs.

      Amazon S3 Glacier vs S3 Standard

      Amazon’s S3 Standard storage and S3 Glacier are different classes of storage designed to handle workloads on the AWS cloud storage platform.

      • S3 Glacier is best for cold data that’s rarely or never accessed
      • Amazon S3 Standard storage is intended for hot and warm data that needs to be accessed daily and quickly

      The speed and accessibility of S3 Standard storage comes at a much higher cost compared to S3 Glacier and the even more economical S3 Glacier Deep Archive storage tiers. Having the right data management solution is critical to help you identify and organize your hot and cold data into the correct storage tiers, saving a substantial amount on storage costs.

      Benefits of a Data Management System to Optimize Amazon S3 Glacier

      A comprehensive suite of unstructured data management and unstructured data migration capabilities allows organizations to reduce their data storage footprint and substantially cut their storage costs. These are a few of the benefits of integrating an analytics-driven data management solution like Komprise Intelligent Data Management with your AWS storage:

      Get full visibility of your AWS and other storage data

      See across AWS and other cloud platforms to understand how much NAS data is accruing and whether it’s hot or cold, so you can make better data storage investment and data mobility decisions.

      Intelligent tiering and life cycle management for AWS storage

      Optimize and improve how you manage files and objects across EFS, FSx, S3 Standard and S3 Glacier storage classes based on access patterns.

      Intelligent AWS data retrievals

      Don’t get hit with unexpected data retrieval fees on S3 Glacier – Komprise enables intelligent recalls based on access patterns so if an object on Glacier becomes active again, Komprise will move it up to an S3 storage class.

      Bulk retrievals for improved AWS user performance

      Improve performance across entire projects from S3 Glacier storage classes – if an archived project is going to become active, you can prefetch and retrieve the entire project from S3 Glacier using Komprise so users don’t have to face long latencies to get access to the data they need.

      Minimize AWS storage costs

      With analytics-driven cloud data management that monitors retrieval costs, egress costs and other costs to minimize them by promoting data up and recalling it intelligently to more active storage classes.

      Access AWS data natively

      Access data that has been moved across AWS as objects from Amazon S3 storage classes or as files from File and NAS storage classes without the need for additional stubs or agents.

      Reduce AWS cloud storage complexity

      Reduce the complexity of your cloud storage and NAS environment and manage your data more easily through an intuitive dashboard.

      Optimize the AWS storage savings

      Komprise Intelligent Data Management allows you to better manage all the complex data storage, retrieval, egress and other costs. Know first. Move smart. Take control.

      Easy, on-demand scalability

      Komprise provides you with the capacity to add and manage petabytes without limits or the need for dedicated infrastructure.

      Integrate data lifecycle management

      Integrate easily with an AWS Advanced Tier partner such as Komprise for lifecycle management or other use cases.

      Move data transparently to any tier within AWS

      Your users won’t experience any difference in terms of data access. You’ll notice a huge difference in cost savings and unstructured data value with Komprise.

      Create automated data management policies and data workflows

      Continuously manage the lifecycle of the moved data for maximum savings. Build Smart Data Workflows to deliver the right data to the right teams, applications, cloud services, AI/ML engines, etc. at the right time.

      Streamline Amazon S3 Glacier Operations with Komprise Intelligent Data Management

      Komprise’s Intelligent Data Management allows you to seamlessly analyze and manage data across all of your AWS cloud storage classes so you can move data across file, S3 Standard and S3 Glacier storage classes at the right time for the best price/performance. Because it’s vendor agnostic, its standards-driven analytics and data management work with  the largest storage providers in the industry and have helped companies save up to 50% on their cloud storage costs.

      If you’re looking to get more out of your AWS storage, contact a data management expert at Komprise today and see how much you could save on data storage costs. Read the white paper: Smart Data Migration for AWS.


    • Amazon S3 (AWS S3)

      Amazon Simple Storage Service, known as Amazon S3 or AWS S3, is an object storage service that offers industry-leading scalability, data availability, security, and performance.

      See S3 in our glossary for further information.

      Learn more about Komprise Intelligent Data Management for AWS data storage.


    • Analytics-driven Data Management

      Analytics-driven data management is a core principle of the standard-based platform of Komprise Intelligent Data Management that’s based on data insight and automation to strategically and efficiently manage and move unstructured data at massive scale. With Komprise, you can know first, move smart, and take control of massive unstructured data growth while cutting 70% of your enterprise data storage costs, including backup and cloud costs.


      Know First: Get insight into your data before you invest. See across your data storage silos, vendors, and clouds to make informed storage and backup decisions.

      • Analyze any NAS, S3
      • Plan and project storage cost savings
      • Search, tag, build virtual data lakes with a global file index

      Move Smart: Ensure the right data is in the right place at the right time. Establish analytics-driven policies to manage data based on its need, usage, and value.

      Take Control: Get back to the business at hand while reducing your storage, backup, and cloud costs and get the fastest, easiest path to the cloud for your file and object data.

      • Ensure you have data mobility and avoid storage-vendor lock-in
      • Open, standards-based platform
      • Native cloud access

      Read the Komprise Architecture Overview white paper.


    • Archival Storage

      What is Archival Storage?

      Archival Storage is a source for data that is not needed for an organization’s everyday operations, but may have to be accessed occasionally.

      By utilizing archival storage, organizations can leverage lower-cost secondary storage while still maintaining protection of the data.

      Utilizing archival storage reduces the amount of primary storage required and its costs, and allows an organization to retain data that must be kept for regulatory or other requirements.

      Data archiving, also known as data tiering, is intended to protect older information that is not needed for everyday operations, but may have to be accessed occasionally. Data Archival and Tiering storage is a tool for reducing your primary storage need and the related costs, rather than acting as a data recovery tool.

      Why Archival Storage?

      • Some data archives allow data to be read-only to protect it from modification, while other data archiving products allow users to modify the data.
      • The benefit of data archiving is that it reduces the cost of primary storage. Archive storage itself also costs less because it is typically based on a low-performance, high-capacity storage medium.
      • Data archiving takes a number of different forms. Options can be online data storage, which places archive data onto disk systems where it is readily accessible. Archives are frequently file-based, but object storage is also growing in popularity. A key challenge when using object storage to archive file-based data is the impact it can have on users and applications. To avoid changing paradigms from file to object and breaking user and application access, use data management solutions that provide a file interface to data that is archived as objects.
      • Another archival system uses offline data storage where archive data is written to tape or other removable media using data archiving software rather than being kept online. Data archiving on tape consumes less power than disk systems, translating to lower costs.
      • A third option is using cloud data storage, such as those offered by Amazon and Microsoft Azure – this can be less expensive if done right, but requires ongoing investment. A Smart Data Migration strategy is essential.
      • The data archiving process typically uses automated software, which will automatically move “cold” data via policies set by an administrator. Today, a popular approach to data archiving is to make the archive “transparent” – so the archived data is not only online but the archived data is fully accessed exactly as before by users and applications, so they experience no change in behavior. The patented Komprise Transparent Move Technology is designed to allow you to transparently archive and tier data.


    • AWS Snowball

      What is AWS Snowball Edge?

      AWS Snowball Edge is a hardware appliance used to migrate petabyte-scale data into and out of Amazon S3, mitigating issues with large-scale data transfers including high network costs, limited connectivity such as in remote locations, long transfer times, and security concerns. Beyond data transfer and cloud data migration use cases, the Snowball Edge device features on-board storage and compute power to enable local processing and analytics at the edge. Once transferred into AWS S3, an organization can move the data into other storage classes as needed.

      Snowball appliances are shipped to the customer and deployed on the customer’s network. Data is copied to the Snowball appliance and then return shipped to AWS where the data is copied to the appropriate AWS storage tier and made available for access.

      According to Hackernoon, Snowball Edge has been used in oil rigs, with the U.S. Department of Defense, and in an emergency situation for the U.S. Geological Survey needing to quickly export data from its data center during a volcanic eruption.

      Considerations for AWS Snowball Edge

      Enterprises have two options for AWS Snowball:

      • AWS Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large-scale data transfer. It’s possible to combine up to 12 devices together and create a single S3-compatible bucket that can store nearly 1 petabyte of data.
      • Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases including machine learning and full motion video analysis.
      • Snowball supports specific Amazon EC2 instance types and AWS Lambda functions, so you can develop and test in the AWS Cloud, then deploy applications on devices in remote locations to collect, pre-process, and ship the data to AWS.
      • Snowball can transport multiple terabytes of data and multiple devices can be used in parallel or clustered together to transfer petabytes of data into or out of AWS.

      Cloud Tiering to AWS

      By using Komprise for cloud tiering to AWS, you can save not only on your on-premises storage but also on your cloud costs. Users get transparent access to the files moved by Komprise from the original location, and with Komprise moving data in native format, you can give users direct, cloud-native access to data in AWS while eliminating egress fees and rehydration hassles.

      Learn more about the benefits of moving data in cloud native format.

      Smart Data Migration for AWS

      A smart data migration strategy for enterprise file data means an analytics-first approach ensuring you know which data can migrate, to which class and tier, and which data should stay on-premises in your hybrid cloud storage infrastructure. This paper introduces the benefits of a smart data migration strategy for file workloads to AWS cloud storage services. Komprise and AWS enable your organization to:

      • Understand your NAS & object data usage and growth.
      • Estimate the ROI of AWS storage in your environment.
      • Migrate smarter to Amazon FSx for NetApp ONTAP.
      • Access moved data as files without stubs or agents.
      • Reduce complexity and scale on-demand.
      • Deliver native data access in the cloud without lock-in.
      Read the white paper: Smart Unstructured Data Migration for AWS
      Learn more about your Cloud Tiering choices.
      Learn more about Komprise for AWS.


    • AWS Storage

      What is AWS Cloud Storage?

      The AWS cloud service has a full range of options for individuals and enterprises to store, access and analyze data. AWS offers options across all three types of cloud data storage: object storage, file storage and block storage.

      Here are the AWS Storage choices:

      • Amazon Simple Storage Service (S3): S3 is a popular AWS service that provides scalable and highly durable object storage in the cloud.
      • AWS Glacier: Glacier provides low-cost highly durable archive storage in the cloud. It’s best for cold data as access times can be slow.
      • Amazon Elastic File System (Amazon EFS): EFS provides scalable network file storage for Amazon EC2 instances.
      • Amazon Elastic Block Store (Amazon EBS): This service provides low-latency block storage volumes for Amazon EC2 instances.
      • Amazon EC2 Instance Storage. An instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches and scratch data, and consists of one or more instance store volumes exposed as block devices.
      • AWS Storage Gateway. This is a hybrid storage option that integrates on-premises storage with cloud storage. It can be hosted on a physical or virtual server.
      • AWS Snowball. This data migration service transports large amounts of data to and from the cloud and includes an appliance that’s installed in the on-premises data center.


      Each of these Amazon storage classes has several tiers at different price points – so it is important to put the right data in the right storage class at the right time to optimize price and performance.

      Komprise Intelligent Data Management for AWS Storage

      Komprise helps organizations get more value from their AWS storage investments while protecting data assets for future use through analysis and intelligent data migration and cloud data tiering.


      Learn more at Komprise for AWS.


    • Azure Data Box

      What is Azure Data Box?

      Microsoft Azure Data Box is a hardware appliance designed to allow customers to import or export large amounts of data—more than 40TB— into and out of Azure offline. It is especially helpful when there is zero or limited network connectivity. Microsoft ships customers a proprietary Data Box storage device with a rugged casing to protect and secure data during the transit. A customer may choose Data Box for a one-time or the occasional cloud migration or an initial bulk data transfer followed by periodic transfers.

      Microsoft also promotes the Data Box as a solution for exporting data from Azure back on-premises for disaster recovery or other needs or to move to another cloud service provider.


      There are three different types of physical Data Box solutions based on data size:

      • Data Box: This device has 100TB capacity and uses standard NAS protocols and common copy tools. It features AES 256-bit encryption for safer transit.
      • Data Box Heavy: This larger device is designed to lift 1PB of data to the cloud.
      • Data Box Discs: Each disc is an 8TB SSD with a USB/SATA interface and 128-bit encryption. Customers can buy packs of up to five for a total of 40TB.

      Considerations for Cloud Migrations Using Azure Data Box 

      Azure Data Box is a good solution to consider if online data transfer is not possible either because the network bandwidth is limited or because it can take too long. But offline transfers can be very tedious and error prone if done manually. Choosing what data to migrate, moving the data into Azure Data Box, and then ensuring the data lands in the cloud can be time consuming to manage. Managing access control and security of file data, and ensuring transfer of all metadata and permissions of files can be very tedious. Often, enterprises want to move some file data to the cloud and keep the rest on-premises. In such situations, using Azure Data Box manually without any automation becomes even more tricky because it can disrupt users and applications.

      Azure Data Box Gateway for Inline Data Transfers

      Azure also offers a virtual appliance called Azure Data Box Gateway that resides on-premises and enables customers to write data to it using NFS and SMB protocols. The device then transfers the data to Azure block blobs, page blobs, or Azure Files. But Azure Data Box Gateway has several limitations and can be used only for very small amounts of data in limited circumstances. See the full set of limitations here.

      Komprise allows you to migrate large amounts of data reliably and effortlessly to Azure using its patented Elastic Data Migration, which is 27 times faster than alternatives. You can also use Komprise to transparently tier data to Azure. Tiering cold data is a great way to offload 80% of your data to the cloud without any disruption to users and applications. 

      By using Komprise for cloud tiering to Azure, you can save not only on your on-premises storage but also on your cloud costs since you do not have to tier to Azure Files, you can tier directly to Azure Blob. Users get transparent access to the files moved by Komprise from the original location, and with Komprise moving data in native format, you can give users direct, cloud-native access to data in Azure while eliminating egress fees and rehydration hassles. 

      Learn more about your Cloud Tiering choices 

      Learn more about Komprise for Microsoft Azure.


    • Azure Storage

      What is Azure Storage?

      Microsoft Azure hosts a complete array of cloud data storage options to meet the diverse data needs of enterprises today, including backup, tiering, data lakes, structured and unstructured data management. Azure Storage Services include:

      • Azure Blobs: This is a scalable object store best suited for storing and accessing unstructured data and to support analytics and data lake projects.
      • Azure Files: File shares for cloud or on-premises deployments that you can access through the Server Message Block (SMB) protocol.
      • Azure Queues: Allows for asynchronous messaging between application components.
      • Azure Tables: A NoSQL solution for schema-less storage of structured data.
      • Azure Disks: Allows data to be persistently stored in blocks and accessed from an attached virtual hard disk.
      • Azure Data Lake Storage: A storage platform for ingestion, processing, and visualization that supports common analytics frameworks and provides automatic geo-replication.

      Greater Azure Storage Savings and Value with Komprise

      Komprise helps organizations get the most value from their Azure storage investments while protecting data assets for future use through analysis and intelligent data migration and cloud data tiering.


      Learn more at Komprise for Azure file and object data management and migration.


    • Azure Tiering

      What is Azure Tiering?

      Azure Storage offers several classes of cloud data storage for customers. However, to maximize savings and ROI from the cloud, IT directors need to consider tiering strategies. Cloud tiering moves less frequently used data, also known as cold data, from expensive on-premises file storage or Network Attached Storage (NAS) or cloud file storage such as Azure Files to cheaper levels of storage in the cloud, typically object storage classes aka Azure Blob storage. 

      Cloud tiering enables data to move across different storage tiers – and different cloud tiering solutions support different storage options. We will cover both the storage tiers in the Azure cloud and the options available to do cloud tiering for Azure.

      Azure Files and Azure Blob have different tiers of storage at different price points:

      Azure Files is Microsoft’s file storage solution for the cloud. As with all file storage solutions, it is more expensive than object storage solutions such as Azure Blob, especially when you add the required replication and data protection costs for files. Azure File Storage Hot tier is more than 1.9 times more expensive than Azure Blob Cool. 

      Azure Files supports two storage tiers: Standard and Premium.

      • Standard file shares are created in general purpose (GPv1 or GPv2) storage accounts; 
      • Premium file shares are created in FileStorage storage accounts.

      What is Azure Blob?

      Azure Blob is Microsoft’s object storage solution for the cloud.

      Azure Blob storage is optimized for storing massive amounts of unstructured data. It’s enabled for the following access tiers:

      • Hot: storing data that is accessed frequently.
      • Cool: storing data that is infrequently accessed and stored for at least 30 days.
      • Archive: storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements.

      According to Microsoft:

      “You can upload data to your required access tier and change the blob access tier among the hot, cool, or archive tiers as usage patterns change, without having to move data between accounts. All tier change requests happen immediately and tier changes between hot and cool are instantaneous.”
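
      As a minimal sketch of what the quote above describes, the Azure SDK for Python (azure-storage-blob) can change a blob’s access tier in place; the connection string, container, and blob names below are placeholders.

      from azure.storage.blob import BlobServiceClient

      # Placeholder connection string; in practice this comes from your storage account.
      service = BlobServiceClient.from_connection_string("<connection-string>")
      blob = service.get_blob_client(container="example-container", blob="reports/q1.pdf")

      # Move an infrequently accessed blob from Hot to Cool without copying it
      # to another account; "Archive" is also accepted for rarely accessed data.
      blob.set_standard_blob_tier("Cool")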

      What is Azure File Sync?

      Azure Files has a service called Azure File Sync which enables an on-premises Windows Server to do cloud tiering to file storage in the cloud, not object storage. 

      Azure File Sync acts as a gateway that caches data locally and puts cold file objects in Azure Files cloud storage. When enabled, Azure File Sync stores hot files on the local Windows server while cool or cold files are split into namespace (file and folder structure) and file content. The namespace is stored locally, and the file content is stored in an Azure file share in the cloud. Azure will automatically tier cold data based on volume or age thresholds. See the Microsoft Cloud Tiering overview.

      Considerations for Microsoft Azure Cloud Tiering

      Cloud tiering can save organizations up to 70% on on-premises storage costs when done correctly. But there are several limitations of Azure Cloud Tiering that you need to consider:

      Azure File Sync only tiers to Azure Files and leads to higher cloud costs.

      Azure Files is a file service in Azure and it is almost double the cost of the Azure Blob Cool tier. Since file storage is not resilient, data on Azure Files most commonly needs replication, snapshots and backups – leading to higher data management costs. An ideal cloud tiering solution should tier files from your NAS to an object storage environment to maximize savings. Otherwise, you are paying for higher costs in the cloud.  

      Azure File Sync only tiers blocks of data to the cloud and leads to 75% higher cloud egress costs.

      This means you cannot directly access your files in Azure; you have to go through the on-premises Windows Server to get your data. This leads to 75% higher cloud egress costs and limits the use of your data in the cloud. To learn more about the differences between block tiering and file tiering, read our block-level tiering vs. file-level tiering white paper. For an analysis of the cloud egress costs of solutions like Azure File Sync cloud tiering, read the Cloud Tiering white paper.

      Azure File Sync is only available on Windows Server environments.

      Most organizations today have multiple file server and NAS environments. Using a different tiering strategy for each environment is tedious, error prone, and difficult to manage. Consider an unstructured data management solution that works across your multiple storage vendor environments and transparently tiers and archives data.

      Komprise enables enterprise IT organizations to quickly analyze data and make smart decisions on where data should live based on age, usage and other requirements. Komprise works across your multi-vendor NAS and object environments and clouds via standard protocols such as NFS, SMB and object. By using Komprise for cloud tiering to Azure, you can save not only on your on-premises storage but also on your cloud costs since you do not have to tier to Azure Files, you can tier directly to Azure Blob. Users get transparent access to the files moved by Komprise from the original location, and with Komprise moving data in native format, you can give users direct, cloud-native access to data in Azure while eliminating egress costs and data rehydration hassles. 

      Learn more about your Cloud Tiering choices 

      Learn more about Komprise for Microsoft Azure

      Komprise Smart Data Migration for Azure. Smarter. Faster. Proven.


  • B
    • Block-level Tiering

      Block-level tiering moves blocks between the various tiers to increase performance: hot blocks and metadata are kept in the higher, faster, and more expensive data storage tiers, while cold data blocks are migrated to lower, less expensive ones. Lacking full context, these moved blocks cannot be directly accessed from their new location. Komprise uses the more advanced file-level tiering. Read the white paper “Block-Level Tiering vs. File-Level Tiering.”


    • Bucket Sprawl

      Bucket sprawl refers to the problem of having a large number of data storage buckets, also known as an object storage bucket, often in cloud data storage environments, that are created and left unused or forgotten over time. This can happen when individuals or teams create buckets for specific projects or tasks, but fail to properly manage and delete them once they are no longer needed.

      What is a Cloud Bucket?

      A cloud bucket is a container for storing data objects in cloud storage services such as Amazon S3, Google Cloud Storage, or Microsoft Azure Storage. Cloud buckets can hold a variety of data types including images, videos, documents, and other files.

      Cloud buckets are typically accessed and managed through an API or web-based interface provided by the cloud storage provider. They offer a scalable and cost-effective way to store and retrieve large amounts of data, and can be used for a variety of applications including backup and disaster recovery, content delivery, and web hosting.

      Cloud buckets provide a number of benefits over traditional on-premises data storage solutions, including ease of use, cost-effectiveness, scalability, and availability. However, it is important to properly manage and secure cloud buckets to ensure that sensitive data is protected and costs are kept under control.

      The Problem with Cloud Bucket Sprawl

      Cloud bucket sprawl can lead to a number of issues, including increased data storage costs, decreased efficiency in accessing necessary data, and potential security risks if sensitive information is stored in forgotten or unsecured buckets. To avoid bucket sprawl, it is important to have a system in place for regularly reviewing and managing storage buckets, including identifying and deleting those that are no longer necessary.
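
      As one illustration of such a review, the sketch below (assuming the boto3 SDK and suitable credentials; all names are placeholders) lists each S3 bucket and reports when its contents were last written, so empty or stale buckets can be flagged for cleanup.

      import boto3

      s3 = boto3.client("s3")

      for bucket in s3.list_buckets()["Buckets"]:
          name = bucket["Name"]
          # First page of results only; a full audit would paginate.
          contents = s3.list_objects_v2(Bucket=name).get("Contents", [])
          if not contents:
              print(f"{name}: empty bucket, candidate for deletion")
          else:
              newest = max(obj["LastModified"] for obj in contents)
              print(f"{name}: most recent object written {newest:%Y-%m-%d}")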

      Cloud Data Management for Bucket Sprawl

      In the blog post: Making Smarter Moves in a Multicloud World, Komprise CEO and cofounder Kumar Goswami introduced Komprise cloud data management capabilities this way:

      It gives customers a better way to manage their cloud data as it grows, (combat “bucket sprawl”), gives visibility into their cloud costs, and provides a simple way to manage data both on premises and in the cloud. Komprise now provides enterprises with actionable analytics to not only understand their cloud data costs but also optimize them with data lifecycle management.

      Learn more about Komprise cloud data management.

      Infographic: How to Maximize Cloud Cost Savings


  • C
    • Capacity Planning

      Capacity planning is the estimation of the space, hardware, software, and connection infrastructure resources that will be needed over a period of time. In the enterprise environment, there is a common concern over whether or not there will be enough resources in place to handle an increasing number of users or interactions. The purpose of capacity planning is to have enough resources available to meet the anticipated need, at the right time, without accumulating unused resources. The goal is to match resource availability to the forecasted need in the most cost-efficient manner for maximum data storage cost savings.

      True data capacity planning means being able to look into the future and estimate future IT needs and efficiently plan where data is stored and how it is managed based on the SLA of the data. Not only must you meet the future business needs of fast-growing unstructured data, you must also stay within the organization’s tight IT budgets. And, as organizations are looking to reduce operational costs with the cloud (see cloud cost optimization), deciding what data can migrate to the cloud, and how to leverage the cloud without disrupting existing file-based users and applications becomes critical.

      Data storage never shrinks, it just relentlessly gets bigger. Regardless of industry, organization size, or “software-defined” ecosystem, it is a constant stress-inducing challenge to stay ahead of the storage consumption rate. That challenge is not made any easier considering that typically organizations waste a staggering amount of data storage capacity, much of which can be attributed to improper capacity management.

      Are you making capacity planning decisions without insight?

      Komprise enables you to intelligently plan storage capacity, offset additional purchase of expensive storage, and extend the life of your existing data storage by providing visibility across your storage with key analytics on how data is growing and being used, and interactive what-if analysis on the ROI of using different data management objectives. Komprise moves data based on your objectives to secondary storage, object storage or cloud storage, of your choice while providing a file gateway for users and applications to transparently access the data exactly as before.

      With an analytics-first approach, Komprise provides visibility into how data is growing and being used across storage silos. Storage administrators and IT leaders no longer have to make storage capacity planning decisions without insight. With Komprise Intelligent Data Management, you’ll understand how much more storage will be needed, when and how to streamline purchases during planning.


    • Checksum

      A checksum is a calculated value that’s used in NAS data analytics to determine the integrity of data. The most commonly used checksum algorithm is MD5, which Komprise uses to manage chain of custody and integrity reporting per file.
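
      As a simple illustration (the file path is a placeholder), an MD5 checksum can be computed in Python and compared before and after a data migration to confirm that file contents are unchanged.

      import hashlib

      def md5_checksum(path: str, chunk_size: int = 1 << 20) -> str:
          """Compute the MD5 hex digest of a file, reading it in 1 MB chunks."""
          digest = hashlib.md5()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      print(md5_checksum("/mnt/share/projects/report.docx"))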

      Learn more about Komprise Elastic Data Migration for smart, fast and proven file and object data migrations.

      Learn tips on a clean cloud data migration on the Komprise blog.


    • Cloud Cost Optimization

      Cloud cost optimization is a process to reduce operating costs in the cloud while maintaining or improving the quality of cloud services. It involves identifying and addressing areas to reduce the use of cloud resources, select more cost-effective cloud services, or deploy better management practices, including data management.

      The cloud is highly flexible and scalable, but it also involves ongoing and sometimes hidden costs, including usage fees, egress fees, storage costs, and network fees. If not managed properly, these costs can quickly become a significant burden for organizations.

      In one of our 2023 data management predictions posts, we noted:

      Managing the cost and complexity of cloud infrastructure will be Job No. 1 for enterprise IT in 2023. Cloud spending will continue, although at perhaps a more measured pace during uncertain economic times. What will be paramount is to have the best data possible on cloud assets to make sound decisions on where to move data and how to manage it for cost efficiency, performance, and analytics projects. Data insights will also be important for migration planning, spend management (FinOps), and to meet governance requirements for unstructured data management. These are the trends we’re tracking for cloud data management, which will give IT directors precise guidance to maximize data value and minimize cloud waste.

      Source: ITPro-Today

      Steps to Optimize Cloud Costs

      To optimize cloud costs, organizations can take several steps, including:

      • Right-sizing: Choose the correct size and configuration of cloud resources to meet the needs of the application, avoiding overprovisioning or underprovisioning.
      • Resource utilization: Monitor the use of cloud resources to reduce waste and improve cost efficiency.
      • Cost allocation: Implement cost allocation and tracking practices to better understand cloud costs and improve accountability.
      • Reserved instances: Use reserved instances to reduce costs by committing to a certain level of usage for a longer term.
      • Cost optimization tools: These tools identify areas for savings and help manage cloud expenses.

      The Challenge of Managing Cloud Data

      Managing cloud data costs takes significant manual effort, multiple tools, and constant monitoring. As a result, companies are using less than 20% of the cloud cost-saving options available to them. “Bucket sprawl” makes matters worse, as users easily create accounts and buckets and fill them with data—some of which is never accessed again.

      When trying to optimize cloud data, cloud administrators contend with poor visibility and complexity of data management:

      • How can you know your cloud data?
      • How fast is cloud data growing and who’s using it?
      • How much is active vs. how much is cold?
      • How can you dig deeper to optimize across object sizes and storage classes?

      How can you make managing data and costs manageable?

      • It’s hard to decipher complicated cost structures.
      • You need more information to manage data better, e.g., when was an object last accessed?
      • Factoring in multiple billable dimensions and costs is extremely complex: storage, access, retrievals, API, transitions, initial transfer, and minimal storage-time costs.
      • There are unexpected costs of moving data across different storage classes (e.g., Amazon S3 Standard to S3 Glacier). If access isn’t continually monitored, and data is not moved back up when it gets hot, you will face expensive retrieval fees.

      These issues are further compounded as enterprises move toward a multicloud approach and require a single set of tools, policies, and workflows to optimize and manage data residing within and across clouds.

      Komprise Cloud Data Management

      Reduce cloud storage costs by more than 50% with Komprise.

      Cloud providers offer a range of storage services. Generally, there are storage classes with higher performance and costs for hot and warm data, such as Amazon S3 Standard and S3 Standard-IA, and there are storage classes with much lower performance and costs that are appropriate for cold data, such as S3 Glacier and S3 Glacier Deep Archive. Data access fees and retrieval fees for the lower-cost storage classes are much higher than those of the higher-performance, higher-cost storage classes. To maximize savings, you need an automated unstructured data management solution that takes into account data access patterns to dynamically and cost-optimally move data across storage classes (e.g., Amazon S3 Standard to S3 Standard-IA or S3 Standard-IA to S3 Glacier) and across multi-vendor storage services (e.g., NetApp Cloud Volumes ONTAP to Amazon S3 Standard to S3 Standard-IA to S3 Glacier to S3 Glacier Deep Archive). While some limited manual data movement through Object Lifecycle Management policies based on modified times or intelligent tiering is available from the cloud providers, these approaches offer limited savings and involve hidden costs.
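
      For reference, the kind of provider-side Object Lifecycle Management rule mentioned above can be expressed with the boto3 SDK roughly as follows (bucket name and prefix are placeholders); note that the rule transitions objects purely by age, not by whether they are actually still being accessed.

      import boto3

      s3 = boto3.client("s3")
      s3.put_bucket_lifecycle_configuration(
          Bucket="example-bucket",
          LifecycleConfiguration={
              "Rules": [
                  {
                      "ID": "tier-cold-project-data",
                      "Filter": {"Prefix": "projects/"},
                      "Status": "Enabled",
                      # Age-based transitions: Standard-IA after 30 days,
                      # Glacier after 180 days since the object was created.
                      "Transitions": [
                          {"Days": 30, "StorageClass": "STANDARD_IA"},
                          {"Days": 180, "StorageClass": "GLACIER"},
                      ],
                  }
              ]
          },
      )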

      Komprise automates full lifecycle management across multi-vendor cloud storage classes using intelligence from data usage patterns to maximize your savings without heavy lifting. Read the white paper to see how you can save 50% or more on cloud storage costs.

      Watch the video: How to save costs and manage your multi-cloud storage


    • Cloud Data Analytics

      Cloud data analytics refers to the use of cloud computing resources to process, analyze, and extract insights from large amounts of data. These solutions can include data warehousing, big data processing, machine learning, and business intelligence and can ingest a wide range of data, including structured, semi-structured, and unstructured data.

      Cloud data analytics can deliver an agile and lower-cost method to analyze large amounts of data quickly for a variety of business outcomes including operational improvements, customer behavior analysis, competitive analysis, R&D and more.

      Some of the leading cloud data analytics providers include Amazon Web Services, Google Cloud, Microsoft Azure, IBM and many early-stage venture-backed startups. One of the first cloud analytics vendors was LucidEra. These companies offer a range of cloud data analytics services and tools, including data warehousing, big data processing, machine learning, and business intelligence.

      Komprise Smart Data Workflows can be created to search and find the right unstructured data and automate the delivery of data to cloud analytics infrastructure.


    • Cloud Data Growth Analytics

      70% of data in most enterprise organizations is cold data that has not been accessed in months, yet it sits on expensive storage and consumes the same backup resources as hot data.

      50% of the 175 zettabytes of data worldwide in 2025 will be stored in public cloud environments. (IDC)

      80% of businesses will overspend their cloud infrastructure budgets due to a lack of cloud cost optimization. (Gartner)

      Komprise provides the visibility and analytics into cloud data that lets organizations understand data growth across their clouds and helps move cold data to optimize costs.


    • Cloud Data Management


      What is Cloud Data Management?

      Cloud data management is a way to manage data across cloud platforms, either with or instead of on-premises storage. A popular form of data storage management, its goal is to curb rising cloud data storage costs, but it can be a complicated pursuit, which is why many businesses employ an external company offering cloud data management services, with the primary goal being cloud cost optimization.

      Cloud data management is emerging as an alternative to data management using traditional on-premises software. The benefit of employing a top cloud data management company is that instead of buying and managing on-premises data storage resources, resources are bought on demand in the cloud. This cloud data management services model for cloud data storage allows organizations to receive dedicated data management resources on an as-needed basis. Cloud data management also involves finding the right data from on-premises storage and moving this data through data archiving, data tiering, data replication and data protection, or data migration to the cloud.

      Advantages of Cloud Data Management

      How to manage cloud storage? According to two 2023 surveys (here and here), 94% of respondents say they’re wasting money in the cloud, 69% say that data storage accounts for over one quarter of their company’s cloud costs and 94% said that cloud storage costs are rising. Optimal unstructured data management in the cloud provides four key capabilities that help to reduce cloud data storage costs:

      1. Gain Accurate Visibility Across Cloud Accounts into Actual Usage
      2. Forecast Savings and Plan Data Management Strategies for Cloud Cost Optimization
      3. Cloud Tiering and Archiving Based on Actual Data Usage to Avoid Surprises
        • For example, using last-accessed time vs. last-modified time provides a more predictable decision on the objects that will be accessed in the future, which avoids costly archiving errors (see the sketch after this list).
      4. Radically Simplify Cloud Migrations
        • Easily pick your source and destination
        • Run dozens or hundreds of migrations in parallel
        • Reduce the babysitting
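
      As a minimal sketch of the last-accessed vs. last-modified distinction (paths and thresholds are placeholders; note that some filesystems are mounted with noatime, so access times are not always reliable):

      import os
      import time

      COLD_AFTER_DAYS = 365

      def is_cold(path: str) -> bool:
          st = os.stat(path)
          days_since_access = (time.time() - st.st_atime) / 86400
          days_since_modify = (time.time() - st.st_mtime) / 86400
          # A file written years ago but read last week is still hot by access
          # time, even though modified time alone would mark it cold.
          return days_since_access > COLD_AFTER_DAYS and days_since_modify > COLD_AFTER_DAYS

      print(is_cold("/mnt/share/projects/old_model.bin"))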


      The many benefits of cloud data management services include speeding up technology deployment and reducing system maintenance costs; it can also provide increased flexibility to help meet changing business requirements.

      Challenges Faced with Enterprise Cloud Data Management

      But, like other cloud computing technologies, enterprise cloud data management services can introduce challenges – for example, data security concerns related to sending sensitive business data outside the corporate firewall for storage. Another challenge is the disruption to existing users and applications who may be using file-based applications on-premises, since the cloud is predominantly object-based.

      Cloud data management service solutions should provide you with options to eliminate this disruption by transparently moving and managing data across common formats such as file and object.


      Features of a Cloud Data Management Services Platform

      Some common features and capabilities cloud data management solutions should deliver:

      • Data Analytics: Can you get a view of all your cloud data, how it’s being used, and how much it’s costing you? Can you get visibility into on-premises data that you wish to migrate to the cloud? Can you understand where your costs are so you know what to do about them?
      • Planning and Forecasting: Can you set policies for how data should get moved either from one cloud storage class to another or from an on-premises storage to the cloud. Can you project your savings? Does this account for hidden fees like retrieval and egress costs?
      • Policy based data archiving, data replication, and data management: How much babysitting do you have to do to move and manage data? Do you have to tell the system every time something needs to be moved or does it have policy based intelligent automation?
      • Fast Reliable Cloud Data Migration: Does the system support migrating on-premises data to the cloud? Does it handle going over a Wide Area Network? Does it handle your permissions and access controls and preserve security of data both while it’s moving the data and in the cloud?
      • Intelligent Cloud Archiving, Intelligent Tiering and Data Lifecycle Management: Does the solution enable you to manage the ongoing data lifecycle in the cloud? Does it support the different cloud storage classes (e.g., high-performance options like file and cloud NAS and cost-efficient options like Amazon S3 and Glacier)?

      In practice, the design and architecture of a cloud varies among cloud providers. Service Level Agreements (SLA) represent the contract which captures the agreed upon guarantees between a service provider and its customers.

      It is important to consider that cloud administrators are responsible for factoring:

      • Multiple billable dimensions and costs: storage, access, retrievals, API, transitions, initial transfer, and minimal storage-time costs
      • Unexpected costs of moving data across different storage classes. Unless access is continually monitored and data is moved back up when it gets hot, you’ll face expensive retrieval fees.

      This complexity is the reason why a mere 20% of organizations are leveraging the cost-saving options available to them in the cloud.

      How do Cloud Data Management Services Tools work?

      As more enterprise data runs on public cloud infrastructure, many different types of tools and approaches to cloud data management have emerged. The initial focus has been on migrating and managing structured data in the cloud. Cloud data integration, ETL (extraction, transformation and loading), and iPaaS (integration platform as a service) tools are designed to move and manage enterprise applications and databases in the cloud. These tools typically move and manage bulk or batch data or real time data.

      Cloud-based analytics and cloud data warehousing have emerged for analyzing and managing hybrid and multi-cloud structured and semi-structured data, such as Snowflake and Databricks.

      In the world of unstructured data storage and backup technologies, cloud data management has been driven by the need for cost visibility, cost reduction, cloud cost optimization and optimizing cloud data. As file-level tiering has emerged as a critical component of an intelligent data management strategy and more file data is migrating to the cloud, cloud data management is evolving from cost management to automation and orchestration, governance and compliance, performance monitoring, and security. Even so, spend management continues to be a top priority for any enterprise IT organization migrating application and data workloads to the cloud.

      What are the challenges faced with Cloud Data Management security?

      Most of the cloud data management security concerns are related to general cloud computing security questions organizations face. It’s important to evaluate the strengths and security certifications of your cloud data management vendor as part of your overall cloud strategy.

      Is adoption of Cloud Data Management services growing?

      As enterprise IT organizations are increasingly running hybrid, multi-cloud, and edge computing infrastructure, cloud data management services have emerged as a critical requirement. Look for solutions that are open, cross-platform, and ensure you always have native access to your data. Visibility across silos has become a critical need in the enterprise, but it’s equally important to ensure data does not get locked into a proprietary solution that will disrupt users, applications, and customers. The need for cloud native data access and data mobility should not be underestimated. In addition to visibility and access, cloud data management services must enable organizations to take the right action in order to move data to the right place and the right time. The right cloud data management solution will reduce storage, backup and cloud costs as well as ensure a maximum return on the potential value from all enterprise data.

      How is Enterprise Cloud Data Management different from Consumer Systems?

      While consumers need to manage cloud storage, it is usually a matter of capacity across personal storage and devices. Enterprise cloud data management involves IT organizations working closely with departments to build strategies and plans that will ensure unstructured data growth is managed and data is accessible and available to the right people at the right time.

      Enterprise IT organizations are increasingly adopting cloud data management solutions to understand how cloud (typically multi-cloud) data is growing and manage its lifecycle efficiently across all of their cloud file and object storage options.

      Get More from your Cloud Storage Solution with Komprise

      • Get accurate analytics across clouds with a single view across all your users’ cloud accounts and buckets and save on storage costs with an analytics-driven approach.
      • Forecast cloud cost optimization by setting different data lifecycle policies based on your own cloud costs.
      • Establish policy-based multi-cloud lifecycle management by continuously moving objects by policy across storage classes transparently (e.g., Amazon Standard, Standard-IA, Glacier, Glacier Deep Archive).
      • Accelerate cloud data migrations with fast, efficient data migrations across clouds (e.g., AWS, Azure, Google and Wasabi) and even on-premises (ECS, IBM COS, Pure FlashBlade).
      • Deliver powerful cloud-to-cloud data replication by running, monitoring, and managing hundreds of migrations faster than ever at a fraction of the cost with Elastic Data Migration.
      • Keep your users happy with no retrieval fee surprises and no disruption to users and applications caused by poor data movement decisions based solely on when the data was created.

      An analytics-driven cloud data management platform like Komprise, named a Gartner Peer Insights Awards leader, can help you save 50% or more on your cloud storage costs.


      Learn more about your options for migrating file workloads to the cloud: The Easy, Fast, No Lock-In Path to the Cloud.

      What is Cloud Data Management?

      Cloud Data Management is a way to analyze, manage, secure, monitor and move data across public clouds. It works either with, or instead of on-premises applications, databases, and data storage and typically offers a run-anywhere platform.

      Cloud Data Management Services

      Cloud data management is typically overseen by a vendor that specializes in data integration, database, data warehouse or data storage technologies. Ideally the cloud data management solution is data agnostic, meaning it is independent from the data sources and targets it is monitoring, managing and moving. Benefits of an enterprise cloud data management solution include ensuring security, large savings, backup and disaster recovery, data quality, automated updates and a strategic approach to analyzing, managing and migrating data.

      Cloud Data Management platform

      Cloud data management platforms are cloud-based hubs that analyze and offer visibility and insights into an enterprise’s data, whether the data is structured, semi-structured or unstructured.

      Getting Started with Komprise:

    • Cloud Data Migration

      What is Cloud Data Migration?

      Cloud data migration is the process of relocating either all or a part of an enterprise’s data to a cloud infrastructure. Cloud data migration is often the most difficult and time-consuming part of an overall cloud migration project. Other elements of cloud migration involve application migration and workflow migration. A “smart data migration” strategy for enterprise file data takes an analytics-first approach, ensuring you know which data can migrate, to which class and tier, and which data should stay on-premises in your hybrid cloud storage infrastructure. Komprise Elastic Data Migration makes cloud data migrations simple, fast and reliable with continuous data visibility and optimization.

      The Komprise Smart Data Migration Strategy

      Learn more about Komprise Smart Data Migration for file and object data.

      Read the blog post: Smart Data Migration for File and Object Data Workloads

      Cost, Complexity and Time:
      Why Cloud Data Migrations are Difficult

      Cloud data migrations are usually the most laborious and time-consuming part of a cloud migration initiative. Why? Data is heavy – data footprints are often in hundreds of terabytes to petabytes and can involve billions of files and objects. Some key reasons why cloud data migrations fail include:

      • Lack of Proper Planning: Often cloud data migrations are done in an ad-hoc fashion without proper analytics on the data set and planning
      • Improper Choice of Cloud Storage Destination: Most public clouds offer many different classes and tiers of storage – each with their own costs and performance metrics. Also, many of the cloud storage classes have retrieval and egress costs, so picking the right cloud storage class for a data migration involves not just finding the right performance and price to store the data but also the right access costs. Intelligent tiering and Intelligent archiving techniques that span both cloud file and object storage classes are important to ensure the right data is in the right place at the right time.
      • Ensuring Data Integrity: Data migrations involve migrating the data along with migrating metadata. For a cloud data migration to succeed, not only should all the data be moved over with full fidelity, but all the access controls, permissions, and metadata should also move over. Often, this is not just about moving data but mapping these from one storage environment to another.
      • Downtime Impact: Cloud data migrations can often take weeks to months to complete. Clearly, you don’t want users to lose access to the data they need for this entire time. Minimizing downtime, even during a cutover, is very important to reduce the productivity impact.
      • Slow Networks, Failures: Often cloud data migrations are done over a Wide Area Network (WAN), which can have other data moving on it and hence deliver intermittent performance. Plus, there may be times when the network is down or the storage at either end is unavailable. Handling all these edge conditions is extremely important – you don’t want to be halfway through a month-long cloud data migration only to encounter a network failure and have to start all over again.
      • Time-Consuming: Since cloud data migrations involve moving large amounts of data, they often require a great deal of laborious, tedious manual effort to manage.
      • Sunk Costs: Cloud data migrations are often time-bound projects – once the data is migrated, the project is complete. So, if you invest in tools to address cloud data migrations, you may have sunk costs once the cloud data migration is complete.


      Cloud data migrations can involve Network Attached Storage (NAS) or file data, object data, or block data. Of these, cloud data migrations of file data and of object data are particularly difficult and time-consuming because file and object data are much larger in volume.

      • To learn more about the seven reasons why cloud data migrations are dreaded, watch the webinar.
      • Learn more about why Komprise is the fast, no lock-in approach to unstructured cloud data migrations: Path to the cloud.

      Cloud Data Migration Strategies

      Different cloud data migration strategies are used depending on whether file data or object data need to be migrated. Common methods for moving these two types of data through cloud migration solutions are described in further detail below.

      Cloud Data Migration for File Data aka NAS Cloud Data Migrations


      File data is often stored on Network Attached Storage. File data is typically accessed over NFS and SMB protocols. File data can be particularly difficult to migrate because of its size, volume, and richness. File data often involves a mix of large and small files – data migration techniques often do better when migrating large files but fail when migrating small files. Data migration solutions need to address a mix of large and small files and handle both efficiently. File data is also voluminous – often involving billions of files. Reliable cloud data migration solutions for file data need to be able to handle such large volumes of data efficiently. File data is also very rich and has metadata, access control permissions and hierarchies. A good file data migration solution should preserve all the metadata, access controls and directory structures. Often, migrating file data involves mapping this information from one file storage format to another. Sometimes, file data may need to be migrated to an object store. In these situations, the file metadata needs to be preserved in the object store so the data can be restored as files at a later date. Techniques such as MD5 checksums are important to ensure the data integrity of file data migrations to the cloud.
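      To make the metadata-preservation point concrete, here is a minimal, hedged sketch in Python using boto3 (an assumption; any S3-compatible SDK behaves similarly). It captures basic POSIX attributes with os.stat() and stores them as user-defined object metadata when a file is uploaded to an object store, so the file could later be restored with its original attributes. The bucket, key and path names are placeholders, and this illustrates the general technique rather than any particular vendor's implementation.

```python
import os
import boto3  # assumes the AWS SDK for Python is installed and credentials are configured

def upload_file_with_metadata(local_path: str, bucket: str, key: str) -> None:
    """Upload a file to S3-compatible object storage, preserving basic POSIX metadata."""
    st = os.stat(local_path)
    # S3 user metadata values must be strings; keep just enough to restore the file later.
    metadata = {
        "original-mode": oct(st.st_mode),
        "original-uid": str(st.st_uid),
        "original-gid": str(st.st_gid),
        "original-mtime": str(st.st_mtime),
    }
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key, ExtraArgs={"Metadata": metadata})

# Example with placeholder names:
# upload_file_with_metadata("/mnt/nas/project/report.docx", "my-archive-bucket", "project/report.docx")
```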

      Cloud Data Migration for Object Data (S3 Data Migrations or Object-to-Cloud Data Migrations or Cloud-to-Cloud Data Migrations)

      Cloud data migrations of object data are relatively new but quickly gaining momentum as the majority of enterprises move to a multi-cloud architecture. The Amazon Simple Storage Service (S3) protocol has become a de facto standard for object stores and public cloud providers, so most cloud data migrations of object data are S3-based data migrations.

      3 common use cases for cloud object data migrations:
      • Data migrations from an on-premises object store to the public cloud: Many enterprises have adopted an on-premises object store. Most of these object storage solutions follow the S3 protocol. Customers are now looking to analyze data on their on-premises object storage and migrate some or all of that data to a public cloud storage option such as Amazon S3 or Microsoft Azure Blob.
      • Cloud-to-cloud data migrations and cloud-to-cloud data replications: Enterprises looking to switch public cloud providers need to migrate data from one cloud to another. Sometimes, it may also be cost-effective to replicate across clouds as opposed to replicating within a cloud. This also improves data resiliency and provides enterprises with a multi-cloud strategy. Cloud-to-cloud data replication differs from cloud data migration because it is ongoing – as data changes on one cloud, it is copied or replicated to the second cloud.
      • S3 data migrations: This is a generic term that refers to any object or cloud data migration done using the S3 protocol. The Amazon Simple Storage Service (S3) protocol has become a de facto standard, so any object-to-cloud, cloud-to-cloud or cloud-to-object migration can typically be classified as an S3 data migration (a minimal sketch appears below).
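      Because so many object stores and clouds speak the S3 protocol, the same API calls can drive an on-premises-to-cloud, cloud-to-cloud or object-to-cloud migration. The sketch below is a minimal illustration in Python using boto3; the endpoint URL, bucket names and credentials are placeholder assumptions, and a production migration tool would add retries, parallelism and checksum verification.

```python
import boto3  # assumes boto3 is installed and credentials for both endpoints are configured

# Separate clients let the source be any S3-compatible endpoint, such as an on-premises object store.
source = boto3.client("s3", endpoint_url="https://objects.example.internal")  # placeholder endpoint
destination = boto3.client("s3")  # defaults to the public cloud's S3 endpoint

def copy_objects(src_bucket: str, dst_bucket: str, prefix: str = "") -> None:
    """Stream every object under a prefix from the source endpoint to the destination bucket."""
    paginator = source.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = source.get_object(Bucket=src_bucket, Key=obj["Key"])["Body"]
            destination.upload_fileobj(body, dst_bucket, obj["Key"])

# copy_objects("onprem-archive", "cloud-archive")  # placeholder bucket names
```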


      Secure Cloud Data Migration Tools

      Cloud data migrations can be performed by using free tools that require extensive manual involvement or commercial data migration solutions. Sometimes Cloud Storage Gateways are used to move data to the cloud, but these require heavy hardware and infrastructure setup. Cloud data management solutions offer a streamlined, cost-effective, software-based approach to manage cloud data migrations without requiring expensive hardware infrastructure and without creating data lock-in. Look for elastic data migration solutions that can dynamically scale to handle data migration workloads and adjust to your demands.

      7 Tips for a Clean Cloud Data Migration:
      1. Define Sources and Targets
      2. Know the Rules & Regulations
      3. Proper Data Discovery
      4. Define Your Path
      5. Test, Test, Test
      6. Free Tools vs. Enterprise
      7. Establish a Communication Plan

      Watch the webinar: Preparing for a Cloud File Data Migration

      What is a Smart Data Migration?

      Know your cloud data migration choices for file and object data migration.


      Getting Started with Komprise:

    • Cloud Data Storage

      Cloud data storage is a service for individuals or organizations to store data through a cloud computing provider such as AWS, Azure, Google Cloud, IBM or Wasabi. Storing data in a cloud service eliminates the need to purchase and maintain data storage infrastructure, since infrastructure resides within the data centers of the cloud IaaS provider and is owned/managed by the provider. Many organizations are increasing data storage investments in the cloud for a variety of purposes including: backup, data replication and data protection, data tiering and archiving, data lakes for artificial intelligence (AI) and business intelligence (BI) projects, and to reduce their physical data center footprint. As with on-premises storage, you have different levels of data storage available in the cloud. You can segment data based on access tiers: for instance, hot and cold data storage.


      Types of Cloud Data Storage

      Cloud data storage can either be designed for personal data and collaboration or for enterprise data storage in the cloud. Examples of personal cloud data storage are Google Drive, Box and Dropbox.

      Increasingly, corporate data storage in the cloud is gaining prominence – particularly around taking enterprise file data that was traditionally stored on Network Attached Storage (NAS) and moving that to the cloud.

      Cloud file storage and object storage are gaining adoption as they can store petabytes of unstructured data for enterprises cost-effectively.

      Enterprise Cloud Data Storage for Unstructured Data

      (Cloud File Data Storage and Cloud Object Data Storage)

      Enterprise unstructured data growth is exploding – whether it’s genomics data, video and media content, log files or IoT data. Unstructured data can be stored as files on file data storage or as objects on cost-efficient object storage. Cloud storage providers now offer a variety of file and object storage classes at different price points to accommodate unstructured data. Amazon EFS, Amazon FSx and Azure Files are examples of cloud data storage for enterprise file data, and Amazon S3, Azure Blob and Amazon Glacier are examples of object storage.

      Advantages of Cloud Data Storage

      There are many benefits of investing in cloud data storage, particularly for unstructured data in the enterprise. Organizations gain access to unlimited resources, so they can scale data volumes as needed and decommission instances at the end of a project or when data is deleted or moved to another storage resource. Enterprise IT teams can also reduce dependence on hardware and have a more predictable storage budget. However, without proper cloud data management, cloud egress costs and other cloud costs are often cited as challenges.

      In summary, cloud data storage allows you to:
      • Reduce capital expenses (CAPEX) for data center hardware, along with savings in energy, facility space and staff hours spent maintaining and installing hardware.
      • Deliver vastly improved agility and scalability to support rapidly changing business needs and initiatives.
      • Develop an enterprise-wide data lake strategy that would otherwise be unaffordable.
      • Lower risks from storing important data on aging physical hardware.
      • Leverage cheaper cloud storage for archiving and tiering purposes, which can also reduce backup costs.

      Challenges and Considerations
      • Cloud data storage can be costly if you need to frequently access the data for use outside of the cloud, due to egress fees charged by cloud storage providers.
      • Using cloud tiering methodologies from on-premises storage vendors may result in unexpected costs, due to the need for restoring data back to the storage appliance prior to use. Read the white paper Cloud Tiering: Storage-Based vs. Gateways vs. File-Based
      • Moving data between clouds is often difficult, because of data translation and data mobility issues with file objects. Each cloud provider uses different standards and formats for data storage.
      • Security can be a concern, especially in some highly regulated sectors such as healthcare, financial services and e-commerce. IT organizations will need to fully understand the risks and methods of storing and protecting data in the cloud.
      • The cloud creates another data silo for enterprise IT. When adding cloud storage to an organization’s storage ecosystem, IT will need to determine how to attain a central, holistic view of all storage and data assets.

      For these reasons, cloud optimization and cloud data management are essential components of an enterprise cloud data storage and overall data storage cost savings strategy. Komprise has strategic alliance partnerships with hybrid and cloud data storage technology leaders.

      Learn more about your options for migrating file workloads to the cloud: The Easy, Fast, No Lock-In Path to the Cloud.

      Getting Started with Komprise:

    • Cloud File Storage

      What is Cloud File Storage?

      Cloud File Storage, also known as Cloud NAS, is a method for storing data in the cloud that provides servers and applications access to data through file system protocols such as NFS and SMB. Cloud file storage allows customers to move file-based workloads to the cloud without code changes.

      Popular choices for cloud file storage are Amazon FSx for Windows File Server, Amazon FSx for NetApp ONTAP, Amazon FSx for OpenZFS, Microsoft Azure Files, Google Filestore, and Qumulo.

      In late 2021, Komprise COO Krishna Subramanian predicted that cloud file storage will accelerate.

      She wrote:

      First, it was cloud-native applications, then block workloads, but now it’s time for file workloads to move to the cloud. Explosive growth in unstructured file data has led to data centers bursting at the seams. Covid-19 has accelerated the shift to cloud for file workloads.

      Data management solutions are also enabling smart file migrations so that hot data is placed in cloud file storage and cold data is transparently and efficiently tiered at the file level to object storage. This means that customers can use data from both the file and object tiers. Another approach many vendors are taking is to provide cloud-like economics and pricing while the infrastructure remains on-premises — HPE Greenlake and Pure as a Service are examples of this trend.


      Getting Started with Komprise:

    • Cloud Migration


      Cloud migration refers to the movement of data, processes, and applications from on-premises data storage or legacy infrastructure to cloud-based infrastructure for storage, application processing, data archiving and ongoing data lifecycle management. Komprise offers an analytics-driven cloud migration software solution – Elastic Data Migration – that integrates with most leading cloud service providers, such as AWS, Microsoft Azure, Google Cloud, Wasabi, IBM Cloud and more.

      Benefits of Cloud Migration

      Migrating to the cloud can offer many advantages – lower operational costs, greater elasticity, and flexibility. Migrating data to the cloud in a native format also ensures you can leverage the computational capabilities of the cloud and not just use it as a cheap storage tier. When migrating to the cloud, you need to consider both the application as well as its data. While application footprints are generally small and relatively easier to migrate, cloud file data migrations need careful planning and execution as data footprints can be large. Cloud migration of file data workloads with Komprise allows you to:

      • Plan a data migration strategy using analytics before migration. A pre-migration analysis helps you identify which files need to be migrated and plan how to organize the data to maximize the efficiency of the migration process. It’s important to know how data is used and to determine how large and how old files are throughout the storage system. Since data footprints often reach billions of files, planning a migration is critical.
      • Improve scalability with Elastic Data Migration. Data migrations can be time consuming as they involve moving hundreds of terabytes to petabytes of data. Since the storage that data is migrating from is usually still in use during the migration, the data migration solution needs to move data as fast as possible without slowing down user access to the source storage. This requires a scalable architecture that can leverage the inherent parallelism of the data sets to migrate multiple data streams in parallel without overburdening any single source storage (see the sketch after this list). Komprise uses a patented elastic data migration architecture that maximizes parallelism while throttling back as needed to preserve source data storage performance.
      • Shrink cloud migration time. When compared to generic tools used across heterogeneous cloud and physical storage, Komprise cloud data migration is nearly 30x faster. Performance is maximized at every level with the auto parallelize feature, minimizing network usage and making migration over WAN more efficient.


      • Reduce ongoing cloud data storage costs with smart migration, intelligent tiering and data lifecycle management in the cloud. Migrating to the cloud can reduce the amount spent on IT needs, storage maintenance, and hardware upgrades as these are typically handled by the cloud provider. Most clouds provide multiple storage classes at different price points – Komprise intelligently moves data to the right storage class in the cloud based on your policy and performs ongoing data lifecycle management in the cloud to reduce storage cost.  For example, for AWS, unlike cloud intelligent tiering classes, Komprise tiers across both S3 and Glacier storage classes so you get the best cost savings.
      • Simplify storage management. With a Komprise cloud migration, you can use a single solution across your multivendor storage and multicloud architectures. All you have to do is connect via open standards – pick the SMB, NFS, and S3 sources along with the appropriate destinations and Komprise handles the rest. You also get a dashboard to monitor and manage all of your migrations from one place. No more sunk costs of point migration tools because Komprise provides ongoing data lifecycle management beyond the data migration.
      • Greater resource availability. Moving your data to the cloud allows it to be accessed from wherever users may be, making it easier for international businesses to store and access their data from around the world. Komprise delivers native data access so you can directly access objects and files in the cloud without getting locked into your NAS vendor—or even into Komprise.
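      As a rough illustration of the multi-stream parallelism described in the list above (an illustration only, not Komprise's implementation), the sketch below copies a directory tree with a bounded pool of parallel workers so a single source share is never overwhelmed. The paths and worker count are assumptions.

```python
import shutil
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor, as_completed

def migrate_tree(source_root: str, target_root: str, max_workers: int = 8) -> None:
    """Copy a directory tree file by file using a bounded pool of parallel workers."""
    src, dst = Path(source_root), Path(target_root)
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(path: Path) -> Path:
        relative = path.relative_to(src)
        target = dst / relative
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, target)  # copy2 also preserves timestamps and permission bits
        return relative

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(copy_one, p) for p in files]
        for future in as_completed(futures):
            print(f"migrated {future.result()}")

# migrate_tree("/mnt/nas/projects", "/mnt/cloud_target/projects")  # placeholder paths
```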

      Cloud Migration Process

      The cloud data migration process can differ widely based on a company’s storage needs, business model, environment of current storage, and goals for the new cloud-based system. Below are the main steps involved in migrating to the cloud.

      Step 1 – Analyze Current Storage Environment and Create Migration Strategy

      A smooth migration to the cloud requires proper planning to ensure that all bases are covered before the migration begins. It’s important to understand why the move is beneficial and how to get the most out of the new cloud-based features before the process continues.

      Step 2 – Choose Your Cloud Deployment Environment

      After taking a thorough look at the current resource requirements across your storage system, you can choose who will be your cloud storage provider(s). At this stage, it’s decided which type of hardware the system will use, whether it’s used in a single or multi-cloud solution, and if the cloud solution will be public or private.

      Step 3 – Migrate Data and Applications to the Cloud

      Application workload migration to the cloud can be done through generic tools. However, since data migration involves moving petabytes of data and billions of files, you need a data management software solution that can migrate data efficiently in a number of ways, including over a public internet connection or a private connection (LAN or WAN).

      Step 4 – Validate Data After Migration

      Once the migration is complete, the data within the cloud can be validated and production access to the storage system can be swapped from on-premises to the cloud. Data validation often requires an MD5 checksum on every file to ensure the integrity of the data is intact after migration.
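      As a hedged sketch of that validation step, the code below computes an MD5 digest for a source file and its migrated copy and reports whether they match. The paths are placeholders; for single-part uploads to S3-compatible storage the object's ETag is typically the MD5 of its contents and can be compared the same way, though multipart uploads use a different ETag scheme.

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Stream a file through MD5 in chunks so large files never have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_copy(source_path: str, target_path: str) -> bool:
    """Return True when the source file and its migrated copy have identical MD5 checksums."""
    return md5_of(source_path) == md5_of(target_path)

# Example with placeholder paths:
# assert validate_copy("/mnt/nas/data/sample.bin", "/mnt/cloud_target/data/sample.bin")
```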

      Komprise Cloud Data Migration

      With Elastic Data Migration from Komprise, you can affordably run and manage hundreds of migrations across many different platforms simultaneously. Gain access to a full suite of high-speed cloud migration tools from a single dashboard that takes on the heavy lifting of migrations and moves your data nearly 30x faster than traditionally available services—all without any access disruption to users or apps.

      Our team of cloud migration professionals, with over two decades of experience developing efficient IT solutions, has helped businesses around the world achieve faster and smoother data migrations with total confidence and none of the headaches. Contact us to learn more about our cloud data migration solution or sign up for a free trial to see the benefits beyond data migration with our analytics-driven Intelligent Data Management solution.

      Learn more about your options for migrating file workloads to the cloud: The Easy, Fast, No Lock-In Path to the Cloud.


      Getting Started with Komprise:

    • Cloud NAS


      What is Cloud NAS?

      Cloud NAS is a relatively new term – it refers to a cloud-based storage solution to store and manage files. Cloud NAS or cloud file storage is gaining prominence and several vendors have now released cloud NAS offerings.

      What is NAS?

      Network Attached Storage (NAS) refers to data storage that can be accessed from different devices over a network. NAS environments have gained prominence for file-based workloads because they provide a hierarchical structure of directories and folders that makes it easier to organize and find files. Many enterprise applications today are file-based, and use files stored in a NAS as their data repositories.

      Access Protocols

      Cloud NAS storage is accessed via the Server Message Block (SMB) and Network File System (NFS) protocols. On-premises NAS environments are also accessed via SMB and NFS.

      Why is Cloud NAS gaining in importance?

      While the cloud was initially used by DevOps teams for new cloud-native applications that were largely object-based, the cloud is now seen as a major destination for core enterprise applications. These enterprise workloads are largely file-based, and so moving them to the cloud without rewriting the application means file-based workloads need to be able to run in the cloud.

      To address this need, both cloud vendors and third-party storage providers are now creating cloud-based NAS offerings, such as AWS EFS and NetApp Cloud Volumes ONTAP.

      Cloud NAS Tiers

      Cloud NAS storage is often designed for high-performance file workloads, and its high-performance flash tier can be very expensive.

      Many cloud NAS offerings, such as AWS EFS and NetApp Cloud Volumes ONTAP, do offer some less expensive file tiers, but putting data in these lower tiers requires a data management solution. As an example, the standard tier of AWS EFS is 10 times more expensive than the standard tier of AWS S3. Furthermore, when you use a cloud NAS, you may also have to replicate and back up the data, which can often make it three times more expensive. As this data becomes inactive and cold, it is very important to manage the data lifecycle on the cloud NAS to ensure you are only paying for what you use and not for dormant cold data on expensive tiers.
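      To make the cost gap concrete, here is a back-of-the-envelope comparison. The per-gigabyte prices below are illustrative assumptions chosen to reflect the roughly 10x difference described above; actual list prices vary by provider, region and redundancy option.

```python
# Illustrative monthly cost comparison; all prices are assumptions, not quotes.
data_tb = 100
file_tier_per_gb = 0.30    # assumed cloud NAS standard-tier price, $/GB-month
object_tier_per_gb = 0.03  # assumed object storage standard-tier price, $/GB-month

gb = data_tb * 1024
file_cost = gb * file_tier_per_gb
object_cost = gb * object_tier_per_gb

print(f"{data_tb} TB on the file tier:   ${file_cost:,.0f}/month")
print(f"{data_tb} TB on the object tier: ${object_cost:,.0f}/month")
print(f"cost ratio: {file_cost / object_cost:.0f}x")
```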

      Intelligent Data Archiving and Intelligent Data Tiering for Cloud NAS

      An analytics-driven unstructured data management solution can help you get the right data onto your cloud NAS and keep your cloud NAS costs low by managing the data lifecycle with intelligent archiving and intelligent tiering.

      As an example, Komprise Intelligent Data Management for multi-cloud does the following:

      • Analyzes your on-premises NAS data so you can pick the data sets you want to migrate to the cloud
      • Migrates on-premises NAS data to your cloud NAS with speed, reliability and efficiency
      • Analyzes data on your cloud NAS to show you how data is getting cold and inactive
      • Enables policy-based automation so you can decide when data should be archived and tiered from expensive Cloud NAS tiers to lower cost file or object classes
      • Monitors ongoing costs to ensure you avoid expensive retrieval fees when cold data becomes hot again
      • Eliminates expensive backup and DR costs of cold data on cloud NAS

      Cloud NAS Migration


      There are many potential advantages to migrating your NAS to the cloud. But the right approach to cloud data migration is essential. Some of the common cloud NAS migration challenges are outlined in this post: Eliminating the Roadblocks of Cloud Data Migrations for File and NAS Data. Avoid unstructured data migration challenges and pitfalls with an analytics-first approach to cloud data migration and unstructured data management. With Komprise Elastic Data Migration you will:

      • Know before you migrate – analytics drive the most cost-effective plans
      • Preserve data integrity – maintain metadata, run MD5 checksums
      • Save time and costs – multi-level parallelism provides elastic scaling
      • Be worry-free – built for petabyte-scale that ensures reliability
      • Migrate NFS data 27x faster and SMB data 25x faster – forget slow, free tools that need babysitting

      Get the fast, no lock-in path to the cloud with a unified platform for unstructured data migration.



      Getting Started with Komprise:

    • Cloud Object Storage

      What is Cloud Object Storage?

      Cloud object storage is a type of cloud data storage that is designed to store and manage large amounts of unstructured data in the cloud. Unlike file-based storage systems, cloud object storage services are based on a simple key-value model that allows data to be stored and retrieved based on unique identifiers (or keys) that are associated with each piece of data.
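      A minimal sketch of the key-value model, using Python with boto3 against a placeholder bucket (an assumption; any S3-compatible SDK behaves the same way): each object is written and read back by its key, and "folders" are simply key prefixes.

```python
import boto3  # assumes boto3 is installed and cloud credentials are configured

s3 = boto3.client("s3")
bucket = "example-unstructured-data"  # placeholder bucket name

# Store: each object is addressed by a unique key; there is no real directory hierarchy.
s3.put_object(Bucket=bucket, Key="videos/2024/keynote.mp4", Body=b"<video bytes>")

# Retrieve: look the object up again by the same key.
response = s3.get_object(Bucket=bucket, Key="videos/2024/keynote.mp4")
data = response["Body"].read()
```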

      Also see Object Storage.

      Cloud object storage is ideal for storing documents, images, videos, and other unstructured data types that don’t fit neatly into a structured (relational) database. Cloud object storage systems are designed to be highly scalable and can store large data sets, making them well-suited for big data applications and use cases such as backup and archiving, content distribution, and data analytics.

      Examples of Cloud Object Storage

      Some examples of cloud object storage include Amazon S3, Microsoft Azure Blob Storage, Google Cloud Storage, and IBM Cloud Object Storage Services. These cloud object storage services offer a range of features such as data durability and availability, built-in encryption, and flexible data access controls, as well as APIs and integrations for developers to easily incorporate object storage into their applications.

      Komprise TMT: Cloud File and Object Duality

      One of the core components of the Komprise Intelligent Data Management Platform is the patented Transparent Move Technology. When Komprise tiers files to a new target, typically object storage like AWS S3 or Azure Blob, moved files remain in native form, which means when a file becomes an object, a user sees it as a file. In addition to no end user disruption, preserving duality of file and object data across silos enables native cloud services on the data and ensures your data is not locked into a proprietary storage vendor format. This approach also ensures that hot data at the original source is handled by that storage vendor for optimal performance.

      In an interview, CEO and co-founder Kumar Goswami put it this way:

      Without using any agents, you can tier the data to the cloud and still access it from the original source as if it had never moved AND access it as a native object in the cloud to leverage cloud services like AI/ML cloud applications. This file to object duality, without agents, without getting in front of hot, mission-critical data is something no one else can tout.

      Komprise partners with cloud object storage vendors to deliver data-storage agnostic unstructured data management as a service.

      Getting Started with Komprise:

    • Cloud Storage Gateway

      A cloud storage gateway is a hardware or software appliance that serves as a bridge between local applications and remote cloud-based storage.

      A cloud storage gateway provides basic protocol translation and simple connectivity to allow incompatible technologies to communicate. The gateway may be hardware or a virtual machine (VM) image.

      The requirement for a gateway between cloud storage and enterprise applications became necessary because of the incompatibility between protocols used for public cloud technologies and legacy storage systems. Most public cloud providers rely on Internet protocols, usually a RESTful API over HTTP, rather than conventional storage area network (SAN) or network-attached storage (NAS) protocols.

      Gateways can also be used for archiving in the cloud. This pairs with automated storage tiering, in which data can be replicated between fast, local disk and cheaper cloud storage to balance space, cost, and data archiving requirements.

      The challenge with traditional cloud storage gateways, which front the cloud with on-premises hardware and use the cloud like another storage silo, is that the cloud is very expensive for hot data that tends to be frequently accessed, resulting in high retrieval costs. Read the blog post: Are Cloud Storage Gateways a Good Choice for Cloud Data Migrations?


      Cloud Storage Gateway versus File-Level Cloud Tiering

      Cloud storage gateways create a new appliance (virtual or physical) that acts as your storage at each site to cache data locally and put a golden copy in the cloud. They are useful when you are doing active file collaboration across multiple sites and do not have NAS at branch sites or do not want to use your existing NAS. But, they do not leverage existing data storage investments and require data to be moved to the gateway which creates additional infrastructure costs. Cloud storage gateways store data in the cloud in their proprietary format. Similar to storage-based cloud tiering, cloud storage gateways create proprietary lock-in and unnecessary cloud gateway costs in perpetuity. And they also typically create additional on-premises costs.

      Cloud Storage Gateways: Additional On-Premises Infrastructure

      Cloud storage gateways are typically hardware-based since they have to serve hot data from the cache. Many vendors also offer virtual appliance options for smaller deployments.

      Duplication of Data in the Cloud

      Cloud storage gateways typically put all the data in the cloud and then cache some data locally. So, if you are using a cloud storage gateway for 100TB, then all 100TB of data is in the cloud and a subset of it (maybe 20TB or 30TB) is also cached locally. This means you may need 130TB of infrastructure to house 100TB of data. Depending on the size of the local cache, this may be larger.

      Cloud Storage Gateways: A New Storage Silo

      A cloud storage gateway is a new storage infrastructure silo that caches some data locally and keeps all of the data in the cloud. It replaces your existing NAS. It does not work with it. It is a rip-and-replace approach.

      Cloud Storage Gateway Licensing Charges to Access Data in the Cloud

      Cloud storage gateways lock data in the cloud with their proprietary format. This means you cannot directly access your data in the cloud—data access needs to be through the gateway software in the cloud. Many customers are surprised to learn they have to pay gateway licensing costs even to access data in the cloud, and this cost continues as long as you need your data. This lock-in limits flexibility and creates unnecessary cloud expenses. It also limits your use of the cloud as you cannot natively access your data without the gateway software.

      Assuming $700/TB/yr. of cloud storage gateway licensing costs, cloud storage gateways have 287% higher annual costs than using a file-level data management solution with the cloud. This is a recurring cost that you pay for over the lifetime of your data!

      This table summarizes the common cloud data migration requirements and the differences between Komprise Elastic Data Migration and Cloud Storage Gateways.

      Getting Started with Komprise:

    • Cloud Tiering

      What is Cloud Tiering?

      Cloud tiering definition: Cloud tiering is increasingly becoming a critical capability in managing enterprise file workloads across the hybrid cloud. Cloud tiering (also referred to as cloud archiving or archiving to the cloud) refers to techniques that offload less frequently used data, also known as cold data, from expensive on-premises file storage or Network Attached Storage (NAS) to cheaper levels of storage in the cloud, typically object storage classes such as Amazon S3. Cloud tiering is a variant of data tiering. The term “data tiering” arose from moving data around different tiers or classes of storage within a storage system, but has expanded to mean tiering or archiving data from a storage system to other clouds and storage systems.

      Cloud Tiering Transparently Extends Enterprise File Storage to the Cloud

      Enterprises today are increasingly trying to move core file workloads to the cloud. Since file data can be voluminous, involving billions of files, migrating file data to the cloud can take months and create disruption.

      A simple solution to this is to gradually offload files to the cloud (cloud tiering) without changing the end user experience. Cloud tiering (or archiving to specific cloud tiers) enables this by moving infrequently used cold data to a cheaper cloud storage tier, while the data continues to remain accessible from the original location. This enables users to transparently extend on-premises capacity with the cloud.

      Cloud Tiering Can Yield Significant Savings If Done Correctly

      Cloud object storage is cost-efficient if used correctly. Most cloud providers charge not only for the storage, but also to retrieve data, and they charge egress fees if the data has to leave the cloud. Cloud retrieval fees are usually in the form of charges for “get” and “put” API calls and cloud egress costs are charged by the amount of data that is read from anywhere outside the cloud. So, to keep enterprise storage costs low, infrequently accessed data such as snapshots, logs, backups and cold data are best suited for tiering to the cloud.

      By tiering cold data to the cloud, the on-premises storage array needs to only keep hot data and the most recent logs and snapshots. Across Komprise customers, we have found that typically 60% to 80% of their actual data has not been accessed in over a year. By cloud tiering the cold data as well as older log files and snapshots, the capacity of the storage array, mirrored storage array (if mirroring/replication is being used) and backup storage is reduced dramatically. This is why tiering cold data can reduce the overall storage cost by as much as 70% to 80%.
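      A back-of-the-envelope version of that math, with every figure an assumption for illustration: if 80% of a 100 TB array is cold and tiered to low-cost object storage, the primary array, its mirror and its backup copies only have to carry the remaining hot 20 TB.

```python
# Illustrative estimate only; capacities, cold-data share and prices are assumptions.
total_tb = 100
cold_fraction = 0.80          # assumed share of data not accessed in over a year
primary_cost_per_tb = 1000    # assumed $/TB-year for NAS including mirror and backup copies
object_cost_per_tb = 120      # assumed $/TB-year for a low-cost object storage tier

before = total_tb * primary_cost_per_tb
hot_tb = total_tb * (1 - cold_fraction)
cold_tb = total_tb * cold_fraction
after = hot_tb * primary_cost_per_tb + cold_tb * object_cost_per_tb

print(f"before tiering: ${before:,.0f}/year")
print(f"after tiering:  ${after:,.0f}/year")
print(f"savings:        {1 - after / before:.0%}")
```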

      The many advantages of cloud tiering of cold data include:

      • Reduced storage acquisition costs. Flash storage, used for fast access to hot data, is expensive. By tiering off infrequently used data you can purchase a much smaller amount of flash storage, thereby reducing acquisition costs.
      • Cut backup footprint and costs. By continuously tiering off cold data that is not being accessed you can reduce your backup footprint, backup license costs, and backup storage costs if the cold data is placed in robust storage (such as that provided by the major CSPs).
      • Increase disaster recovery speeds and lower disaster recovery (DR) costs. As with backup, by tiering off the cold data, the amount of data mirrored/replicated is dramatically reduced as well.
      • Improved storage performance. By running storage at a lower capacity and by offloading cold data to another storage device or service, you can increase the performance of your storage array.
      • Leverage the cloud to run AI, ML, compliance checks and other applications on cold data. With cold data in the cloud, you can access, search and process your cold data without putting any load on your storage array. The cold data that is tiered off has value. Being able to process and feed your cold data into your AI/ML/BI engines is critical to staying competitive. By tiering you can extract value from your cold data without burdening your storage array. This also helps to extend the life of your storage array.

      Clearly, if cloud tiering is implemented correctly at the file level it will provide all of the above benefits whereas block tiering to the cloud will not. But not all cloud tiering choices are the same.

      To learn more about the differences between cloud tiering at the file level vs the block level, and why so-called cloud pools such as NetApp FabricPool or Dell EMC Isilon CloudPools are not the right approach for cloud tiering, read “What you need to know before jumping into the cloud tiering pool”.

      Also download the white paper: Cloud Tiering: Storage-Based vs Gateways vs. File-Based.


      Getting Started with Komprise:

    • CloudPools

      What are CloudPools?

      Dell EMC Isilon CloudPools software provides policy-based automated tiering that allows for an additional storage tier for the Isilon cluster at your data center. CloudPools supports tiering data from Isilon to public, private or hybrid cloud options. This technology is a form of storage pools, which are collections of storage volumes exported to a shared storage environment.

      Read more about storage pools.

      Smart, fast, proven Isilon migration.

      Read the blog post: What you need to know before jumping into the cloud tiering pool


      Download the white paper: Cloud Tiering: Storage-Based vs Gateways vs File-Based: Which is Better and Why?

      Getting Started with Komprise:

    • Cold Data Storage


      What is cold data?

      Cold data refers to data that is infrequently accessed, as compared to hot data that is frequently accessed. As unstructured data grows at unprecedented rates, organizations are realizing the advantages of utilizing cold data storage devices instead of high-performance primary storage as they are much more economical, simple to set up & use, and are less prone to suffering from drive failure.

      For many organizations, the real difficulty with cold data is figuring out when data should be considered hot and kept on primary storage, and when it can be labeled as cold and moved off to a secondary storage device. For this reason, it’s important to understand the difference between data types to develop a cold data management solution that is most cost effective for your organization.
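      One simple, illustrative way to draw that hot/cold line is by file age. The sketch below walks a directory tree and flags anything not modified in a configurable number of days as a cold-tier candidate; the one-year threshold, the placeholder path, and the use of modification time (access times are often disabled or unreliable) are all assumptions rather than a universal policy.

```python
import time
from pathlib import Path

def find_cold_files(root: str, days_threshold: int = 365):
    """Yield files whose last modification is older than the threshold (cold-tier candidates)."""
    cutoff = time.time() - days_threshold * 24 * 3600
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

# for cold_file in find_cold_files("/mnt/nas/projects"):  # placeholder path
#     print("cold:", cold_file)
```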

      Types of Data That Cold Storage is Typically Used For

      Examples of data types for which cold storage may be suitable include information a business is required to keep for regulatory compliance, video, photographs, and data that is saved for backup, archival, big-data analytics or disaster recovery purposes. As this data ages and is less frequently accessed, it can generally be moved to cold storage. A policy-based approach allows organizations to optimize storage resources and reduce costs by moving inactive data to more economical cold data storage.

      Advantages of Developing a Cold Data Storage Solution

      1. Prevent primary storage solutions from becoming overburdened with unused data
      2. Reduce overall resource costs of data storage
      3. Simplify data storage solution and optimize the management of its data
      4. Efficiently meet governance and compliance requirements
      5. Make use of more affordable & reliable mechanical storage drives for lesser used data

      Reduce Strain on Primary Storage by Moving Cold Data to Secondary Storage

      Affordable Costs of Cold Storage

      When comparing costs for enterprise-level storage drives, the mechanical drives used in many cold data storage systems cost just over 20% of the price of high-end solid-state drives (SSDs) on average. For SSDs at the top tier of performance, storage still costs close to 10 cents per gigabyte, whereas NAS-level mechanical drives cost only around 2 cents per gigabyte on average.

      Simplify Your Data Storage Solution

      A well-optimized cold data storage system can make your local storage infrastructure much less cluttered & easier to maintain. As the storage tools which help us automatically determine which data is hot and cold continue to improve, managing the movement of data between solutions or tiers is becoming easier every year. Some cold data storage solutions are even starting to automate the entirety of the unstructured data management process based on rules that the business establishes.

      Meet Regulatory or Compliance Requirements

      Many organizations in the healthcare industry are required to hold onto their data for extended periods of time, if not forever. With the possibility of facing litigation somewhere down the line based on having this data intact, corporations are opting to use a cold data storage solution which can effectively store critically important, unused data under conditions in which it cannot be tampered with or altered.

      Increase Data Durability with Cold Data Storage

      Reliability is one of the most important factors when choosing a data storage solution to house data for extended periods of time or indefinitely. Mechanical drives can be somewhat slower than SSDs at providing file access, but they can still retrieve files quickly and leave much more budget room for creating additional backup or parity within your storage system.

      When considering storage hardware for cold data solutions, consider low-cost, high-capacity options with a high degree of data durability so your data can remain intact for as long as it needs to be stored.

      Learn more about your options when it comes to migrating file workloads to the cloud.

      How Pfizer Saved Millions with a Cold Data Management Strategy

      Pfizer needed to change the way it was managing petabytes of unstructured data to cut data storage costs and reinvest in areas with patients at the center. Read the blog.


      Getting Started with Komprise:

  • D
    • Data Analytics

      Data analytics refers to the process used to enhance productivity and business improvement by extracting and categorizing data to identify and analyze behavioral patterns. Techniques vary according to organizational requirements.

      The primary goal of data analytics is to help organizations make more informed business decisions by enabling analytics professionals to evaluate large volumes of transactional and other forms of data. The data can be pulled from sources ranging from web server logs to social media comments.

      Potential issues with data analytics initiatives include a lack of analytics professionals and the cost of hiring qualified candidates. The volume and variety of data involved can also cause issues, including problems with data quality and consistency. In addition, integrating technologies and data warehouses can be a challenge, although various vendors offer data integration tools with big data capabilities.

      Big data has drastically changed the requirements for extracting analytics from business data. With relational databases, administrators can easily generate reports for business use, but these lack the broader intelligence data warehouses can provide. However, the challenge for data analytics from data warehouses is the associated cost.

      Unstructured Data Analytics

      There is also the challenge of pulling the relevant data sets to enable data analytics from cold data. This requires intelligent data management solutions that track what unstructured data is kept and where, and enable you to easily search and find relevant data sets for big-data analytics.

      Deliver the right data to the right place at the right time with Komprise and bring unstructured data to your analytics projects.

      Learn more about Komprise unstructured data analysis and insight.

      Getting Started with Komprise:

    • Data Archiving

      What is Data Archiving?

      Data Archiving, often referred to as Data Tiering, protects older data that is not needed for everyday operations of an organization. A data archiving strategy reduces primary storage and allows an organization to maintain data that may be required for regulatory or other needs.

      Benefits of a Data Archiving Solution

      Data archiving is intended to protect older information that is not needed for everyday operations but may have to be accessed occasionally. Data archiving tools deliver the most value by reducing primary storage and the related costs, rather than acting as a data recovery tool. Unstructured data archive tools are in high demand because of the opportunity to drastically reduce overall storage costs since most data is now unstructured—and often residing on expensive, high-performance storage devices. Archive data storage, meanwhile, is typically a low-performance, low-cost, high-capacity data storage medium.

      Types of Data Archiving

      Some data archiving products enable read-only access to protect data from modification, while other data tiering and archiving products allow users to make changes.

      Data archiving takes a few different forms. Options include online data storage, which places archive data onto disk systems where it is readily accessible. Archives are frequently file-based, but object storage is also growing in popularity. A key challenge when using object storage to archive file-based data is the impact it can have on users and applications. To avoid changing paradigms from file to object and breaking user and application access, use data management solutions that provide a file interface to data that is archived as objects.

      Another archival system uses offline data storage where archive data is written to tape or other removable media using data archiving software rather than being kept online. Data archiving on tape consumes less power than disk systems, translating to lower costs.

      A third option is using cloud data storage, such as the services offered by Amazon and Azure. Cloud object storage is a smart choice for cloud tiering and data archiving because of its low-cost, immutable nature. This is inexpensive but requires ongoing investment.

      With the ongoing threats from ransomware and other sophisticated cybersecurity actors, creating a secure data archiving and data tiering strategy is imperative. Encryption of sensitive archives, use of multi-factor authentication for access and object lock storage (such as AWS S3) are a few ways to protect archival data from modification, corruption and theft.

      The data archiving process typically uses automated software, which will automatically move cold data via policies set by an administrator. Today, a popular approach to data archiving is to make the archive “transparent”: archived data is not only online but is fully accessible exactly as before by users and applications, so they experience no change in behavior. (See Native Access)
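      As a hedged sketch of what such an administrator-defined policy might look like if expressed in code (the field names and structure below are hypothetical illustrations, not Komprise's API), a policy pairs selection criteria with a target tier, and a scheduler periodically moves matching files while leaving transparent access in place.

```python
from dataclasses import dataclass

@dataclass
class ArchivePolicy:
    """A hypothetical archive/tiering policy; the field names are illustrative only."""
    source_share: str        # file share to scan, e.g., an SMB or NFS export
    min_age_days: int        # only files older than this are candidates
    target: str              # destination tier, e.g., an object storage bucket
    transparent_access: bool = True  # leave a link behind so users see no change

policy = ArchivePolicy(
    source_share="/mnt/nas/research",   # placeholder share
    min_age_days=365,
    target="s3://cold-archive-bucket",  # placeholder target
)

# A scheduler would periodically evaluate the policy, move matching cold files to the
# target, and leave transparent links so users and applications are unaffected.
```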

      Learn more about Komprise Transparent Move Technology (TMT).

      Getting Started with Komprise:

    • Data Backup

      Why Data Backup?

      Data loss can occur from a variety of causes, including computer viruses, hardware failure, file corruption, fire, flood, or theft, etc. Data loss may involve critical financial, customer, and company data, so a solid data backup plan is critical for every organization.

      Data backup plan considerations:
      • What data (files and folders) to backup
      • How often to run your backups
      • Where to store the backup data
      • What compression method to use
      • What type of backups to run
      • What kind of media on which to store the backups

      In general, you should back up any data that can’t be replaced easily. Some examples are structured data like databases, and unstructured data such as word processing documents, spreadsheets, photos, videos, emails, etc. Typically, programs or system folders are not part of a data backup program. Installation discs, operating system discs, and registration information should be stored in a safe place.

      Data backup frequency depends on how often your organizational data changes.

      • Frequently changing data may need daily or hourly backups
      • Data that changes every few days might require a weekly or even monthly backup
      • For some data, a backup may need to be created each time it changes


      The challenge with unstructured data is that backing it up is not only time consuming but also very complex. With millions to billions of files of various sizes and types, growing at an astronomical rate, enterprises struggle with long backup windows, overlapping backup cycles, backup footprint sprawl and spiraling costs, and above all they are left vulnerable in the case of a disaster.

      Read the white paper: Rein in Storage and Backup Costs.

      Read the post: 5 Ways to Get to the Cloud Smarter and Faster

      Backing Up Unstructured Data First (Before Analysis) is Backwards

      Don’t backup data first. Know your data first to make smarter, cost-saving decisions. Start with the Komprise TCO calculator.

      Learn more about Komprise Analysis.


      Getting Started with Komprise:

    • Data Center Consolidation

      Data center consolidation is the process of merging or reducing the number of data centers that an organization operates. The consolidation is typically done in order to reduce costs, increase efficiency, and simplify management of the data center infrastructure.

      There are several steps involved in data center consolidation, including:

      • Assessing the current state of the data center environment, including the number and locations of data centers, the types of systems and applications being used, and the costs associated with operating and maintaining the infrastructure.
      • Developing a consolidation plan that outlines the goals, timelines, and resources needed for the project. This plan should include an analysis of the potential benefits and risks of consolidation, as well as a detailed roadmap for migrating applications and data to the new infrastructure.
      • Migrating applications and data to the consolidated data center(s). This may involve re-architecting applications to run in a virtualized environment or on cloud infrastructure.
      • Decommissioning or repurposing the legacy data center(s), including disposing of any equipment that is no longer needed.
      • Continuously monitoring and optimizing the consolidated data center infrastructure to ensure it remains efficient and cost-effective.

      Overall, data center consolidation can be a complex process that requires careful planning and execution. However, the benefits of consolidation can be significant, including lower costs, improved performance, and increased agility and flexibility for the organization.

      In 2023, Komprise summarized the following customer trends in unstructured data management and storage.

      • Simplifying infrastructure, getting rid of legacy apps and software and data center consolidation to support business growth and IT modernization.
      • Reducing IT spending by pivoting to more of an OPEX environment and by deleting data that is no longer needed to reduce data storage costs and complexity.
      • Managing research workflows and the full lifecycle of data. For example, a major university enables users to share data between labs, send some data to the cloud for processing, and then bring it back on-premises.
      • Using industry standards to move data easily between platforms.
      • Externalizing (tiering) data off NAS: IT and storage managers want to tier cold data from across the business to cheaper, secondary storage to save money and free up primary storage capacity.

      Learn more about Komprise Elastic Data Migration and read 5 Industry Case Studies.

      Getting Started with Komprise:

    • Data Classification

      Data classification is the process of organizing data into categories or tiers of information so it can be secured, found, and retrieved efficiently.

      Data classification is essential to make data easy to find and retrieve so that your organization can optimize risk management, compliance, and legal requirements. Written guidelines are essential in order to define the categories and criteria to classify your organization’s data. It is also important to define the roles and responsibilities of employees in the data organization structure.

      When data classification procedures are established, security standards should also be established to address data lifecycle requirements. Classification should be simple so employees can easily comply with the standard.

      Examples of types of data classifications:

      • 1st Classification: Data that is free to share with the public
      • 2nd Classification: Internal data not intended for the public
      • 3rd Classification: Sensitive internal data that would negatively impact the organization if disclosed
      • 4th Classification: Highly sensitive data that could put an organization at risk

      Data classification is a complex process, but automated systems can help streamline it. The enterprise must create the criteria for classification, outline the roles and responsibilities of employees to maintain the protocols, and implement proper security standards. Properly executed, data classification provides a framework for the storage, transmission, and retrieval of data.

      Automation simplifies data classification by enabling you to dynamically set different filters and classification criteria when viewing data across your storage. For instance, if you wanted to classify all data belonging to users who are no longer at the company as “zombie data,” the Komprise Intelligent Data Management solution will aggregate files that fit into the zombie data criterion to help you quickly classify your data.
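
      As a toy illustration of criteria-driven classification (not Komprise code), the sketch below walks a file share and flags files whose owning account belongs to a departed user or no longer exists. The share path and user names are assumptions, and the example assumes a POSIX filesystem.

      ```python
      # A minimal sketch: classify files as "zombie data" when the owning account
      # is no longer active. Path and usernames are hypothetical; POSIX only.
      import os
      import pwd
      from pathlib import Path

      DEPARTED_USERS = {"jsmith", "old_svc_account"}   # hypothetical departed accounts

      def classify(path: Path) -> str:
          try:
              owner = pwd.getpwuid(path.stat().st_uid).pw_name
          except KeyError:
              return "zombie"                           # uid no longer maps to any user
          return "zombie" if owner in DEPARTED_USERS else "active"

      for root, _dirs, files in os.walk("/mnt/shared"):
          for name in files:
              p = Path(root) / name
              print(classify(p), p)
      ```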

      Data Classification and Komprise Deep Analytics

      Komprise Deep Analytics gives data storage administrators and line-of-business users granular, flexible search capabilities, indexing data into a Global File Index that spans petabytes of unstructured data across file, object, and cloud data storage. Komprise Deep Analytics Actions uses these virtual datasets (see virtual data lake) for systematic, policy-driven data management actions that can feed your data pipelines.

      Getting Started with Komprise:

    • Data Governance

      What is data governance?

      Data governance refers to the management of the availability, security, usability, and integrity of data used in an enterprise. Data governance in an organization typically includes a governing council, a defined set of procedures, and a plan to execute those procedures.

      Data governance is not about allowing access to a few privileged users; instead, it should allow broad groups of users access with appropriate controls. Business and IT users have different needs; business users need secure access to shared data and IT needs to set policies around security and business practices. When done right, data governance allows any user access to data anytime, so the organization can run more efficiently, and users can manage their workload in a self-service manner.

      3 things to consider when developing a data governance strategy:

      Selecting a Data Governance Team
      • Balance IT and business leaders to get a broad view of the data and service needs
      • Start small – choose a small group to review existing data analytics
      Data Quality Strategy
      • Audit existing data to discover data types and how they are used
      • Define a process for new data sources to ensure quality and availability standards are met
      Data Security
      • Make sure data is classified so data requiring protection for legal or regulatory reasons meets those requirements
      • Implement policies that allow for different levels of access based on user privileges

      Komprise is not a data governance solution but we are part of an overall governance strategy as it relates to unstructured data management. With the Deep Analytics user profile, you can provide secure data access to specific users to search and tag file and object data so that it can then be incorporated into smart data migration and data mobility use cases, including Smart Data Workflows.

      Getting Started with Komprise:

    • Data Hoarding

      What is Data Hoarding?

      Data hoarding is now being recognized as a growing challenge in the technology world. Many IT teams are caught in an endless cycle of buying more data storage. Unstructured data is growing at record rates and this data is increasingly being stored across hybrid cloud infrastructure. This massive data growth and increased data mobility has only created more disconnected data silos. Just like hoarding has been recognized as a real problem in the real-world (see reality TV shows like Hoarders and Storage Wars), data hoarding refers to the practice of retaining large amounts of data that is no longer needed or is rarely used, for extended periods of time. This is a common problem in many organizations, where employees tend to save data out of habit, fear of losing it, or simply because they don’t know what to do with it.

      What is the impact of data hoarding?

      The impact of data hoarding is more significant than most people and organizations realize, including:

      • Increased costs: Storing large amounts of unnecessary data can be expensive, especially if the organization is using expensive storage solutions, such as high-end disk arrays or tape libraries.
      • Reduced efficiency: Hoarded data can slow down systems and applications, as well as increase the time required to complete backups and other data management tasks.
      • Compliance risks: Hoarded data can pose a risk to organizations in terms of compliance, as it may contain sensitive information that is subject to data privacy regulations.
      • Cybersecurity risks: Hoarded data can also pose a security risk, as it may contain sensitive information that could be targeted by cybercriminals or hackers.

      Stop Treating All Data the Same

      Sound familiar?

      • Cold data sits on expensive storage.
      • Everything gets replicated.
      • Everything gets backed up and backup windows are getting longer.
      • Costs are spiraling out of control.

      The IDC report, How to Manage Your Data Growth Smarter with Data Literacy, noted:

      • 60% of the storage budget is not really spent on storage. It’s spent on secondary copies of data for data protection – backups, backup software licenses, replication, and disaster recovery.
      • 1/3 of IT organizations are spending most of their IT storage on secondary data.

      And with ransomware attacks on the rise, which increasingly target unstructured data, it’s increasingly important to find ways to manage, tier, migrate, and replicate file data within tight IT budgets. Read the blog post: How to Protect File Data from Ransomware at 80% Lower Cost.

      Dealing with Data Hoarding

      To address the data hoarding challenge and establish an Intelligent Data Management strategy, IDC recommends the following:

      1. Focus less on finding alternatives to store data better/faster and focus more on finding intelligent alternatives to unstructured data management.
      2. Use modern, next-generation cloud data management technologies that are lightweight and non-intrusive, and that demonstrate powerful return on investment.
      3. Aim to deliver continuous insights as a service to business and achieve speed of intelligence for a competitive edge.

      Establish a Cold Data Storage Strategy

      One obvious strategy to deal with data hoarding is to define a cold data storage strategy and establish unstructured data management policies.

      Read this post to learn how to quantify the business value impact of Komprise Intelligent Data Management.

      Getting Started with Komprise:

    • Data Lake

      A data lake is data stored in its natural state. The term typically refers to unstructured data that is sitting on different storage environments and clouds. The data lake supports data of all types – for example, you may have videos, blogs, log files, seismic files and genomics data in a single data lake. You can think of each of your Network Attached Storage (NAS) devices as a data lake.

      One big challenge with data lakes is to comb through them and find the relevant data you need. With unstructured data, you may have billions of files strewn across different data lakes, and finding data that fits specific criteria can be like finding a needle in a haystack.

      A virtual data lake is a collection of data that fits certain criteria – and as the name implies, it is virtual because the data is not moved. The data continues to reside in its original location, but the virtual data lake gives a discrete handle to manipulate that entire data set. The Komprise Global File Index can be considered to be a virtual data lake for file and object metadata.

      Some key aspects of data lakes – both physical and virtual:

      • Data Lakes Support a Variety of Data Formats: Data lakes are not restricted to data of any particular type.
      • Data Lakes Retain All Data: Even if you do a search and find some data that does not fit your criteria, the data is not deleted from the data lake. A virtual data lake provides a discrete handle to the subset of data across different storage silos that fits specific criteria, but nothing is moved or deleted.
      • Virtual Data Lakes Do Not Physically Move Data: Virtual data lakes do not physically move the data, but provide a virtual aggregation of all data that fits certain criteria. Deep Analytics can be used to specify criteria.

      Getting Started with Komprise:

    • Data Lakehouse

      The data lakehouse concept builds on the data lake, a term coined by the co-founder and then CTO of Pentaho, James Dixon. And while both Amazon and Snowflake had already started using the term “lakehouse,” it wasn’t until Databricks endorsed it in a January 30, 2020 blog post entitled “What is a Data Lakehouse?” that it received more mainstream attention (amongst data practitioners at least).

      You’ve heard of a Data Lake. You’ve heard of a Data Warehouse. Enter the Data Lakehouse.

      A data lakehouse is a modern data architecture that combines the benefits of data lakes and data warehouses. A data lake is a centralized repository that stores vast amounts of raw, unstructured, and semi-structured data, making it ideal for big data analytics and machine learning. A data warehouse, on the other hand, is designed to store structured data that has been organized for querying and analysis.

      A data lakehouse builds on key elements of these two approaches by providing a centralized platform for storing and processing large volumes of structured and unstructured data, while supporting real-time data analytics. It allows organizations to store all of their data in one place and perform interactive and ad-hoc analysis at scale, making it easier to derive insights from complex data sets. A data lakehouse typically uses modern (and often open source) technologies such as Apache Spark and Apache Arrow to provide high-performance, scalable data processing.
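
      For a concrete flavor of the lakehouse pattern, the hedged sketch below uses Apache Spark to run warehouse-style SQL directly over Parquet files sitting in object storage. The bucket path, column names, and date are hypothetical, and reading s3a:// paths assumes the appropriate Hadoop/AWS connector and credentials are configured.

      ```python
      # Illustrative only: query open file formats in object storage directly with Spark.
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

      # Raw, semi-structured events landed in the "lake" zone of an object store (hypothetical path)
      events = spark.read.parquet("s3a://example-lake/events/")

      # Warehouse-style, ad-hoc SQL over the same files, with no separate load step
      events.createOrReplaceTempView("events")
      spark.sql("""
          SELECT device_type, COUNT(*) AS n
          FROM events
          WHERE event_date >= '2023-01-01'
          GROUP BY device_type
          ORDER BY n DESC
      """).show()
      ```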

      Who are the data lakehouse vendors?

      There are several vendors that offer data lakehouse solutions, including:

      • Amazon Web Services (AWS) with Amazon Lake Formation
      • Microsoft with Azure Synapse Analytics
      • Google with Google BigQuery Omni
      • Snowflake
      • Databricks
      • Cloudera with Cloudera Data Platform
      • Oracle with Oracle Autonomous Data Warehouse Cloud
      • IBM with IBM Cloud Pak for Data

      These vendors provide a range of services, from cloud-based data lakehouse solutions to on-premises solutions that can be deployed in an organization’s own data center. The choice of vendor will depend on the specific needs and requirements of the organization, such as: the size of the data sets, the required performance and scalability, the level of security and compliance needed and the overall budget.

      Komprise Smart Data Workflows is an automated process for all the steps required to find the right unstructured data across your data storage assets, tag and enrich the data, and send it to external tools such as a data lakehouse for analysis. Komprise makes it easier and more streamlined to find and prepare the right file and object data for analytics, AI, ML projects.

      Getting Started with Komprise:

    • Data Literacy

      The ability to derive meaningful information from data. Komprise Data Analytics provides data literacy by showing how much data, what kind, who’s using it, how often—across all storage silos.

      Read the IDC InfoBrief: How to Manage Your Data Growth Smarter with Data Literacy.

      Getting Started with Komprise:

    • Data Management

      Data management is officially defined by DAMA International, the professional organization for data management professionals, as:

      “Data Resource Management is the development and execution of architectures, policies, practices and procedures that properly manage the full data lifecycle needs of an enterprise.”

      Data management is the process of developing policies and procedures in order to effectively manage the information lifecycle needs of an enterprise. This includes identifying how data is acquired, validated, stored, protected, and processed. Data management policies should cover the entire lifecycle of the data, from creation to deletion.

      Due to the sheer volume of unstructured data, an unstructured data management plan is necessary for every organization. The numbers are staggering – for example, more data has been created in the past two years than in the entire previous history of the human race. Cloud data management is also a growing area of investment in the enterprise.

      Unstructured Data Management Report

      Getting Started with Komprise:

    • Data Management Policy

      What is a Data Management Policy?

      A data management policy addresses the operating policy that focuses on the management and governance of data assets, and is a cornerstone of governing enterprise data assets. This policy should be managed by a team within the organization that identifies how the policy is accessed and used, who enforces the data management policy, and how it is communicated to employees.

      It is recommended that an effective data management policy team include top executives to lead, so that governance and accountability can be enforced. In many organizations, the Chief Information Officer (CIO) and other senior management can demonstrate their understanding of the importance of data management by either authoring or supporting directives that will be used to govern and enforce data standards.

      Considerations for a data management policy

      • Enterprise data is not owned by any individual or business unit, but is owned by the enterprise
      • Enterprise data must be safe
      • Enterprise data must be accessible to individuals within the organization
      • Metadata should be developed and utilized for all structured and unstructured data
      • Data owners should be accountable for enterprise data
      • Users should not have to worry about where data lives
      • Data should be accessible to users no matter where it resides

      Ultimately, a data management policy should guide your organization’s philosophy toward managing data as a valued enterprise asset. Watch the video: Intelligent Data Management: Policy-Based Automation

      Developing an unstructured data management policy

      It is important to develop enterprise-wide data management policies using a flexible governance framework that can adapt to unique business scenarios and requirements. Identify the right technologies following a proof of concept approach that supports specific risk management and compliance use cases. Tool proliferation is always a problem so look to consolidate and set standards that address end-to-end scenarios. Unstructured data management policies must address data storage, data migration, data tiering, data replication, data archiving and data lifecycle management of unstructured data (block, file, and object data stores) in addition to the semi-structured and structured data lakes, data warehouses and other so-called big-data repositories.

      Read the VentureBeat article: How to create data management policies for unstructured data.

      What is a Data Management Policy?

      A data management policy addresses the operating policy that focuses on the management and governance of data assets. The data management policy should contain all the guidelines and information necessary for governing enterprise data assets and should address the management of structured, semi-structured and unstructured data.

      What does a Data Management Policy contain?

      A comprehensive Data Management Policy should contain the following:

      • An inventory of the organization’s data assets
      • A strategy of effective management of the organization’s data assets
      • An appropriate level of security and protection for the data, including details of which roles can access which data elements
      • Categorization of the different sensitivity and confidentiality levels of the data
      • The objectives for measuring expectations and success
      • Details of the laws and regulations that must be adhered to regarding the data program

      Data Management Policy and Procedures

      First, the business must select who should be part of the policy-making process. This should include legal, compliance and risk executives, security and IT leaders, business unit heads, and the chief data officer or a relevant alternative. Once the committee is selected, it should identify the risks associated with the organization’s data and create a data management policy.

      Getting Started with Komprise:

    • Data Migration

      Data migration means many different things. At its core, it is the process of selecting and moving data from one location to another. For this Glossary, we’re focused on Unstructured Data Migration, specifically file and object data, which may involve moving data across different data storage vendors and across different formats and protocols (SMB, NFS, S3, etc.).

      Data migrations are often done in the context of retiring a system and moving to a new system, or in the context of a cloud migration, or in the context of a modernization or upgrade strategy.

      When it comes to unstructured data migrations and migrating enterprise file data workloads to the cloud, data migrations can be laborious, error prone, manual, and time consuming. Migrating data may involve finding and moving billions of files (large and small), which can succumb to storage and network slowdowns or outages. Also, different file systems do not often preserve metadata in exactly the same way, so migrating data without loss of fidelity and integrity can be a challenge.
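
      As a simplified illustration of why fidelity matters, the sketch below copies a file along with basic POSIX metadata and verifies it with a checksum. It is a minimal example under stated assumptions, not the Komprise migration engine: a production migration also has to preserve ACLs and extended attributes, handle retries, sparse files, and cutover. The paths shown are hypothetical.

      ```python
      # Minimal sketch: copy a file, carry over basic metadata, verify integrity with a checksum.
      import hashlib
      import shutil
      from pathlib import Path

      def sha256(path: Path) -> str:
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def migrate(src: Path, dst: Path) -> None:
          dst.parent.mkdir(parents=True, exist_ok=True)
          shutil.copy2(src, dst)               # copies data plus mtime/permission bits
          if sha256(src) != sha256(dst):       # verify fidelity before trusting the copy
              raise IOError(f"checksum mismatch migrating {src}")

      migrate(Path("/mnt/nas1/projects/report.docx"), Path("/mnt/nas2/projects/report.docx"))
      ```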

      Two Data Migration Approaches

      Lift-and-Shift

      Many organizations start here, thinking they’ll just migrate entire file shares and directories to the cloud. If this is your plan, it’s important to use analytics to plan the migration, reduce errors, and ensure alignment and multi-storage visibility while minimizing cutover time. With Komprise Elastic Data Migration, you can readily migrate from one primary vendor to another without rehydrating all the archived data, so migrations are cheaper and faster.

      Cloud Data Tiering as a First Step: Smart Data Migration

      Since a large percentage of file data is cold and has not been used in a year or more, tiering and archiving cold data is a smart first step – especially if you use Transparent Move Technology so users can access the files exactly as before. You can follow this up by migrating the remaining hot data to a performance cloud tier.

      Data Migration Questions

      Here are some questions that will help you determine the best file and object data migration strategy:

      • What data storage do we have and where?​ (primary storage, secondary storage)
      • What data sets are accessed most frequently (hot) and less frequently (cold)?​
      • What types of files do we have and which are taking up the most storage (image files, video, audio files, sensor data, etc.)?​
      • What is the cost of storing these different file types today? How does this align with the budget and projected growth?​
      • Which types of files should be stored at a higher security level? (PII or IP data? Mission-critical projects?)​
      • Are we complying with regulations and internal policies with our unstructured data management practices?
      • What constraints do my network and environment pose and how do I avoid surprises during migrations?
      • Do we have the best possible strategy in place for WAN acceleration, such as Komprise Hypertransfer for Elastic Data Migration?

      Getting Started with Komprise:

    • Data Protection

      Data protection is used to describe both data backup and disaster recovery. A quality data protection strategy should automate the movement of critical data to online and offline storage and include a comprehensive strategy for valuing, classifying, and protecting data, so as to protect these assets from user errors, malware and viruses, machine failure, or facility outages/disruptions.

      Data protection storage technologies include tape backup, which copies data to a physical tape cartridge; cloud backup, which copies data to the cloud; and mirroring, which replicates a website or files to a secondary location. These processes can be automated and policies assigned to the data, allowing for accurate, faster data recovery.

      Data protection should always be applied to all forms of data within an organization in order to protect the integrity of the data, protect it from corruption or errors, and ensure the privacy of the data. When classifying data, policies should be established to identify different levels of security, from least secure (data that anyone can see) to most secure (data that, if released, would put the organization at risk).

      Getting Started with Komprise:

    • Data Retrieval

      Data retrieval refers to the process of accessing and retrieving data from a database or data storage system. Data retrieval is possible using various techniques and tools, such as database querying, data mining, and data warehousing. The specific techniques and tools used will depend on the type of data being retrieved, along with the requirements and goals of the organization.

      Some benefits of effective data retrieval include:

      • Improved data access: By providing quick and easy access to data, organizations can improve their overall data management processes and make better use of their existing data.
      • Better decision making: By providing access to up-to-date and accurate information, data retrieval can help organizations to make better decisions and improve their overall performance.
      • Better customer insights: By retrieving and analyzing customer data, organizations can gain valuable insights into customer behavior and preferences, so they can improve customer relationships and drive business growth.

      Cloud Data Retrieval

      There are several challenges associated with retrieving data from the cloud, including:

      • Network Latency: Retrieving data from a remote server can result in significant latency, especially if the data is large or the network is congested.
      • Bandwidth Limitations: Bandwidth limitations can limit the speed at which data can be retrieved from the cloud.
      • Data Security: Ensuring the security and privacy of data stored in the cloud can be challenging, especially for sensitive data.
      • Data Compliance: Organizations must ensure that their data retrieval practices comply with relevant regulations and standards, such as data privacy laws and industry standards.
      • Data Availability: In some cases, cloud data may not be available due to network outages, server downtime, or other technical issues.
      • Cloud Costs: Retrieving large amounts of data from the cloud can be expensive, especially if the data is stored in a high-performance tier.
      • Complexity: Interacting with cloud data storage systems can be complex and requires a certain level of technical expertise.

      Cloud Data Retrieval and Egress Costs

      Egress fees refer to the costs associated with transferring data from a cloud storage service to an external location or to another cloud provider. Many cloud service providers charge fees for data egress, as transferring large amounts of data can put a strain on their network and infrastructure. The cost of egress is usually based on the amount of data transferred, the distance of the transfer, and the speed of the transfer.

      It is important for organizations to understand their cloud service provider’s data egress policies and fees, as well as their data transfer needs, to avoid unexpected costs. Organizations can minimize egress costs by compressing data, reducing the amount of data transferred, or storing data in the same geographic region as their computing resources.
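
      A quick back-of-the-envelope example shows how egress can add up; the per-GB rate below is a placeholder assumption, not a quote from any provider.

      ```python
      # Rough sketch: egress fees are roughly data volume times a per-GB rate.
      egress_rate_per_gb = 0.09          # hypothetical published rate, USD per GB
      data_retrieved_tb = 50             # monthly retrieval volume from cloud storage

      monthly_egress_cost = data_retrieved_tb * 1024 * egress_rate_per_gb
      print(f"~${monthly_egress_cost:,.0f} per month")   # ~$4,608 at these assumptions
      ```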

      The Benefits of Smart File Data Migration

      A smart data migration strategy for enterprise file data means an analytics-first approach ensuring you know which data can migrate, to which class and tier, and which data should stay on-premises in your hybrid cloud storage infrastructure. With Komprise, you always have native data access, which not only removes end-user disruption, but also reduces egress costs and the need for rehydration and accelerates innovation in the cloud.

      Getting Started with Komprise:

    • Data Services

      Data services is a term that is increasingly being used to describe the range of services typically provided by enterprise IT operations teams or shared services teams that involve data processing, data integration, data security, data reduction, data protection, data storage, and unstructured data management. Data services is a broad term that many enterprise IT teams, industry analysts and technology vendors and service providers use differently today, often overlapping, but not to be confused with analytics services, cloud services or professional services. Data services should be tied to financial operations (FinOps) goals to drive efficiency, cost savings and/or specific business objectives. Examples of data services include:

      • Data storage services: The storage of data in various forms, including files, databases, and cloud storage. The shift to Storage as a Service (STaaS) can be considered part of a data services strategy. Read the white paper: Getting Departments to Care About Storage Savings.
      • Data management services: The management of data throughout its lifecycle, including data quality management, data governance, data classification. Data management services include analysis and line of business reporting into data storage usage and costs for showback as well as data mobility use cases (data migration, data tiering, data replication, deletion, etc.)
      • Data processing services: The processing of data through various algorithms and techniques, including data analytics, machine learning, and artificial intelligence. Data engineering services can also be considered a data service.
      • Data integration services: The integration of data from multiple sources (ETL/ELT) to create a single, unified view of the data (usually for analytics) as well as real-time application of data between systems (EAI, ESB, streaming).

      Data services are essential for data-heavy enterprise organizations that need to manage, process, and analyze large amounts of (mostly unstructured) data to gain insights and make better business decisions.

      From Storage Services to Data Services

      In a VMblog predictions post, Unstructured Data Management Predictions for 2023: Data Insights and Automation Take Center Stage, Komprise cofounder and COO Krishna Subramanian noted that enterprises are moving away from managing storage to managing data services:

      “Storage teams have traditionally measured infrastructure metrics for capacity and performance such as latency, I/O operations per second (IOPS) and throughput. But given the massive data growth of unstructured data, data focused metrics are becoming paramount as enterprises move away from managing storage to managing data services in hybrid cloud infrastructure. New data management metrics look at usage indicators such as top data owners, percentage of “cold” files which haven’t been accessed in over a year, most common file size and type, and financial operations metrics such as storage costs per department, storage costs per vendor per TB, percentage of backups reduced, rate of data growth, chargeback metrics and more.”

      In the same post she highlighted the changing role of storage administrators:

      The storage architect/engineer will evolve to incorporate data services

      We’ll see more experienced individuals in these roles move on to cloud architect and other engineering roles while IT generalists/junior cloud engineers inherit their responsibilities. This is a challenging time for IT organizations in a hybrid model as there is still significant NAS expertise needed. Either way, the IT employees managing the storage function will need new skills beyond managing the storage hardware. These individuals must understand the concept of data services-including facilitating secure, reliable governance and access to data and making data searchable and available to business stakeholders for applications such as cloud-based machine learning and data lakes. The new storage architect will frequently analyze and interpret data characteristics, developing data management plans which factor in cost savings strategies and business demands to create new value from data. This individual will interact regularly with departments to create and execute ongoing data management processes and plans.

      In a Solutions Review post: 2023 Expert Data Management Best Practices & Predictions, Komprise cofounder and CEO Kumar Goswami noted:

      “IT organizations must better understand data to improve migrations and gain maximum ROI from cloud, meet compliance requirements, deliver data services to departments, and to facilitate new value generation from data.”

      He went on to say:

      To keep up with ever-changing data services demands from the business, IT will implement collaborative processes with stakeholders across many different departments such as finance, marketing, legal, research, HR. Data workflow automation will support a variety of use cases from governance and compliance to cost savings to big data analytics.

      In the 2022 Strategic Roadmap for Storage, Gartner noted (subscription required):

      I&O leaders must implement intelligent data services infrastructure powered by software-defined storage and hybrid cloud IT operations….Integration of data services to the hybrid cloud platform is among the top enterprise challenges to address the need for seamless data services across the edge, the core data center and public clouds.

      Getting Started with Komprise:

    • Data Sprawl

      What is Data Sprawl?

      Data sprawl describes the staggering amount of unstructured data produced by enterprises worldwide every day. With new devices, including enterprise and mobile applications, added to networks, data sprawl is estimated to grow 40% year over year into the next decade.

      Given this growth in data sprawl, data security is imperative, as sprawl can lead to enormous problems for organizations, as well as their employees and customers. In today’s fast-paced world, organizations must carefully consider how to best manage the precious information they hold.

      Organizations experiencing unstructured data sprawl need to secure all of their endpoints. Security is critical. Addressing data security as well as remote physical devices ensures organizations are in compliance with internal and external regulations.

      As security threats mount, it is critical that data sprawl is addressed. Taking the right steps to ensure data sprawl is controlled, via policies and procedures within an organization, means safeguarding not only internal data, but also critical customer data.

      Organizations should develop solid practices that may have been dismissed in the past. Left unchecked, an organization’s unstructured data will continue to manifest itself in hidden costs and limited options. With a little evaluation and planning, it is an aspect of your network that can be improved significantly and will pay off long term.

      Analyzing and Managing Unstructured Data: Getting Sprawl (and Costs) Under Control

      According to this Geekwire article, Gartner estimates that unstructured data represents an astounding 80 to 90% of all new enterprise data, and it’s growing 3X faster than structured data. Komprise Intelligent Data Management rapidly analyzes file and object unstructured data in-place across multi-vendor storage to provide aggregate analytics (e.g., how much data, how much is hot, how much is cold, what types, top users, etc.) as well as a Global File Index across cloud and on-prem environments. The Komprise Global File Index is highly efficient and scalable, handling billions of files and exabytes of data without the scalability issues of a central database or other centralized architectures. Customers can build queries using Komprise Deep Analytics to find the precise subset of data they need through any combination of metadata and tags, and then move, copy and tier that data using Deep Analytics Actions. Komprise combines in-place analytics with data movement and ongoing data management to provide a closed-loop system that is intelligent and adapts to a customer’s unique needs. The functionality is also available via API.

      Tackling Data Sprawl with Komprise Analysis

      Komprise Analysis provides consistent, unified insights into unstructured data across many vendors’ storage and cloud platforms. Key metrics include data volume, data growth rates, where data is stored, top owners, top file types/sizes and time of last access. Komprise can also create cost models based on different storage targets and tiering plans to show projected savings.

      Getting Started with Komprise:

    • Data Storage

      What is Data Storage?

      Data storage refers both to the methods of transferring digital information from the source (users, applications, sensors) via protocols or APIs and to the destination: physical storage media such as magnetic or solid-state disks, tape, or optical media. Data storage is pervasive: implemented in enterprise data centers, cloud providers, and consumer technology such as laptops and phones.

      From genomics and medical imaging to streaming video, electric cars, IoT at the edge and user-generated data, unstructured data growth is exploding. Enterprise IT organizations are looking to new cloud and hybrid cloud strategies to manage costs, and investing in unstructured data management, cloud data migration and cloud data management technologies to reduce data storage costs while maximizing data value.

      What are the different types of data storage protocols?

      File Data Storage: File storage records data to files that are organized in folders, and the folders are organized under a hierarchy of directories and subdirectories. For example, a text file stored to your home directory on your laptop. File data is typically used for collaboration and shared access.

      Examples of File Storage

      Network Attached Storage (NAS), Network File System (NFS), and Server Message Block (SMB)

      File Storage Vendor Solutions

      NetApp ONTAP, Dell/EMC PowerScale (Isilon), Qumulo, Microsoft Windows Server, Pure FlashBlade, Amazon FSx, Azure Files

      Block Data Storage

      Typically used in servers and workstations where data is being written directly to physical media (HDD or SSD) in chunks or blocks. In contrast to file, block data is typically dedicated for access by a single application. Block storage is often used for the most performance intensive applications.

      • Examples of Block Data Storage: Direct Attached Storage (DAS), Storage Area Network (SAN), iSCSI, NVMe
      • Block Storage Vendor Solutions: Pure FlashArray, Dell/EMC VMAX, NetApp ONTAP and E-series, HDS

      Object Storage

      Object storage, also known as object-based storage or cloud storage, is a way of addressing and manipulating data storage as objects. In contrast to file storage, object data is stored in a flat namespace. Object storage was designed for use in massive repositories and is accessed over the HTTP protocol as a REST API; a minimal example follows the list below.

      • Examples of Object Storage: AWS S3, Azure Blob, Google Cloud Storage, Cloud Data Management Interface (CDMI)
      • Object Storage Vendors: AWS, Azure, Google, Wasabi, Cloudian, NetApp, Dell/EMC, Scality
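
      The hedged sketch below illustrates the flat, key-addressed access model of object storage using the AWS S3 API via boto3. The bucket and key names are hypothetical, and it assumes AWS credentials are already configured.

      ```python
      # Illustrative contrast with file protocols: objects are addressed by bucket and key
      # over an HTTP API rather than by a directory path. Names below are hypothetical.
      import boto3

      s3 = boto3.client("s3")

      # Write ("PUT") an object into a flat namespace
      s3.put_object(Bucket="example-bucket", Key="projects/2023/report.txt",
                    Body=b"quarterly results")

      # Read ("GET") it back by the same key
      body = s3.get_object(Bucket="example-bucket", Key="projects/2023/report.txt")["Body"].read()
      print(body)
      ```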

      NDMP (Network Data Management Protocol)

      Storage protocol that allows file servers and backup applications to communicate directly with a network-attached tape device for backup or recovery operations.

      What are types of physical storage media?

      • Hard Disk Drive (HDD): Disk based storage, used for high density data storage. Data is written to a magnetic layer of spinning disk.
      • Solid State Drive (SSD): Also known as flash. Silicon replaces the spinning-disk component of HDD to achieve higher performance and a smaller form factor.
      • Tape: Data is written to a ribbon of magnetic material in a cartridge. Used strictly for backup and archive, tape’s slow performance is offset by low cost, high density, and the ability to be stored offline.
      • Optical Storage: In contrast to magnetic storage, data is recorded optically to media such as CD and DVD disks. Optical storage is used for durable, long-term, offline, archival storage.

      What is Primary Storage?

      Primary storage is used for active read and write data sets where high performance is critical. SSD or flash media with the highest level of performance is the ideal storage media for primary storage. While less typical, HDD is also used as primary storage where lower cost and storage density are the key factors.

      What is Secondary Storage?

      Also referred to as active archive, secondary storage is used for less frequently accessed data sets. While any protocol and media can be used for secondary storage, HDD with NAS and object storage are the most common choices. Use cases for secondary storage include data tiering and backup/data protection applications.

      Read the white paper: Block-Level vs. File Level Tiering

      What is Block Level Data Storage?

      Mainly used in servers and workstations where data is being written directly to physical media (HDD or SSD) in chunks or blocks. As opposed to file level data storage, block level data storage is mostly dedicated for access by a single application. Block storage uses either direct attached storage (DAS), or data transfer protocols Fiber Channel (FC) or iSCSI (Internet Small Computer Systems Interface) via a storage area network (SAN).

      What is Data Lake Storage in Azure?

      Data Lake Storage in Azure from Microsoft is a fully managed scalable system based on a secure cloud platform that provides industry-standard, cost-effective storage for big data analytics.

      Getting Started with Komprise:

    • Data Storage Costs

      Data storage costs are the expenses associated with storing and maintaining data in various forms of storage media, such as hard drives, solid-state drives (SSDs), cloud storage, and tape storage. These costs can be influenced by a variety of factors, including the size of the data, the type of storage media used, the frequency of data access, and the level of redundancy required. As the amount of unstructured data generated continues to grow, the cost of storing it remains a significant consideration for many organizations. In fact, according to the Komprise 2022 State of Unstructured Data Management Report, the majority of enterprise IT organizations are spending over 30% of their budget on data storage, backups and disaster recovery—similar to 2021. This is why shifting from storage management to storage-agnostic data management continues to be a topic of conversation for enterprise IT leaders.

      Cloud Data Storage Costs

      Cloud data storage costs refer to the expenses incurred for storing data on cloud storage platforms provided by companies like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). In addition to the factors above (the amount of data stored and the frequency of data access), the level of durability and availability required also drives cloud storage costs. Cloud data storage providers typically charge based on the amount of data stored per unit of time, and additional fees may be incurred for data retrieval, data transfer, and data processing. Many cloud storage providers offer different storage tiers with varying levels of performance and cost, allowing customers to choose the option that best fits their budget and performance needs. With the right cloud data management strategy, cloud storage can be more cost-effective than traditional hardware-centric on-premises storage, especially for organizations with large amounts of data and high storage needs.
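
      The rough arithmetic below illustrates why storage class (tier) choice dominates cloud storage cost. All prices and percentages are placeholder assumptions, not published rates.

      ```python
      # Rough, illustrative math only; per-GB prices are placeholders, not provider quotes.
      capacity_tb = 500
      standard_per_gb_month = 0.023      # hypothetical "hot" tier price
      archive_per_gb_month = 0.004       # hypothetical "cool/archive" tier price
      cold_fraction = 0.70               # share of data rarely accessed

      gb = capacity_tb * 1024
      all_standard = gb * standard_per_gb_month
      tiered = (gb * (1 - cold_fraction) * standard_per_gb_month
                + gb * cold_fraction * archive_per_gb_month)
      print(f"all hot: ${all_standard:,.0f}/month")   # ~$11,776 at these assumptions
      print(f"tiered:  ${tiered:,.0f}/month")         # ~$4,966 at these assumptions
      ```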

      Managing Data Storage Costs

      Managing data storage costs involves making informed decisions (and the right investment strategies) about how to store, access, and use data in a cost-effective manner. Here are some strategies for managing data storage costs:

      • Data archiving: Archiving infrequently accessed data to lower cost storage options, such as object storage or tape, can help reduce storage costs.
      • Data tiering: Using different storage tiers for different types of data based on their access frequency and importance can help optimize costs.
      • Compression and deduplication: A well known data storage technique, compressing data and deduplicating redundant data can help reduce the amount of storage needed and lower costs.
      • Cloud file storage: Using cloud storage can be more cost-effective than traditional on-premises storage, especially for organizations with large amounts of data and high storage needs.
      • Data lifecycle management (aka Information Lifecycle Management): Regularly reviewing and purging unneeded data can help control storage costs over time.
      • Cost monitoring and optimization (see cloud cost optimization): Regularly monitoring and analyzing data storage costs and usage patterns can help identify opportunities for cost optimization.

      By using a combination of these strategies, organizations can effectively manage their data storage costs and ensure that they are using their data storage resources efficiently. Additionally, organizations can negotiate with data storage providers to secure better pricing and take advantage of cost-saving opportunities like bulk purchasing or long-term contracts.

      Stop Overspending on Data Storage with Komprise

      The blog post How Storage Teams Use Komprise Deep Analytics summarizes a number of ways storage teams use Komprise Intelligent Data Management to deliver greater data storage cost savings and unstructured data value to the business, including:

      • Business unit metrics with interactive dashboards
      • Business-unit data tiering, retention and deletion
      • Identifying and deleting duplicates
      • Mobilizing specific data sets for third-party tools
      • Using data tags from on-premises sources in the cloud

      In the blog post Quantifying the Business Value of Komprise Intelligent Data Management, we review a storage cost savings analysis that saves customers an average of 57% of overall data storage costs, or more than $2.6M annually. In addition to cost savings, benefits include:

      Plan Future Data Storage Purchases with Visibility and Insight

      With an analytics-first approach, Komprise delivers visibility into how data is growing and being used across a customer’s data storage silos – on-premises and in the cloud. Data storage administrators no longer have to make critical storage capacity planning decisions in the dark and now can understand how much more storage will be needed, when and how to streamline purchases during planning.

      Optimize Data Storage, Backup, and DR Footprint

      Komprise reduces the amount of data stored on Tier 1 NAS, as well as the amount of actively managed data—so customers can shrink backups, reduce backup licensing costs, and reduce DR costs.

      Faster Cloud Data Migrations

      Auto parallelize at every level to maximize performance, minimize network usage to migrate efficiently over WANs, and migrate more than 25 times faster than generic tools across heterogeneous cloud and storage with Elastic Data Migration.

      Reduced Datacenter Footprint

      Komprise moves and copies data to secondary storage to help reduce on-premises data center costs, based on customizable data management policies.

      Risk Mitigation

      Since Komprise works across storage vendors and technologies to provide native access without lock-in, organizations reduce the risk of reliance on any one storage vendor.

      Getting Started with Komprise:

    • Data Tagging

      What is data tagging?

      Data tagging is the process of adding metadata to your file data in the form of key-value pairs. These values give context to your data, so that others can easily find it in search and execute actions on it, such as moving it to confinement or to a cloud-based data lake. Data tagging is valuable for research queries and analytics projects or to comply with regulations and policies.
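
      For illustration only, the sketch below stores key-value tags as Linux extended attributes on a file; this is one generic place tags can live and is not how Komprise implements tagging (Komprise keeps tags in its own index). The path and tag names are hypothetical, and the example requires an xattr-capable filesystem.

      ```python
      # Minimal sketch: write and read key-value tags as Linux extended attributes.
      import os

      path = "/mnt/research/sample_0042.fastq"                  # hypothetical file
      tags = {"project": "genomics-2023", "pii": "false", "retention": "7y"}

      for key, value in tags.items():
          os.setxattr(path, f"user.{key}", value.encode())      # write tag

      for name in os.listxattr(path):                           # read tags back
          print(name, os.getxattr(path, name).decode())
      ```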

      How does Komprise data tagging work?

      Users, such as data owners, can apply tags to groups of files, and tags can also be applied programmatically by analytics applications via API. In the Komprise Deep Analytics interface, users can query the Global File Index and find the data for tagging. This is done by creating a Komprise Plan that will invoke the text search function to inspect and tag the selected files. The ability to use Komprise Intelligent Data Management to search, find, apply tags and then take action makes it possible for customers to get faster value from enriched data sets.

      Tagging and Smart Data Workflows

      Komprise Smart Data Workflows automate unstructured data discovery, data mobility and the delivery of data services.

      • Define custom query to find specific data set.
      • Analyze and tag data sets with additional metadata
      • Move only the tagged data for analytics, AI/ML, etc.
      • Move to a lower-cost data storage tier after analysis

      Getting Started with Komprise:

    • Data Tiering

      Data Tiering refers to a technique of moving less frequently used data, also known as cold data, to cheaper levels of storage or tiers. The term “data tiering” arose from moving data around different tiers or classes of storage within a storage system, but has expanded now to mean tiering or archiving data from a storage system to other clouds and storage systems. See also cloud tiering and choices for cloud data tiering.

      Data Tiering Cuts Costs Because 70%+ of Data is Cold

      As data grows, storage costs are escalating. It is easy to think the solution is more efficient storage. But the real cause of storage costs is poor data management. Over 70% of data is cold and has not been accessed in months, yet it sits on expensive storage and consumes the same backup resources as hot data. As a result, data storage costs are rising, backups are slow, recovery is unreliable, and the sheer bulk of this data makes it difficult to leverage new options like Flash and Cloud.

      Data Tiering Was Initially Used within a Storage Array

      Data Tiering was initially a technique used by storage systems to reduce the cost of data storage by tiering cold data within the storage array to cheaper but less performant options – for example, moving data that has not been touched in a year or more from an expensive Flash tier to a low-cost SATA disk tier.

      Typical storage tiers within a storage array include:
      • Flash or SSD: A high-performance storage class but also very expensive. Flash is usually used on smaller data sets that are being actively used and require the highest performance.
      • SAS Disks: Usually the workhorse of a storage system, they are moderately good at performance but more expensive than SATA disks.
      • SATA Disks: Usually the lowest price-point for disks but not as performant as SAS disks.
      • Secondary Storage, often Object Storage: Usually a good choice for capacity storage – to store large volumes of cool data that is not as frequently accessed, at a much lower cost.

      Cloud Data Tiering is now Popular

      Increasingly, customers are looking at another option – tiering or archiving data to a public cloud.

      • Public Cloud Storage: Public clouds currently have a mix of object and file storage options. The object storage classes such as Amazon S3 and Azure Blob (Azure Storage) provide tremendous cost efficiency and all the benefits of object storage without the headaches of setup and management.

      Tiering and archiving less frequently used data or cold data to public cloud storage classes is now more popular. This is because customers can leverage the lower-cost storage classes within the cloud to keep the cold data and promote it to the higher-cost storage classes when needed. For example, data can be archived or tiered from on-premises NAS to Amazon S3 Infrequent Access or Amazon Glacier for low ongoing costs, and then promoted to Amazon EFS or FSx when you want to operate on it and need performance.
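
      As a simplified, hypothetical sketch of last-access-based tiering (not Komprise’s Transparent Move Technology), the example below copies files untouched for a year from an on-premises share to a cheaper S3 storage class. The bucket and share path are assumptions, and a real solution must also leave transparent links, preserve permissions, verify the copies, and handle recall.

      ```python
      # Simplified tiering sketch: copy cold files (not accessed in a year) to a cheaper tier.
      import os
      import time
      import boto3

      COLD_AFTER = 365 * 24 * 3600                     # one year, in seconds
      s3 = boto3.client("s3")

      for root, _dirs, files in os.walk("/mnt/nas/projects"):   # hypothetical NAS share
          for name in files:
              path = os.path.join(root, name)
              if time.time() - os.stat(path).st_atime > COLD_AFTER:
                  key = os.path.relpath(path, "/mnt/nas")
                  s3.upload_file(path, "example-cold-tier", key,
                                 ExtraArgs={"StorageClass": "STANDARD_IA"})
      ```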

      But in order to get this level of flexibility, and to ensure you’re not treating the cloud as just a cheap storage locker, data that is tiered to the cloud needs to be accessible natively in the cloud without requiring third-party software. This requires file-tiering, not block-tiering.

      Block Tiering Creates Unnecessary Costs and Lock-In

      Block-level tiering was first introduced as a technique within a storage array to make the storage box more efficient by leveraging a mix of technologies such as more expensive SAS disks as well as cheaper SATA disks.

      Block tiering breaks a file into various blocks – metadata blocks that contain information about the file, and data blocks that are chunks of the original file. Block-tiering or Block-level tiering moves less used cold blocks to lower, less expensive tiers, while hot blocks and metadata are typically retained in the higher, faster, and more expensive storage tiers.

      Block tiering is a technique used within the storage operating system or filesystem and is proprietary. Storage vendors offer block tiering as a way to reduce the cost of their storage environment. Many storage vendors are now expanding block tiering to move data to the public cloud or on-premises object storage.

      But, since block tiering (often called CloudPools – examples are NetApp FabricPool and Dell EMC Isilon CloudPools) is done inside the storage operating system as a proprietary solution, it has several limitations when it comes to efficiency of reuse and efficiency of storage savings. Firstly, with block tiering, the proprietary storage filesystem must be involved in all data access since it retains the metadata and has the “map” to putting the file together from the various blocks. This also means that the cold blocks that are moved to a lower tier or the cloud cannot be directly accessed from the new location without involving the proprietary filesystem because the cloud does not have the metadata map and the other data blocks and the file context and attributes to put the file together. So, block tiering is a proprietary approach that often results in unnecessary rehydration of the data and treats the cloud as a cheap storage locker rather than as a powerful way to use data when needed.

      The only way to access data in the cloud is to run the proprietary storage filesystem in the cloud which adds to costs. Also, many third-party applications such as backup software that operate at a file level require the cold blocks to be brought back or rehydrated, which defeats the purpose of tiering to a lower cost storage and erodes the potential savings. For more details, read the white paper: Block vs. File-Level Tiering and Archiving.

      Know Your Cloud Tiering Choices

      File Tiering Maximizes Savings and Eliminates Lock-In

      File-tiering is an advanced modern technology that uses standard protocols to move the entire file along with its metadata in a non-proprietary fashion to the secondary tier or cloud. File tiering is harder to build but better for customers because it eliminates vendor lock-in and maximizes savings. Whether files have POSIX-based Access Control Lists (ACLs) or NTFS extended attributes, all this metadata along with the file itself is fully tiered or archived to the secondary tier and stored in a non-proprietary format. This ensures that the entire data can be brought back as a file when needed. File tiering does not just move the file, but it also moves the attributes and security permissions and ACLS along with the file and maintains full file fidelity even when you are moving a file to a different storage architecture such as object storage or cloud. This ensures that applications and users can use the moved file from the original location, and they can directly open the file natively in the secondary location or cloud without requiring any third-party software or storage operating system.

      Since file tiering maintains full file fidelity and standards-based native access at every tier, third-party applications can access the moved data without requiring any agents or proprietary software. This maximizes savings because backup software and other third-party applications can access moved data without rehydrating it or bringing the file back to the original location. It also means the cloud can be used to run valuable applications such as compliance search or big data analytics on the trove of tiered and archived data without requiring any third-party software or additional costs.
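
      To make this concrete, below is a minimal Python sketch of carrying a file's metadata along when it is moved to another tier. It uses only standard-library calls and is purely illustrative of the non-proprietary approach described above; it is not how Komprise implements file tiering, and the paths shown are hypothetical.

        import os
        import shutil

        def tier_file_with_metadata(src: str, dst: str) -> None:
            """Copy a file to a secondary tier, preserving timestamps, permissions,
            and (on Linux) extended attributes, then remove the original.

            Illustrative only: a production file-tiering engine also carries ACLs,
            ownership, and alternate data streams, and leaves a standards-based
            link so users keep their original access path.
            """
            # copy2 preserves mode bits and timestamps along with the file contents
            shutil.copy2(src, dst)

            # Carry extended attributes (e.g., user tags) to the new location as well
            if hasattr(os, "listxattr"):
                for attr in os.listxattr(src):
                    os.setxattr(dst, attr, os.getxattr(src, attr))

            os.remove(src)

        # Example (hypothetical paths): move a cold file from primary NAS to a
        # mounted object-storage tier
        # tier_file_with_metadata("/mnt/nas/projects/old_report.pdf",
        #                         "/mnt/cold-tier/projects/old_report.pdf")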

      File-tiering is an advanced technique for archiving and cloud tiering that maximizes savings and breaks vendor lock-in.

      Data Tiering Can Cut 70%+ Storage and Backup Costs When Done Right

      In summary, data tiering is an efficient solution to cut storage and backup costs because it tiers or archives cold, unused files to a lower-cost storage class, either on-premises or in the cloud. However, to maximize the savings, data tiering needs to be done at the file level, not block level. Block-level tiering creates lock-in and erodes much of the cost savings because it requires unnecessary rehydration of the data. File tiering maximizes savings and preserves flexibility by enabling data to be used directly in the cloud without lock-in.

      Why Komprise is the easy, fast, no lock-in path to the cloud for file and object data.

      PttC_pagebanner-2048x639

      Getting Started with Komprise:

    • Data Virtualization

      Data virtualization delivers a unified, simplified view of an organization’s data that can be accessed anytime. It integrates data from multiple sources to create a single data layer that supports multiple applications and users. The result is faster access to this data – instant access, any way you want it.

      Data virtualization involves abstracting, transforming, federating, and delivering data from disparate sources. This allows users to access the data without having to know its exact location.

      Advantages to data virtualization:

      • An organization can gain business insights by leveraging all data
      • They gain easier access to analytics and business intelligence
      • Data virtualization can streamline an organization’s data management approach, which reduces complexity and saves money

      Data virtualization involves three key steps. First, data virtualization software is installed on-premises or in the cloud, where it collects data from production sources and stays synchronized as those sources change over time. Next, administrators can secure, archive, replicate, and transform data using the data virtualization platform as a single point of control. Last, users can provision virtual copies of the data that consume significantly less storage than physical copies.

      Data virtualization use cases:

      • Application development
      • Backup and disaster recovery
      • Datacenter migration
      • Test data management
      • Packaged application projects

      Getting Started with Komprise:

    • Deep Analytics

      What is Deep Analytics?

      Deep analytics is the process of applying data mining and data processing techniques to analyze large amounts of data and surface it in a form that is useful and beneficial for new applications. Deep analytics can apply to both structured and unstructured data.

      In the context of unstructured data, deep analytics is the process of examining file and object metadata (both standard and extended) across billions of files to find data that fits specific criteria. A petabyte of unstructured data can be a few billion files. Analyzing petabytes of data typically involves analyzing tens to hundreds of billions of files. Because analysis of such large workloads can require distribution over a farm of processing units, deep analytics is often associated with scale-out distributed computing, cloud computing, distributed search, and metadata analytics.

      Deep analytics of unstructured file and object data requires efficient indexing and search of files and objects across a distributed farm. Financial services, genomics, research and exploration, biomedical, and pharmaceutical are some of the early adopters of Komprise Deep Analytics, which is powered by a Global File Index. In recent years, enterprises have started to show interest in deep analytics as the amount of corporate unstructured data has increased, and with it, the desire to extract value from the data.

      TechKrunch-Nov10

      Deep analytics enables additional use cases such as Big Data Analytics, Artificial Intelligence and Machine Learning.

      When the result of a deep analytics query is a virtual data lake, which we call the Global File Index, data does not have to be moved or disrupted from its original destination to enable reuse. This is an ideal scenario to rapidly leverage deep analytics without disruption since data can be pretty heavy to move.

      Learn more about Komprise Deep Analytics.

      Read the blog post: How Storage Teams Use Deep Analytics

      Getting Started with Komprise:

    • Digital Business

      A digital business is one that uses technology as an advantage in its internal and external operations.

      Information technology has changed the infrastructure and operation of businesses since the Internet became widely available to businesses and individuals. This transformation has profoundly changed the way businesses conduct their day-to-day operations and has maximized the benefits of data assets and technology-focused initiatives.

      This digital transformation has had a profound impact on businesses, accelerating business activities and processes to fully leverage opportunities in a strategic way. A digital business takes full advantage of this so as not to be disrupted and to thrive in this era. C-level leaders need to help their organizations seize opportunities while mitigating risks.

      This technology mindset has become standard in even the most traditional of industries, making a digital business strategy imperative for storing and analyzing data to gain a competitive advantage. The introduction of cloud computing and SaaS delivery models means that internal processes can be easily managed through a wide choice of applications, giving organizations the flexibility to choose and change software as the business grows and changes.

      A digital business also has seen a shift in purchasing power; individual departments now push for the applications that will best suit their needs, rather than relying on IT to drive change.

      Unstructured Data Management is a Digital Business Priority

      The Komprise 2022 State of Unstructured Data Management Report found that data storage costs comprise over 30% of enterprise IT budgets. This is why the right unstructured data management strategy has become an essential component of a digital business strategy.

      Komprise-State-of-Unstructured-Data-Management-Report-2022-BLOG-SOCIAL-1
      Unstructured Data Management

      Unstructured data management is about being able to realize business outcomes from analytics through data movement, extraction, and value. Komprise provides a storage-independent way to manage data no matter where it lives, so a digital business can get value from unstructured data at every tier. Unlike storage tiering or data backup solutions that move blocks of data and lock customers into proprietary file systems, Komprise Intelligent Data Management moves the entire file intact and enables customers to directly leverage native services at every tier without going through Komprise or their primary file system. This is key to a seamless user experience: users in a modern digital enterprise transparently access data from their original file system while also being able to build new applications in the cloud. Furthermore, user transparency, powered by patented Transparent Move Technology, makes it possible for IT teams to deploy transparent tiering company-wide. Without this transparency, IT would need user and/or departmental approval, which is a major roadblock that prevents any large-scale tiering.

      Getting Started with Komprise:

    • Digital Pathology Data Management

      According to the Digital Pathology Association:

       “Digital pathology is a dynamic, image-based environment that enables the acquisition, management and interpretation of pathology information generated from a digitized glass slide.”

      Healthcare organizations have shifted to digital media for medical imaging. Digital pathology, digital PACS and VNA systems are all generating and now storing petabytes of medical imaging data—lab slides, X-rays, MRIs, CT scans and more. These ever-expanding datasets are pushing the limitations of data storage systems and challenging IT departments’ ability to effectively manage data. And with increasing regulations, healthcare providers typically must retain medical imaging files for many years. In addition to compliance requirements, clinical researchers may also need access to the data indefinitely, and they typically need access to the unstructured data immediately. The potential future value of this ever-expanding data repository must be weighed against the growing financial and overall unstructured data management costs.

      The Digital Pathology Data Management Challenge

      Medical-Imaging-White-Paper-SOCIAL-3-768x402

      Data center storage for large image files is expensive – typically costing millions a year for some organizations on expensive NAS devices. Not only is NAS expensive, but its data must also be secured, replicated and backed up, which typically triples the costs. Meanwhile, in most cases, imaging data is rarely accessed after a few days or weeks. To get greater flexibility and manage data storage costs, healthcare organizations are adopting unstructured data management software to tier cold medical imaging data out of expensive storage to cost-effective environments such as the cloud. Data management decisions can be difficult internally with politics, vendor relationships and long-standing institutional perspectives. Health systems are handling sensitive patient information and tolerance for downtime is usually quite low.

      There are many benefits from augmenting medical imaging solutions with data management software that transparently tiers cold data from your data storage and backups.

      Komprise has many customers in the healthcare industry dealing with multiple petabytes of file and object data.

      Learn more.

      Getting Started with Komprise:

    • Direct Data Access

      Direct data access is the ability to directly access your data whether on-premises, in the cloud, or a hybrid environment without needing to rehydrate.

      The patented Komprise Transparent Move Technology™ (TMT) tiers file data workloads to a target without using any agents or stubs, allowing users to still access files natively from the original source as if they had never moved. Known as file and object duality, with Komprise users access files as native objects without getting in front of hot, mission-critical data.

      Native Data Access definition.

      Komprise-Cloud-Native-Access-Webinar-blog-SOCIAL-1-768x402

      Getting Started with Komprise:

    • Director (Komprise Director)

      The Komprise Director is the administrative console of the Komprise distributed architecture that runs as a cloud service or on-premises. Read the white paper: Komprise Intelligent Data Management Architecture Overview or one of the Komprise TechKrunch videos to learn more.

      Learn more about the Komprise architecture.

      Komprise-Donut-Transparent-bg-schedule-a-demo-768x539

      Getting Started with Komprise:

    • Disaster Recovery

      Disaster recovery refers to security planning to protect an organization from the effects of a disaster – such as a cyber attack or equipment failure. A properly constructed disaster recovery plan will allow an organization to maintain or quickly resume mission critical functions following a disaster.

      The disaster recovery plan includes policies and testing, and may involve a separate physical site for restoring operations. This preparation needs to be taken very seriously, and will involve a significant investment of time and money to ensure minimal losses in the event of a disaster.

      Control measures are steps that can reduce or eliminate various threats for organizations. Different types of measures can be included in a disaster recovery plan. There are three types of disaster recovery control measures that should be considered:

      1. Preventive measures – Intended to prevent a disaster from occurring
      2. Detective measures – Intended to detect unwanted events
      3. Corrective measures – The plan to restore systems after a disaster has occurred.

      A quality disaster recovery plan requires that these policies be documented and tested regularly. In some cases, organizations outsource disaster recovery to a service provider instead of using their own remote facility, which can save time and money. This approach has become increasingly popular with the rise of cloud computing.

      Read the case study:
      Leading Idaho Health System Selects Komprise to Right-Place Data and Bolster Disaster Recovery

      Komprise-St-Lukes-Customer-Story-PR-SOCIAL

      Getting Started with Komprise:

    • Dynamic Data Analytics

      Komprise unstructured data analytics allows organizations to analyze data across all storage to know how much exists, what kind, who’s using it, and how fast it’s growing. “What if” data scenarios can be run based on various policies to instantly see capacity and data storage cost savings, enabling informed, optimal unstructured data management planning decisions without risk.

      Learn more about Komprise Analysis.

      Learn more about Komprise Deep Analytics.

      Komprise-Analysis-blog-SOCIAL-1-1-768x402

      Getting Started with Komprise:

  • E
    • Egress Costs

      Egress costs are the large network fees most cloud providers charge to move your data out of the cloud. Most allow you to move your data into the cloud for free (ingress).
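
      As a rough illustration of why egress fees matter, the back-of-the-envelope calculation below estimates the cost of moving data out of a cloud at an assumed rate of $0.09 per GB; actual rates vary by provider, region, and volume tier.

        # Illustrative egress cost estimate (the rate is an assumption; check your provider)
        egress_rate_per_gb = 0.09      # dollars per GB transferred out of the cloud
        data_moved_tb = 100            # e.g., repatriating or migrating 100 TB

        egress_cost = data_moved_tb * 1024 * egress_rate_per_gb
        print(f"Estimated egress fees: ${egress_cost:,.0f}")   # roughly $9,200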

      In the post 5 Tips to Optimize Your Unstructured Data, a key benefit of embracing open, standards-based unstructured data management is that organizations can do whatever they need to do with their file and object data without paying licensing penalties and costs, such as for a third-party cloud file system or unnecessary cloud-egress fees. Komprise moves and manages unstructured data in native format in each tier, which means you can directly access the data and use all the cloud data services on your data without having to pay a data management or storage vendor. Avoiding these costs, including egress costs, is a priority for IT leaders surveyed by Komprise. Read the report: State of Unstructured Data Management.

      To learn more about Egress Costs read the New Stack article: Why Data Egress in the Cloud is Expensive.

      To learn more about the right approach to cloud data migrations and data management visit: Smart Data Migration.

      The Benefits of Cloud Native Access

      Cloud native is a way to move data to the cloud without lock-in, which means that your data is no longer tied to the file system from which it was originally served.

      Komprise-Cloud-Native-Access-Webinar-blog-SOCIAL-1-768x402

      In this webinar, Komprise leaders review the importance of cloud native data access and maximizing the potential of your data in terms of access, efficiency and data services. When you move data in cloud native format, your users should be able to access the data not only as a file, but also as a native object—which is necessary for leveraging cloud-native analytics and other services. Access to your data should not have to go through your file storage layer, as this incurs licensing fees and requires adequate capacity.

      Read the blog post: Why Cloud Native Unstructured Data Access Matters

      Getting Started with Komprise:

    • Elastic Data Migration

      What is Elastic Data Migration?

      Data migration is the process of moving data (e.g., files, objects) from one storage environment to another, but Elastic Data Migration is a high-performance migration solution from Komprise that uses a parallelized, multi-processing, multi-threaded approach to speed NAS-to-NAS and NAS-to-cloud migrations, completing them in a fraction of the traditional time and cost.

      Standard Data Migration

      • NAS Data Migration – move files from a Network Attached Storage (NAS) to another NAS. The NAS environments may be on-premises or in the cloud (Cloud NAS)
      • S3 Data Migration – move objects from an object storage or cloud to another object storage or cloud

      Data migrations can occur over a local network (LAN) or when going to the cloud over the internet (WAN). As a result, migrations can be impacted by network latencies and network outages.

      Data migration software needs to address these issues to make data migrations efficient, reliable, and simple, especially when dealing with NAS and S3 data since these data sizes can be in petabytes and involve billions of files.

      Komprise-Smart-Data-Migration-Webinar-SOCIAL-ONDEMAND-768x402

      Elastic Data Migration

      Elastic Data Migration is orders of magnitude faster than normal data migrations. It leverages parallelism at multiple levels to deliver performance up to 27 times faster than NFS alternatives and 25 times faster over the SMB protocol.

      • Parallelism of the Komprise scale-out architecture – Komprise distributes the data migration work across multiple Komprise Observer VMs so they run in parallel.
      • Parallelism of sources – When migrating multiple shares, Komprise breaks them up across multiple Observers to leverage the inherent parallelism of the sources
      • Parallelism of data set – Komprise optimizes for all the inherent parallelism available in the data set across multiple directories, folders, etc to speed up data migrations
      • Big files vs small files – Komprise analyzes the data set before migrating it so it learns from the nature of the data – if the data set has a lot of small files, Komprise adjusts its migration approach to reduce the overhead of moving small files. This AI driven approach delivers greater speeds without human intervention.
      • Protocol level optimizations – Komprise optimizes data transfer at the protocol level (e.g., NFS, SMB) so the chattiness of the protocol is minimized

      All of these improvements deliver substantially higher performance than standard data migration. When an enterprise is looking to migrate large production data sets quickly, without errors, and without disruption to user productivity, Komprise Elastic Data Migration delivers a fast, reliable, and cost-efficient migration solution.
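
      As a simplified illustration of this parallelism, the Python sketch below copies a share using a pool of worker threads so that many files are in flight at once. It is a conceptual sketch only, not the Komprise Elastic Data Migration engine, and the mount paths are hypothetical.

        import shutil
        from concurrent.futures import ThreadPoolExecutor
        from pathlib import Path

        def copy_one(src: Path, src_root: Path, dst_root: Path) -> None:
            """Copy a single file, preserving its relative path under the destination."""
            dst = dst_root / src.relative_to(src_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)

        def migrate_share(src_dir: str, dst_dir: str, workers: int = 16) -> None:
            """Walk a share and copy its files with a pool of worker threads,
            so many small files move concurrently instead of one at a time."""
            src_root, dst_root = Path(src_dir), Path(dst_dir)
            files = (p for p in src_root.rglob("*") if p.is_file())
            with ThreadPoolExecutor(max_workers=workers) as pool:
                for f in files:
                    pool.submit(copy_one, f, src_root, dst_root)

        # Each share (or top-level directory) could additionally be handled by a
        # separate process or VM to add another level of parallelism:
        # migrate_share("/mnt/source_share", "/mnt/target_share")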

      ElasticDMarchitecture

      Komprise Elastic Data Migration Architecture

      What Elastic Data Migration for NAS and Cloud provides

      Komprise Elastic Data Migration provides high-performance data migration at scale, solving critical issues that IT professionals face with these migrations. Komprise makes it possible to easily run, monitor, and manage hundreds of migrations simultaneously. Unlike most other migration utilities, Komprise also provides analytics along with migration to provide insight into the data being migrated, which allows for better migration planning.

      PttC_pagebanner-2048x639

      Fast, painless file and object migrations with parallelized, optimized data migration:

      • Parallelism at every level:
        • Leverages parallelism of storage, data hierarchy and files
        • High performance multi-threading and automatic division of a migration task across machines
      • Network efficient: Adjusts for high-latency networks by reducing round trips
      • Protocol efficient: optimized NFS handling to eliminate unnecessary protocol chatter
      • High Fidelity: Computes MD5 checksums of each file to ensure full integrity of data transfer (see the verification sketch after this list)
      • Intuitive Dashboards and API: Manage hundreds of migrations seamlessly with intuitive UI and API
      • Greater speed and reliability
      • Analytics with migration for data insights
      • Ongoing value
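
      The sketch below illustrates the checksum verification idea from the list above: compute an MD5 digest of the source and destination copies and compare them. It is a generic example, not Komprise code, and the file paths are hypothetical.

        import hashlib

        def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
            """Compute the MD5 digest of a file, reading it in 1 MiB chunks."""
            digest = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def verify_transfer(src: str, dst: str) -> bool:
            """Return True only if source and destination copies have identical checksums."""
            return md5_of(src) == md5_of(dst)

        # verify_transfer("/mnt/source_share/data.bin", "/mnt/target_share/data.bin")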

      komprise-elastic-data-migration-page-promo-1536x349

      Getting Started with Komprise:

  • F
    • FabricPool

      What is NetApp FabricPool?

      FabricPool is a NetApp storage technology that enables automated tiering of data from an all-flash appliance to low-cost object storage tiers either on or off premises. This technology is a form of storage pools which are collections of storage volumes exported to a shared storage environment.

      Read more about storage pools.

      Read the blog post: What you need to know before jumping into the cloud tiering pool

      Komprise_CloudTieringPool_blogthumb-768x512

      Download the white paper: Cloud Tiering: Storage-Based vs Gateways vs File-Based: Which is Better and Why?

      Learn more about the Komprise path to the cloud for file and object data.

      Getting Started with Komprise:

    • File Archiving

      File archiving is the process of preserving digital files for long-term data storage and retrieval. The goal of file archiving is to retain important files and documents in a secure, easily accessible, and cost-effective manner, while freeing up space on primary storage systems.

      Manual file data management, backup and restore solutions, and dedicated file archiving systems are three ways to archive files. Manual file management moves files to a secondary storage location, such as a network share or external hard drive. Backup and restore solutions preserve files by creating snapshots of the data at regular intervals; snapshots can restore data in the event of data loss or corruption. Dedicated file archiving systems are specialized software solutions that are designed specifically for file archiving and provide features such as indexing, searching, and data retention policies.

      File Archiving Challenges

      File archiving reduces the risk of data loss, improves regulatory compliance, and reduces the costs associated with primary storage. Yet file archiving can present several challenges, including:

      • Data Storage Costs: Storing large volumes of data for a long time can be expensive, especially if the data is stored on traditional storage solutions, such as tapes or hard disk drives.
      • Scalability: As data volumes continue to grow, archiving solutions must be able to meet the increasing demand for storage capacity.
      • Data Retrieval: Archived files are difficult to locate and retrieve if they are not properly indexed or if the index becomes corrupted.
      • Data Retention: Organizations must ensure that their archiving solutions meet regulatory requirements for data retention, including data privacy and security laws.
      • Data Integrity: Archived files must be preserved in their original format and remain readable over time, which requires proper data preservation and data migration strategies.
      • Data migration: As archiving systems age or become obsolete, IT must migrate data to new systems, in particular cloud data migration, which can be time-consuming and complex.
      • Integration with other systems: Archiving solutions must integrate with other systems, such as backup and restore solutions, to ensure streamlined access.

      Standards-based Transparent Data Archiving

      Thumbnail_600x400_CCC7pitfalls

      A truly transparent data archiving solution creates literally no disruption, and that’s only achievable with a standards-based approach. Komprise Intelligent Data Management is the only standards-based transparent data archiving solution that uses Transparent Move Technology™ (TMT), which uses symbolic links instead of proprietary stubs.

      True transparency that users won’t notice

      When a file is archived using TMT, it’s replaced by a symbolic link, which is a standard file system construct available in NFS, SMB, and object store file systems. The symbolic link, which retains the same attributes as the original file, points to the Komprise Cloud File System (KCFS), and when a user clicks on it, the file system on the primary storage forwards the request to KCFS, which maps the file from the secondary storage where the file actually resides. (An eye blink takes longer.) This approach seamlessly bridges file and object storage systems so files can be archived to highly cost-efficient object-based solutions without losing file access.
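
      The minimal sketch below shows the standard symbolic-link construct this approach relies on: the file is moved to a secondary location and a link is left at the original path. It is illustrative only; in Komprise TMT the link resolves through the Komprise Cloud File System rather than pointing directly at another file path, and the paths shown are hypothetical.

        import os
        import shutil

        def archive_with_symlink(src: str, archive_path: str) -> None:
            """Move a file to an archive location and leave a standard symbolic link
            behind, so users and applications keep opening the original path."""
            shutil.move(src, archive_path)   # the file now lives on the archive tier
            os.symlink(archive_path, src)    # the original path still resolves

        # archive_with_symlink("/mnt/nas/research/results_2019.csv",
        #                      "/mnt/archive/research/results_2019.csv")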

      Learn more about Komprise TMT for File Archiving

      Getting Started with Komprise:

    • File Data Management

      File data management is the process of organizing, storing, and retrieving digital files in an efficient and secure manner. This can include tasks such as:

      • Naming files in a consistent and descriptive manner
      • Creating folders and sub-folders to categorize and store files
      • Regularly backing up important files to prevent data loss
      • Purging old or unnecessary files to free up storage space
      • Using appropriate software tools to manage, search and retrieve files

      Effective file data management helps improve productivity and organization, and reduces the risk of data loss or corruption. It is a critical aspect of overall data management, especially in businesses and organizations where large amounts of data are generated and stored on a regular basis.

      File Data Management Challenges

      Because we’re talking about unstructured data, file data management can present a number of challenges, including:

      • Data Growth: As more and more data is generated and stored, it can become difficult to manage and organize effectively. The majority is unstructured data.
      • Data Duplication: Duplicate files can lead to confusion, waste storage space and make it harder to find the most up-to-date version of a file.
      • Data Security: Protecting sensitive information from unauthorized access or cyberattacks is a major concern in file data management. (Read about cyber resiliency and saving on ransomware production.)
      • Data Loss: Accidentally deleting or losing files can result in significant data loss and potential productivity loss.
      • Compliance: Certain industries and organizations may have regulatory requirements for file data management, such as retention policies and data privacy laws.
      • Integration with Other Systems: Integrating file data management systems with other applications, such as email, CRM, and collaboration platforms, can be complex and time-consuming.
      • Scalability: As the amount of data grows, the file data management system must be able to scale to meet the demands of the organization.
      • Compatibility: Ensuring that files can be opened and used by multiple users and systems can be a challenge, especially with different file formats and software versions.

      These challenges can be addressed through the use of appropriate software tools, best practices for file data management, and regular reviews and updates to the file data management policies.

      Komprise_ArchitectureOverview_WhitePaperthumb

      Komprise File Data Management

      Komprise Intelligent Data Management has been designed from the ground up to simplify file data management and put customers in control of unstructured data, no matter where data lives. With an analytics-first approach, Komprise works across file and object storage, across cloud and on-premises, and across data storage and data backup architectures to deliver a consistent way to manage data. With Komprise you get instant insight into all of your unstructured data—wherever it resides. See patterns, make decisions, make moves, and save money—all without compromising user access to any data. Komprise puts you in control of your data while simplifying file data management by creating a lightweight management plane across all your data storage silos without getting in the path of data access.

      Getting Started with Komprise:

    • File Data Migration

      File data migration or file migration is the process of transferring data stored in files, such as text documents, images, audio and video files, spreadsheets, and other types of data, from one system to another. IT organizations move data for many reasons including for system upgrades, data center relocations, during mergers and acquisitions, and when acquiring new data storage platforms.

      File data migration involves several steps, such as data extraction, data transformation, data loading, data verification, and data archiving. It’s important to ensure that all the data is accurately and securely transferred to the new system, while minimizing any disruptions to business operations and preserving the integrity of the data.

      File data migration can be complex and time-consuming, especially for organizations with large volumes of data, multiple file formats, and strict security and compliance requirements. To ensure a successful migration, organizations typically use specialized tools and services, such as data migration software, cloud data migration services, and managed data migration services.

      Komprise File Data Migration

      Komprise Elastic Data Migration is a fast, predictable and cost-efficient file data migration software solution. Elastic Data Migration is included in the Komprise Intelligent Data Management platform or is available standalone.

      Komprise Hypertransfer for Elastic Data Migration accelerates file data transfer to the cloud while strengthening cloud security. Komprise Hypertransfer optimizes cloud data migration performance by minimizing the WAN roundtrips using dedicated channels to send data, mitigating SMB protocol issues.

      komprise-elastic-data-migration-page-promo

      File Data Migration to the Cloud Considerations

      Increasingly enterprise IT organizations are looking to migrate file data workloads to the cloud. (Read the State of Unstructured Data Management report to review data storage and cloud data migration trends.) This ITPro-Today article reviews some key considerations to know first before a file data migration initiative:

      • What data do I have and where is it stored?
      • What data sets are accessed most frequently (a.k.a. hot data)?
      • What data sets are rarely accessed (a.k.a. cold data)?
      • Who uses the data currently and is there value in enabling collaboration outside of your organization?
      • What data/files haven’t been accessed for more than 3-5 years and should be considered for deep archival storage or confinement and deletion?
      • What types of files do we have, and which comprise the most storage: e.g., image files, video or audio files, sensor data, text data?
      • What is the cost of storing these different file types?
      • Which types of files should be stored at a higher security level, e.g., those containing PII or IP data or belonging to mission-critical projects?
      • Are we complying with regulations and internal policies with our data management practices?

      This video discussion reviews cloud file data migration considerations:

      In this Data on the Move discussion we interview Benjamin Henry, Customer Success Architect at Komprise.

      Getting Started with Komprise:

    • File Data Ransomware

      What is File Data Ransomware?

      This is a ransomware attack targeting file data. 

      File data can be generated from users as well as machines. From genomics and medical imaging, streaming video, electric car data, and IoT products, all industries are generating vast amounts of unstructured file data, and increasingly enterprises are migrating file workloads to the cloud. File data can be petabytes of data and billions of files, so migrating this much unstructured data to the cloud takes time and can be disruptive. Cloud data migrations require proper planning to ensure minimal disruption and unintended costs.

      There is a growing recognition of the importance of having a layered protection strategy in place against potential file data ransomware attacks. Upwards of 80% of data today is unstructured file data, so IT organizations cannot afford to leave file data unprotected from ransomware. Early detection of ransomware will deliver the best outcome, but ransomware attacks are constantly evolving; detection is not always foolproof and can be difficult. Investing in ways to recover data if you do get attacked by ransomware, and establishing an immutable copy of data in a location separate from data storage and backups, is the best way to recover data in the event of a ransomware attack.

      But keeping multiple copies of data can get prohibitively expensive. Read the blog: How to Protect File Data from Ransomware at 80% Lower Cost

      Learn more about Komprise for cyber resiliency, including optimizing your defenses against cyber incidents, system failure and file data.

      What is Ransomware?

      Ransomware is an attack by malware that holds your data files hostage by encrypting your systems and making your data inaccessible to you. The majority of data in the enterprise is unstructured file data, which means organizations cannot afford to leave file data unprotected from ransomware. While the primary target for ransomware is file data, as attacks grow more sophisticated, hackers are seeking to defeat backups and snapshots as well.

      How to recover your ransomware encrypted data files

      The way to recover from a ransomware attack is to establish an immutable copy of your data in a separate location, ensuring it is separate from your data storage. Immutable storage can be physically “air gapped” with offline media such as tape or virtually air gapped with technologies such as AWS S3 object lock that prevent any modification of data even by administrators for a set retention period.

      How long does it take to recover from a ransomware attack?

      A critical component often overlooked is how long the ransomware recovery can take – if your business can’t resume until data is restored, every minute adds to the cost of the ransomware attack. Recovery from a ransomware attack is equivalent to a disaster where potentially 100% of your data must be restored. Having a tested recovery plan in place is essential to a successful recovery.

      How do you protect file data from ransomware?

      There are two components of ransomware protection: detection and recovery. Early detection of ransomware will deliver the best outcome, but this is not always foolproof and can be difficult. Organizations should also invest in data recovery strategies and create an immutable copy of data in a separate location from data storage and backups in the event of a ransomware attack. But keeping multiple copies of data can get prohibitively expensive. To protect file data from ransomware, the solution must:

      • Be cost-effective
      • Protect data even if backups and snapshots are infected
      • Provide simple recovery without significant upfront investment
      • Be verifiable

      Getting Started with Komprise:

    • File Data Tiering

      File data tiering is a data storage management technique that automatically moves files from one storage tier to another based on usage patterns and access frequency. The goal of file data tiering is to optimize storage utilization and reduce storage costs by placing frequently used files on high-performance storage and less frequently used files (cold data storage) on lower-performance storage.

      Hardware-based tiering, software-based tiering, and cloud-based tiering are three methods of file data tiering. Hardware-based tiering moves files between different types of physical storage devices, such as solid-state drives (SSDs) and hard disk drives (HDDs), within a storage array. Software-based tiering moves files between different types of virtual storage volumes, such as high-performance and low-performance storage pools. Cloud-based tiering moves files between different storage classes within a cloud-based object storage service, such as Amazon S3.

      As part of a broader file data management strategy, file data tiering can help organizations improve storage utilization, reduce storage costs, and increase storage performance by automatically placing the right data in the right place at the right time. However, it’s important for organizations to carefully consider their storage requirements and choose a file tiering solution that fits their needs, as not all tiering solutions are appropriate for all environments.
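
      As a simple illustration of access-based tiering, the sketch below moves files whose last access time is older than a policy threshold from a hot tier to a cold tier. It is a generic example with hypothetical paths and an assumed one-year policy, not a description of any particular vendor's implementation.

        import shutil
        import time
        from pathlib import Path

        COLD_AFTER_DAYS = 365  # illustrative policy: tier files untouched for a year

        def tier_cold_files(hot_dir: str, cold_dir: str) -> None:
            """Move files whose last access time is older than the policy threshold
            from the hot tier to the cold tier, keeping their relative paths."""
            cutoff = time.time() - COLD_AFTER_DAYS * 86400
            hot, cold = Path(hot_dir), Path(cold_dir)
            for f in hot.rglob("*"):
                if f.is_file() and f.stat().st_atime < cutoff:
                    target = cold / f.relative_to(hot)
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.move(str(f), str(target))

        # tier_cold_files("/mnt/primary_nas", "/mnt/cold_tier")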

      File-Level Tiering vs Block-Level Tiering

      Learn the difference between storage-centric block tiering, which moves blocks that can no longer be directly accessed from their new location without vendor software (aka lock-in), and file data tiering, which is what Komprise uses to fully preserve file access at each tier by keeping the metadata and file attributes with the file—no matter where it lives. Know the difference to make the right cloud tiering choice for your data storage moves.

      block_file_tiering

      Getting Started with Komprise:

    • File Server

      A file server is the central server in a computer network that provides a central storage place for files on internal data media to connected clients.

      Getting Started with Komprise:

    • File-level Tiering

      File-level tiering is a standards-based data tiering approach Komprise uses that moves each file with all its metadata to the new tier, maintaining full file fidelity and attributes at each tier for direct data access from the target storage and no rehydration.

      Read the white paper: Block-Level Tiering versus File-Level Tiering.

      block_file_tiering-768x584

      Getting Started with Komprise:

    • FinOps (or Cloud FinOps)

      FinOps (or Cloud FinOps) means financial operations that include practices such as cost optimization, cost allocation, chargeback and showback, and cloud financial governance. Some of the key challenges that organizations face with regards to cloud costs include:

      • Cost visibility: Many organizations struggle to gain complete visibility into their cloud costs, which can make it difficult to ensure that they are not overspending on resources.
      • Cost optimization: Organizations need to optimize their cloud costs by reducing waste, optimizing resource utilization, and ensuring that they are only paying for what they need.
      • Cost allocation: Organizations need to allocate their cloud costs so that they are charged in a way that accurately reflects the resources that they are consuming.
      • Cloud financial governance: Governance processes and controls can ensure that cloud spending is aligned with their overall business goals and objectives.

      Overall, FinOps is a critical aspect of modern cloud management, and is essential for organizations that want to effectively manage their cloud costs and ensure that they are maximizing value and ROI from their cloud investments.

      There are several vendors that specialize in FinOps solutions for cloud cost management and cloud cost optimization, but increasingly FinOps is built into other applications and technology platforms:

      • Apptio
      • CloudHealth by VMware
      • RightScale (acquired by Flexera)
      • CloudCheckr
      • Azure Cost Management + Billing by Microsoft
      • AWS Cost Explorer by Amazon Web Services
      • Cloudability
      • ParkMyCloud

      With the right Cloud FinOps strategy, organizations should focus on gaining the tools and expertise they need to manage their cloud costs and ensure that they are getting the most value from their cloud investments.

      FinOps and Unstructured Data Management

      How much does it cost to own your data?

      Cost modeling in Komprise helps IT teams enter their actual data storage costs to determine upfront new projected costs and benefits before spending money on storage. (Know First)

      Look at your current (and future) data storage platform(s). Does the company pay per GB (OPEX) or is it an owned technology (CAPEX)? For the latter, divide the cost to acquire the full system by the current total amount of actual usable data to attain cost/TB. For example, 1PB of physical storage may end up being just 500TB of actual usable capacity but only have 300TB of actual usable data on it. Use the 300TB because that is representative of today’s data ownership cost.
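
      A minimal worked example of the cost/TB calculation above, using illustrative numbers (the acquisition cost is an assumption, not a quote):

        # Illustrative cost-per-TB calculation
        system_acquisition_cost = 600_000   # dollars paid for the array (CAPEX)
        usable_data_tb = 300                # actual data stored, per the example above

        cost_per_tb = system_acquisition_cost / usable_data_tb
        print(f"Current cost of data ownership: ${cost_per_tb:,.0f}/TB")    # $2,000/TB

        # Data protection (backup, DR copies) often roughly triples the effective cost
        fully_loaded_cost_per_tb = cost_per_tb * 3
        print(f"Including data protection copies: ${fully_loaded_cost_per_tb:,.0f}/TB")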

      Data ownership should also include the cost of data protection (data backup, disaster recovery, etc.). The FinOps capabilities in Komprise Intelligent Data Management allow you to compare on-premises versus cloud models or factor in cloud tiering or migrating to a new NAS platform.

      Komprise-Analysis-Only-WP-graphic-4
      Komprise Cost Models

      According to GigaOm’s 2022 Data Migration Radar Report: Komprise has, “the best set of Financial Operations (FinOps) features to date.”

      Stop overspending on cloud storage: Know First. Move Smart. Take Control with the right FinOps for cloud data storage and data management strategy.

      Getting Started with Komprise:

    • Flash Storage

      Flash storage is storage media that stores data electronically and can be electronically erased and reprogrammed. Another advantage is that it responds faster than a traditional disk, increasing performance.

      With the increasing volume of stored unstructured data from the growth of mobility and the Internet of Things (IoT), organizations are challenged with both storing that data and capitalizing on the opportunities it brings. Disk drives can be too slow for these workloads. For stored data to have real value, businesses must be able to quickly access and process that data to extract actionable information.

      Flash storage has a number of advantages over alternative storage technologies:
      • Greater performance. This leads to agility, innovation, and an improved experience for the users accessing the data – delivering real insight to an organization
      • Reliability. With no moving parts, flash has higher uptime. A well-built all-flash array can last between 7-10 years.

      While Flash storage can offer a great improvement for organizations, it is still too expensive as a place to store all data. Flash storage has been about twenty times more expensive per gigabyte than spinning disk storage over the past seven years. Many enterprises are looking at a tiered model with high-performance flash for hot data and cheap, deep object or cloud storage for cold data.

      Getting Started with Komprise:

  • G
    • General Data Protection Regulation (GDPR)

      The General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) is a regulation by the European Union that aims to strengthen and unify data protection for all individuals within the European Union (EU). It also addresses the export of personal data outside the EU.

      GDPR became enforceable on 25 May 2018. Businesses transacting with countries in the EU must comply with GDPR.

      The GDPR regulation applies to personal data collected by organizations including cloud providers and businesses.

      Article 17 of GDPR is often called the “Right to be Forgotten” or “Right to Erasure”. The full text of the article is found below.

      To comply with GDPR, you need to use an intelligent data management solution to identify data belonging to a particular user and confine it outside the visible namespace before deleting the data. This two-step deletion ensures there are no dangling references to the data from users and applications and enables an orderly deletion of data.
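
      A minimal sketch of this two-step deletion pattern is shown below: files are first confined to a quarantine area outside the visible namespace, then permanently erased after a review window. The paths and review period are hypothetical, and a real solution would also handle references, auditing, and copies on other tiers.

        import shutil
        import time
        from pathlib import Path

        QUARANTINE = Path("/mnt/quarantine")   # hidden area outside the user-visible namespace
        REVIEW_DAYS = 30                       # illustrative hold period before final erasure

        def confine_user_data(paths: list[str]) -> None:
            """Step 1: move a data subject's files out of the visible namespace so no
            users or applications hold live references to them."""
            QUARANTINE.mkdir(parents=True, exist_ok=True)
            for p in paths:
                src = Path(p)
                shutil.move(str(src), str(QUARANTINE / src.name))

        def purge_confined_data() -> None:
            """Step 2: permanently erase confined files once the review window has passed."""
            cutoff = time.time() - REVIEW_DAYS * 86400
            for f in QUARANTINE.rglob("*"):
                if f.is_file() and f.stat().st_mtime < cutoff:
                    f.unlink()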

      Art. 17 GDPR Right to erasure (‘right to be forgotten’)

      1) The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay where one of the following grounds applies:

      1. the personal data are no longer necessary in relation to the purposes for which they were collected or otherwise processed;
      2. the data subject withdraws consent on which the processing is based according to point (a) of Article 6(1), or point (a) of Article 9(2), and where there is no other legal ground for the processing;
      3. the data subject objects to the processing pursuant to Article 21(1) and there are no overriding legitimate grounds for the processing, or the data subject objects to the processing pursuant to Article 21(2);
      4. the personal data have been unlawfully processed;
      5. the personal data have to be erased for compliance with a legal obligation in Union or Member State law to which the controller is subject;
      6. the personal data have been collected in relation to the offer of information society services referred to in Article 8(1).

      2) Where the controller has made the personal data public and is obliged pursuant to paragraph 1 to erase the personal data, the controller, taking account of available technology and the cost of implementation, shall take reasonable steps, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data.

      3) Paragraphs 1 and 2 shall not apply to the extent that processing is necessary:

      1. for exercising the right of freedom of expression and information;
      2. for compliance with a legal obligation which requires processing by Union or Member State law to which the controller is subject or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller;
      3. for reasons of public interest in the area of public health in accordance with points (h) and (i) of Article 9(2) as well as Article 9(3);
      4. for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) in so far as the right referred to in paragraph 1 is likely to render impossible or seriously impair the achievement of the objectives of that processing; or
      5. for the establishment, exercise or defense of legal claims.

      Getting Started with Komprise:

    • Global File Index

      What is a Global File Index?

      Komprise Deep Analytics enables precise unstructured data management at enterprise scale, creating a Global File Index, which is a metadata catalog spanning petabytes of file and object data sources, to find specific data sets and then create a data management plan to systematically take action on your data set. Unstructured data ends up in multiple silos, so an index needs to be global across different data centers, storage, backup and cloud infrastructure.

      Once you connect Komprise to your file and object storage, your data is indexed and a Global File Index, which is a global metadata catalog across disparate file and object data, is created. You do not have to move the data anywhere, but you now have a single way to query and search across your file and object stores. Say you have some NetApp, some Isilon, some Windows servers, and some Pure Storage at different sites, plus some cloud file storage on AWS, Azure, and Google. You get a single index via Komprise of all the data across all these environments, and now you can search and find exactly the data you need with a single console and API.
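
      Conceptually, a global file index is a metadata catalog you can query across silos. The simplified sketch below builds a small in-memory catalog across several shares and runs one query against it; it is illustrative only, does not use the Komprise product or API, and the share paths are hypothetical.

        import time
        from pathlib import Path

        def build_index(roots: list[str]) -> list[dict]:
            """Build a simple metadata catalog across several file shares."""
            index = []
            for root in roots:
                for f in Path(root).rglob("*"):
                    if f.is_file():
                        st = f.stat()
                        index.append({
                            "path": str(f),
                            "share": root,
                            "ext": f.suffix.lower(),
                            "size": st.st_size,
                            "owner_uid": st.st_uid,
                            "last_access": st.st_atime,
                        })
            return index

        # One query across every share: JPEG files untouched for three or more years
        index = build_index(["/mnt/netapp_share", "/mnt/isilon_share", "/mnt/windows_share"])
        cutoff = time.time() - 3 * 365 * 86400
        old_jpegs = [e for e in index
                     if e["ext"] in (".jpg", ".jpeg") and e["last_access"] < cutoff]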

      Komprise-Global-File-Index-2048x976

      Benefits of the Global File Index

      • Users only move the data they need, with the ability to create queries on countless file attributes and tags such as: data related to a specific tag or project name, projects that are no longer active, file age, user/group IDs, path, file type (e.g., JPEG) and specific extensions, data with unknown owners.
      • A global metadata catalog eliminates the manual effort of finding custom data sets and moving them separately from different storage silos since Komprise can create a virtual data set based on the query and systematically and continuously move data from multiple file and object silos to the target location.
      • Improves IT and business collaboration around data, as data owners/users can participate in data tiering. 

      Watch the TechKrunch session: Deep Analytics Actions with One Global File Index

      TechKrunch-Nov10

      Getting Started with Komprise:

  • H
    • Hierarchical Storage Management (HSM)

      Hierarchical Storage Management (HSM) software, also known as tiered storage, was designed for distributed server environments to automate the process of identifying cold data sets and automatically migrating them from primary disk to less expensive optical and tape storage devices. Going back to the era of the mainframe, HSM was also supposed to handle file recall requests automatically whenever a user clicked on a stub file.

      Unfortunately, these early HSM products (see Wikipedia for a history) suffered from a number of deficiencies such as:

      • They were custom designed for specific proprietary storage systems, which limited hardware choices and resulted in vendor lock-in.
      • Many required file server agents that required substantial memory and compute resources, and operated in the direct data path, impacting performance.
      • They used static stub files left in place of the moved data. These static stub files could be corrupted, deleted, and orphaned making it difficult if not impossible to locate the original source file.
      • The early HSM solutions did not scale well. As file counts increased, HSM performance deteriorated significantly since they were traditional database-driven architectures.
      • The solutions would disrupt storage system performance, interrupting active usage.
      • File recalls could take a long time, especially if the requested file was stored on tape.

      So bad were these deficiencies that HSM became a “bad word” amongst IT professionals. Many of those IT pros believed that the only viable way to manage storage was to just keep adding more capacity to the primary tier.

      storage-swiss-768x512

      The data center landscape has changed, and organizations now have a wide range of data storage options available. Flash memory devices have replaced high-performance physical disk drives as Tier-1 storage. High-performance and commodity physical hard disks now function as secondary and tertiary storage tiers. Cloud file storage and object storage options are available to handle large bulk, long-term storage requirements. All of these options are needed to combat the unstructured data onslaught (and data sprawl and high data storage costs) that most organizations are facing. However, the main problem remains: how to automatically detect “warm” and “cold” data sets, then continuously migrate them to the most cost-effective storage tier while also managing the entire file life cycle. As outlined in this early review of Komprise:

      In short, we have more storage options than ever but less intelligence about how and when to move our increasing data to which storage platform.

      In a 2022 Blocks and Files review, Komprise Intelligent Data Management is referred to as an HSM or Information Lifecycle Management solution. This category of software is now more commonly known as unstructured data management, or by the broader term data services.

      Getting Started with Komprise:

    • High Performance Storage

      What is High Performance Storage?

      High performance storage is a type of storage management system designed for moving large files and large amounts of data around a network. High performance storage is especially valuable for moving around large amounts of complex data or unstructured data like large video files across the network.

      Used with both direct-connected and network-attached storage, high performance storage supports data transfer rates greater than one gigabyte per second and is designed for enterprises handling large quantities of data – in the petabyte range.

      HotandColdData-27x30

      High performance storage supports a variety of methods for accessing and creating data, including FTP, parallel FTP, VFS (Linux), as well as a robust client API with support for parallel I/O.

      High performance storage is useful for managing hot or active data, but it can be very expensive for cold/inactive data. Since typically 60 to 90% of data in an organization becomes inactive/cold within months of creation, this data should be moved off high performance storage to get the best storage TCO without sacrificing performance.

      Is Cold Data Impacting Data Storage Performance?

      Unstructured data management policies ensure that data is always stored in the appropriate environment according to its usage, age, value and business priority, maximizing data storage performance while minimizing data storage costs.

      Read: The Need for Policies to Corral Your Unstructured Data

      Getting Started with Komprise:

    • Hosted Data Management

      With hosted data management, a service provider administers IT services, including infrastructure, hardware, operating systems, and system software, as well as the equipment used to support operations, including data storage, hardware, servers, and networking components.

      The managed service provider (MSP) typically sets up and configures hardware, installs and configures software, provides support and software patches, maintenance, and monitoring.

      Services may also include disaster recovery, security, DDoS (distributed denial of service) mitigation, and more.

      Hosted data management may be provided on a dedicated or shared-service model. In dedicated hosting, the service provider sets aside servers and infrastructure for each client; in shared hosting, resources are pooled and charged for on a per-use basis.

      Hosted data management can also be referred to as cloud services. With cloud hosting, resources are dispersed between and across multiple servers, so load spikes, downtime, and hardware dependencies are spread across multiple servers working together.

      In this arrangement, the client usually has administrative access through a Web-based interface.

      Another popular model is hybrid cloud hosted data management – where the administrative console resides in the cloud but all the data management (analyzing data, moving data, accessing data) is done on-premises. Komprise Intelligent Data Management uses this hybrid approach as it offers the best of both worlds – a fully managed service that reduces operating costs without compromising the security of data.

      Komprise-Architecture-Page-SOCIAL-768x402

      Getting Started with Komprise:

    • Hot Data

      Hot data is business-critical data that needs to be accessed frequently and resides on primary storage (NAS).

      Hot data is considered to be of high value and importance. This type of data is typically stored in fast memory, such as RAM, to ensure quick and efficient access. Examples of hot data include frequently used databases, in-memory caches, and real-time data streams.

      HotandColdData2

      Getting Started with Komprise:

    • Hybrid Cloud Storage

      What is Hybrid Cloud Storage?

      As data moves from on-premises data centers to the public cloud and to edge computing devices, enterprise data storage has increasingly moved to a hybrid cloud storage model, where data is stored on the infrastructure that will leverage the processing power of the public cloud. In Gartner’s Hybrid Cloud Storage Market Guide (subscription required), they recommend that infrastructure and operations leaders identify the right workloads, types of data and use cases for cloud data storage and prioritize hybrid cloud storage solutions that support cloud-native access.

      In the 2021 Komprise Unstructured Data Management Survey, 50% of enterprises responded that they have data stored in a mix of on-premises and cloud-based storage and 56% stated that their top priority is cloud data migration.

      Download the State of Unstructured Data Management report. 


      In August 2022, Komprise published the 2nd annual State of Unstructured Data Management Report.


      Getting Started with Komprise:

    • Hypertransfer

      Hypertransfer for Komprise Elastic Data Migration migrates file data to the cloud 25x faster.

      Announced in December 2022, Komprise Hypertransfer for Elastic Data Migration creates dedicated virtual channels across the WAN to accelerate cloud data migrations. By establishing dedicated channels to send data, Komprise Hypertransfer minimizes WAN roundtrips, which mitigates SMB protocol chattiness and dramatically improves data transfer rates. Tests using a dataset dominated by small files show that Komprise accelerates cloud data migration 25x compared with alternatives.


      Read the Hypertransfer white paper.

      Getting Started with Komprise:

  • I
    • Immutable Storage

      What is immutable storage?

      Immutable storage is a feature of file storage, or more typically object storage, that protects data from modification or deletion for a set retention period. Immutable storage is often used in highly regulated industries such as finance and health care but is now gaining popularity across other industries as a defense against ransomware or insider threats.

      Implementations of immutable storage such as AWS S3 Object Lock are certified by independent third parties to ensure they comply with government regulations.
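
      As a rough illustration of how immutable storage can be configured programmatically (the bucket name, region and retention period below are hypothetical), this boto3 sketch creates an Amazon S3 bucket with Object Lock enabled and applies a default compliance-mode retention rule:

        import boto3

        s3 = boto3.client("s3")

        # Object Lock can only be enabled when the bucket is created.
        s3.create_bucket(
            Bucket="example-immutable-archive",  # hypothetical bucket name
            ObjectLockEnabledForBucket=True,
            CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
        )

        # Default rule: every new object is locked in compliance mode for 365 days.
        s3.put_object_lock_configuration(
            Bucket="example-immutable-archive",
            ObjectLockConfiguration={
                "ObjectLockEnabled": "Enabled",
                "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
            },
        )

      Once the retention rule is in place, objects written to the bucket cannot be overwritten or deleted until their retention period expires.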

      Read the blog post: How to Protect File Data from Ransomware at 80% Lower Cost

      Since approximately 80% of data today is unstructured, organizations cannot afford to leave file data unprotected from ransomware attacks. Early ransomware detection delivers the best outcome, but because attacks are constantly evolving, detection is not foolproof and can be difficult. Investing in ways to recover data if you are attacked by ransomware is therefore essential. An immutable copy of data in a location separate from your data storage and data backups gives you a way to recover data in the event of a potentially devastating ransomware attack. But keeping multiple copies of data can get prohibitively expensive.

      Getting Started with Komprise:

    • Information Lifecycle Management

      Information Lifecycle Management (ILM) is a data management strategy that focuses on managing the flow of data from creation to deletion. The goal of ILM is to optimize the use of storage resources and improve data management efficiency and cost-effectiveness.

      Gartner defines ILM this way:

      Information Lifecycle Management (ILM) is an approach to data and storage management that recognizes that the value of information changes over time and that it must be managed accordingly. ILM seeks to classify data according to its business value and establish policies to migrate and store data on the appropriate storage tier and, ultimately, remove it altogether. ILM has evolved to include upfront initiatives like master data management and compliance.

      Source

      TechTarget defines ILM this way:

      Information lifecycle management (ILM) is a comprehensive approach to managing an organization’s data and associated metadata, starting with its creation and acquisition through when it becomes obsolete and is deleted.

      Source

      ILM involves a series of activities that are performed at different stages of the data lifecycle, such as data creation, data storage, data protection, data archiving, and data deletion. At each stage, the data is managed and stored according to its value, importance, and frequency of use.

      ILM typically involves the use of data classification, data retention, data archiving policies, and data management tools and technologies. These policies and technologies help to manage the flow of data throughout its lifecycle and ensure that it is stored in the most appropriate location and format for its current needs.
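
      As a minimal sketch of the classification step (the policy thresholds and the share path are hypothetical, and a real ILM tool would act on the results rather than just print them), the following Python snippet assigns each file a lifecycle stage based on its last-access time:

        import time
        from pathlib import Path

        # Hypothetical policy: days of inactivity before data moves to the next stage.
        POLICY = {"archive_after_days": 90, "delete_after_days": 2555}  # ~7 years

        def lifecycle_stage(path: Path) -> str:
            """Classify a file into a lifecycle stage from its last-access time."""
            age_days = (time.time() - path.stat().st_atime) / 86400
            if age_days >= POLICY["delete_after_days"]:
                return "eligible-for-deletion"
            if age_days >= POLICY["archive_after_days"]:
                return "archive-tier"
            return "active-tier"

        for f in Path("/data/projects").rglob("*"):  # hypothetical file share
            if f.is_file():
                print(f, lifecycle_stage(f))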

      Benefits of implementing ILM

      Improved storage utilization and cost savings

      By managing data throughout its lifecycle, ILM helps organizations ensure that the most valuable and important data is stored on high-performance storage systems, while less important data is stored on lower-cost storage systems.

      Increased data protection and security

      By managing the flow of data and applying appropriate data protection and security measures, ILM helps reduce the risk of data loss or corruption.

      Better compliance

      ILM helps organizations meet regulatory and compliance requirements by ensuring that data is managed and stored in accordance with the organization’s policies and best practices.

      Overall, Information Lifecycle Management is an essential aspect of modern data management and is critical to effectively manage and store data securely and with cost savings in mind.

      ILM Challenges

      • Complexity: In organizations with large and complex data environments it can be difficult to effectively manage and store data throughout its lifecycle. This can lead to data sprawl, increased data storage costs, and increased security and compliance risks.
      • Cost: Implementing ILM requires investment in the right data management tools and technologies, and structured and unstructured data management policies and processes. This can be a significant cost for organizations, especially those with limited budgets.
      • Data protection and security: ILM can introduce new security and privacy risks, especially if sensitive data is stored on low-cost or low-security storage systems. Organizations should ensure that they have appropriate data protection and security measures in place to mitigate these risks.

      By carefully planning and executing ILM strategies, organizations can manage and store their data throughout its lifecycle, cutting costs while ensuring that data is protected, secure, and compliant with regulatory requirements.

      On-going Unstructured Data Management as part of an ILM Strategy

      As we noted when we launched Smart Data Workflows, with billions of files and objects, analytics plus continuous mobilization is essential because data has a lifecycle and data management is not a one-time thing. Whether the use case is data analytics, data migration, data tiering, data replication, data search or anything related to the data lifecycle, it is important to look for an unstructured data management solution that delivers on-going data management. Learn more about Komprise Intelligent Data Management.

      Getting Started with Komprise:

    • Intelligent Data Management

      Intelligent Data Management is the process of managing unstructured data throughout its lifecycle with analytics and intelligence. It is also the name of the Komprise platform as a service: Intelligent Data Management.

      The criteria for a solution to be considered Intelligent Data Management include:

      Analytics-Driven Data Management

      Is the solution able to leverage analysis of the data to inform its behavior? Is it able to deliver analysis of the data to guide the data management planning and policies? Learn more about Komprise Analysis.

      Storage-Agnostic Data Management

      Is the data management solution able to work across different vendor and different storage platforms?

      Adaptive Data Management

      Based on the network, storage, usage, and other conditions, is the data management solution able to intelligently adapt its behavior? For instance, does it throttle back when the load gets higher, does it move bigger files first, does it recognize when metadata does not translate properly across environments, does it retry when the network fails?

      Closed Loop Unstructured Data Management

      Analytics feeds the data management which in turn provides additional analytics. A closed loop system is a self-learning system that uses machine learning techniques to learn and adapt progressively in an environment.

      Efficient and Cost Effective Data Management

      An intelligent data management solution should be able to scale out efficiently to handle the load, and to be resilient and fault tolerant to errors. It should also ensure you are able to achieve data storage cost savings.

      Intelligent data management solutions typically address the following use cases:

      • Analysis: Find the what, who, when of how data is growing and being used
      • Planning: Understand the impact of different policies on costs, and on data footprint
      • Data Tiering or Data Archiving: Support various forms of managing cold data and offloading it from primary storage and backups without impacting user access. This includes tiering and archiving data by policy (moving data with links for seamless access), archiving project data as a collection, and archiving without links (moving data without leaving a link behind when data needs to be moved out of an environment).
      • Data Replication: Create a copy of data on another location.
      • Data Migration: Move data from one storage environment to another
      • Deep Analytics: Search and query data at scale across storage

      Getting Started with Komprise:

    • Isilon CloudPools (Dell EMC)

      What are Isilon CloudPools?


      Dell EMC PowerScale (formerly Isilon) CloudPools software provides policy-based automated tiering that allows for an additional storage tier for the Isilon cluster at your data center. This technology is a form of storage pools which are collections of storage volumes that often blend different tiers of storage into a logical pool or shared storage environment.

      CloudPools supports tiering data from Dell PowerScale Isilon to public, private or hybrid cloud options. This technology moves archived files to the destination storage in a proprietary format and then references the moved files via stubs. File data access from the object storage is not possible, eliminating the use of cloud-based functions such as AI/ML. Functions such as backup by external application or migration to new storage array require full rehydration of data leading to egress fees from cloud storage and the need to retain on-prem storage capacity.

      Learn more about CloudPools.

      Read the blog post: What you need to know before jumping into the cloud tiering pool


      Read the white paper: Cloud Tiering: Storage-Based vs Gateways vs File-Based: Which is Better and Why?

      Learn how to save on storage with Dell EMC and Komprise.

      Getting Started with Komprise:

    • Isilon Tiering
      The Isilon Tiering solution from Dell EMC is called PowerScale CloudPools.
      Dell EMC PowerScale Isilon CloudPools software provides policy-based automated tiering that allows for an additional storage tier for the Isilon cluster at your data center. CloudPools supports tiering data from Dell PowerScale Isilon to public, private or hybrid cloud options. This technology is a form of storage pools, which are collections of storage volumes exported to a shared storage environment.
      Cloud tiering and data tiering (or archiving) can deliver significant cost savings as part of a cloud data strategy by offloading unused cold data to more cost-efficient cloud storage solutions. The approach you take to Isilon tiering can either create an easy path to the cloud with native access and full use of data in the cloud or it can create costly cloud egress and lock-in. Array block-level tiering is a mismatch for the cloud. Isilon cloud tiering moves blocks rather than entire files, which has the following ramifications:
      • Limited policies result in more data access from the cloud.
      • Defragmentation of blocks leads to higher cloud costs.
      • Sequential reads lead to higher cloud costs and lower performance.
      • Tiering blocks impacts performance of the storage array.

      Read the blog post: What you need to know before jumping into the cloud tiering pool

      PowerScale Isilon Tiering Choices

      When it comes to considering PowerScale Isilon data tiering and PowerScale Isilon cloud tiering, it’s important to understand your cloud tiering choices. Cloud tiering and archiving can save you millions by offloading infrequently accessed cold data to cost-efficient cloud data storage. But, the approach you take can either create an easy path to the cloud for file data with full use of data in the cloud or it can create costly cloud egress and lock-in.

      Smart Migration from PowerScale Isilon with Komprise: Analyze your data first, tier off cold data, deliver 25x faster cloud data migrations and deliver transparency / no disruption to your users and native data access / no storage-vendor lock-in for your file and object data.

      Learn more about cloud tiering and your cloud tiering choices.
      Learn more about Komprise for Dell EMC.

      Getting Started with Komprise:

  • M
    • Metadata

      Metadata means “data about data,” or data that describes other data. The prefix “meta” typically means “an underlying definition or description” in technology circles.

      Metadata makes finding and working with data easier – allowing the user to sort or locate specific documents. Some examples of basic metadata are author, date created, date modified, and file size. Metadata is also used for unstructured data such as images, video, web pages, spreadsheets, etc.

      Web pages often include metadata in the form of meta tags. Description and keywords meta tags are commonly used to describe content within a web page. Search engines can use this data to help understand the content within a page.

      Metadata can be created manually or through automation. Manual creation tends to be more accurate, because it allows the user to input the relevant information. Automated metadata creation is usually more elementary, capturing only basic information such as file size, file extension, and when the file was created.

      Metadata can be stored and managed in a database; however, without context, it may be impossible to identify metadata just by looking at it. Metadata is useful in managing unstructured data since it provides a common framework to identify and classify a variety of data including videos, audio, genomics data, seismic data, user data, documents, and logs.
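
      As a simple illustration of automated metadata collection (the file name is hypothetical), the following Python snippet gathers the kind of basic file-system metadata described above:

        import os
        import stat
        import time

        def basic_metadata(path: str) -> dict:
            """Collect basic file-system metadata ("data about data") for one file."""
            st = os.stat(path)
            return {
                "size_bytes": st.st_size,
                "modified": time.ctime(st.st_mtime),
                "accessed": time.ctime(st.st_atime),
                "is_directory": stat.S_ISDIR(st.st_mode),
            }

        print(basic_metadata("example.txt"))  # hypothetical file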

      Learn more about the Komprise Global File Index and Deep Analytics.

      Learn more about the Komprise Intelligent Data Management architecture.

      What is Metadata?

      Metadata is “data about data.” It is structured data that references and identifies data to give an essential extra layer of shorthand information. Metadata schema can be simple or complex but it provides an important underlying definition or description.

      Types of Metadata

      There are three main types of metadata:

      Structural Metadata – examples include:

      • Page Numbers
      • Sections
      • Chapters
      • Indexes
      • Tables of Contents

      Administrative Metadata – examples include:

      • Technical Metadata – Decoding and rendering files information
      • Preservation Metadata – Information necessary for the long-term management and archiving of digital assets
      • Rights Metadata – Information relating to intellectual property and usage rights

      Descriptive Metadata – examples include:

      • Unique identifiers (e.g. an ISBN)
      • Physical attributes (e.g. file dimensions or Pantone colors)
      • Bibliographic attributes (e.g. author or creator, title, and keywords)

      Metadata Management

      Metadata management is the administration of data that describes other data. To manage metadata effectively there must be established policies.
      Metadata management is important for understanding, aggregating, grouping and sorting data for use. Over the last decade, the rapid growth of data has created the need for metadata management to provide a clear insight into what data to produce and what data to consume. This ensures data becomes a valuable enterprise asset.

      Learn more about Komprise Smart Data Workflows

      Learn more about Komprise Deep Analytics and the metadata-driven Komprise Global File Index

      Getting Started with Komprise:

  • N
    • Native Data Access

      Native Data Access: Having direct access to tiered or archived data without needing rehydration because files are accessed as objects from the target storage.

      The Benefits of Cloud Native Data Access

      Gartner estimates that by 2025 more than 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021. According to the 2022 State of Unstructured Data Management report, enterprise IT organizations are looking to optimize data storage efficiency by moving more data to the cloud. As a result, cloud NAS file data storage options are attracting attention. In fact, cloud NAS topped the list of planned storage investments for 2023 (47%), followed closely by cloud object storage (44%). Enterprise data storage vendors such as NetApp have popular cloud NAS offerings alongside cloud-native offerings such as Amazon FSx and Azure Files. These services are ideal for active or “hot” data requiring high performance and response times; rarely-accessed or “cold” data can live on object storage, which delivers significant cost savings for long-term storage.

      Read the Blog Post: Why Cloud Native Data Access Matters

      As you migrate file workloads to the cloud, it’s important to not limit the potential of your data by locking data into a proprietary format. Cloud native data access is essential to unleash the potential of the cloud. Cloud native is a way to move data to the cloud without lock in, which means that your data is no longer tied to the file system from which it was originally served.

      Watch the TechKrunch session: How to Access Tiered Data in the Cloud

      This short webinar demonstrates how Komprise allows you to access your data wherever it's stored, whenever you want, without rehydration. Because moved data is always intact, you can extract data value with both file and native access – and without penalty. Read the Komprise Architecture Overview for more information on Native Access.

      Getting Started with Komprise:

    • Native File Format

      Native file format (also called native data format) is the file structure in which a document is created and maintained by the original creating application. Komprise provides transparent data tiering from the source storage array with native access to the cold data on the target, without getting in front of hot data on the source.

      Learn more about the benefits of Native Data Access and Komprise Transparent Move Technology


      Getting Started with Komprise:

    • NetApp Cloud Tiering

      The NetApp Cloud Tiering solution is called FabricPool.

      FabricPool is a NetApp tiering technology that enables automated tiering of data from an all-flash appliance to low-cost object storage tiers either on or off premises. This technology is a form of storage pools which are collections of storage volumes exported to a shared storage environment.

      Cloud tiering and data tiering (or data archiving) can deliver significant data storage cost savings as part of a cloud storage strategy by offloading unused cold data to more cost-efficient cloud storage solutions. The approach you take to NetApp tiering can either create an easy path to the cloud with native access and full use of data in the cloud or it can create costly cloud egress and lock-in.

      What you need to know before jumping into the cloud pool.

      Learn more about your cloud tiering choices.

      Learn more about Komprise for NetApp.

      Getting Started with Komprise:

    • NetApp FabricPool

      What is NetApp FabricPool? Is it the Right Choice for NetApp Data Tiering?

      FabricPool (now called NetApp Cloud Tiering) is a NetApp storage technology that enables automated data tiering at the block level from flash storage to low-cost object storage tiers, in the cloud or on premises. FabricPool is a form of storage pools which are collections of storage volumes that often blend different tiers of storage into a logical pool or shared storage environment.

      Originally developed to tier “snapshot” or backup data, the functionality has been extended to infrequently accessed blocks of the active file system. Tiered data is stored in a proprietary format in object storage and as a result can only be read via the original NetApp array. File data access from the object storage is not possible, eliminating the use of cloud-based tools for AI/ML. Additionally, functions such as backup by an external application or migration to a new storage array require full rehydration of data, leading to egress fees from cloud storage and the need to retain sufficient storage capacity on-premises.

      Read the white paper, Cloud Tiering: Storage-Based vs Gateways vs File-Based, for more discussion on storage pools.

      Array block-level tiering is a mismatch for the cloud. Because NetApp cloud tiering moves blocks rather than entire files, it has the following ramifications:

      • Limited policies result in more data access from the cloud.
      • Defragmentation of blocks leads to higher cloud costs.
      • Sequential reads lead to higher cloud costs and lower performance.
      • Tiering blocks impacts performance of the storage array.


      Read the blog post: What you need to know before jumping into the cloud tiering pool

      Learn more about FabricPool technology.

      When it comes to considering NetApp data tiering and NetApp cloud tiering, it’s important to understand your cloud tiering choices. Cloud tiering and archiving can save you millions by offloading infrequently accessed cold data to cost-efficient cloud data storage. But, the approach you take can either create an easy path to the cloud for file data with full use of data in the cloud or it can create costly cloud egress and lock-in. Also, what about cloud data migration and cloud tiering for other storage systems (i.e. Isilon cloud tiering) if you are a multi-storage enterprise IT organization? And what about tiering data from older versions of NetApp? This is why increasingly the market is moving to storage-agnostic unstructured data management.


      Learn about Komprise’s native integration with NetApp and why Komprise is the right choice for NetApp cloud data tiering.

      Getting Started with Komprise:

    • Network Attached Storage (NAS)


      What is Network Attached Storage?

      Network Attached Storage (NAS) definition: A NAS system is a storage device connected to a network that allows storage and retrieval of data from a centralized location for authorized network users and heterogeneous clients. These devices generally consist of an engine that implements the file services (NAS device), and one or more devices on which data is stored (NAS drives).

      The purpose of a NAS system is to provide a local area network (LAN) with file-based, shared storage in the form of an appliance optimized for quick data storage and retrieval. NAS is a relatively expensive storage option, so it should only be used for hot data that is accessed the most frequently. Many enterprise IT organizations today are looking to migrate NAS and object data to the cloud to reduce costs and improve agility and efficiency.

      NAS Storage Benefits

      Network attached storage devices are used to remove the responsibility of file serving from other servers on a network and allow for a convenient way to share files among multiple computers. Benefits of dedicated network attached storage include:

      • Faster data access
      • Easy to scale up and expand upon
      • Remote data accessibility
      • Easier administration
      • OS-agnostic compatibility (works with Windows and Apple-based devices)
      • Built-in data security with compatibility for redundant storage arrays
      • Simple configuration and management (typically does not require an IT pro to operate)

      NAS File Access Protocols

      Network attached storage devices are often capable of communicating via a number of different file access protocols, such as NFS and SMB/CIFS.

      Most NAS devices have a flexible range of data storage systems that they’re compatible with, but you should always ensure that your intended device will work with your specific data storage system.

      Enterprise NAS Storage Applications

      In an enterprise, a NAS array can be used as primary storage for storing unstructured data and as backup for data archiving or disaster recovery (DR). It can also function as an email, media database or print server for a small business. Higher-end NAS devices can hold enough disks to support RAID, a storage technology that combines multiple hard disks into one unit to provide better performance, redundancy, and high availability.

      Data on NAS systems (aka NAS device) is often mirrored (replicated) to another NAS system, and backups or snapshots of the footprint are kept on the NAS for weeks or months. This leads to at least three or more copies of the data being kept on expensive NAS storage. A NAS storage solution does not need to be used for disaster recovery and backup copies as this can be very costly. By finding and data tiering (or data archiving) cold data from NAS, you can eliminate the extra copies of cold data and cut cold data storage costs by over 70%.

      Check out our video on NAS storage savings to get a more detailed explanation of how this concept works in practice.

      Network Attached Storage (NAS) Data Tiering and Data Archiving

      Since NAS storage is typically designed for higher performance and can be expensive, data on NAS is often tiered, archived and moved to less expensive storage classes. NAS vendors offer some basic data tiering at the block-level to provide limited savings on storage costs, but not on backup and DR costs. Unlike the proprietary block-level tiering, file-level tiering or archiving provides a standards-based, non-proprietary solution to maximize savings by moving cold data to cheaper storage solutions. This can be done transparently so users and applications do not see any difference when cold files are archived. Read this white paper to learn more about the differences between file tiering and block tiering.

      NAS Migration to the Cloud

      Cloud NAS is growing in popularity. But the right approach to migrating unstructured data to the cloud is essential. Unstructured data is everywhere. From genomics and medical imaging to streaming video, electric cars, and IoT products, all sectors generate unstructured file data. Data-heavy enterprises typically have petabytes of file data, which can consist of billions of files scattered across different storage vendors, architectures and locations. And while file data growth is exploding, IT budgets are not. That's why enterprise IT organizations are looking to migrate file workloads to the cloud. However, they face many barriers, which can cause migrations to take weeks to months and require significant manual effort.

      Cloud NAS Migration Challenges

      Common unstructured data migration challenges include:

      • Billions of files, mostly small: Unstructured data migrations often require moving billions of files, the vast majority of which are small files that have tremendous overhead, causing data transfers to be slow.
      • Chatty protocols: Server message block (SMB) protocol workloads—which can be user data, electronic design automation (EDA) and other multimedia files or corporate shares—are often a challenge since the protocol requires many back-and-forth handshakes which increase traffic over the network.
      • Large WAN latency: Network file protocols are extremely sensitive to high-latency network connections, which are essentially unavoidable in wide area network (WAN) migrations.
      • Limited network bandwidth: Bandwidth is often limited or not always available, causing data transfers to become slow, unreliable and difficult to manage.
      Learn more about Komprise Smart Data Migration.

      Network Attached Storage FAQ

      These are some of the most commonly asked questions we get about network attached storage systems.

      How are NAS drives different than typical data storage hardware?

      NAS drives are specifically designed for constant 24×7 use, with high reliability, built-in vibration mitigation, and optimization for use in RAID setups. Network attached storage systems also benefit from an abundance of health management systems designed to keep them running smoothly for longer than a standard hard drive would.

      Which features are the most important ones to have in a NAS device?

      The ideal NAS devices have multiple (2+) drive bays, hardware-level encryption acceleration, support for widely used platforms such as AWS Glacier and S3, and moderately powerful multicore CPUs paired with at least 2GB of RAM. If you're looking for these types of features, Seagate and Western Digital are some of the most reputable brands in the NAS industry.

      Are there any downsides to using NAS storage?

      NAS storage systems can be quite expensive when they’re not optimized to contain the right data, but this can be remedied with an analytics-driven NAS data management software, like Komprise Intelligent Data Management.

      Using NAS Data Management Tools to Substantially Reduce Storage Costs

      One of the biggest issues organizations are facing with NAS systems is trouble understanding which data they should be storing on their NAS drives and which should be offloaded to more affordable types of storage. To keep data storage costs lower, an analytics-based NAS data management system can be implemented to give your organization more insight into your NAS data and where it should be optimally stored.

      Of the thousands of data-centric companies we've worked with, most needed less than 20% of their total data stored on high-performance NAS drives. With a more thorough understanding of their NAS data, organizations often realize that their NAS storage needs are much lower than they originally thought, leading to substantial storage savings, often greater than 50%, in the long run.

      Komprise makes it possible for customers to know their NAS and S3 data usage and growth before buying more storage. Explore your storage scenarios to get a forecast of how much could be saved with the right data management tools.

      This is what Komprise Dynamic Data Analytics provides.

      NAS Fast Facts:

      • Network-attached storage (NAS) is a type of file computer storage device that provides a local-area network with file-based shared storage. This typically comes in the form of a manufactured computer appliance specialized for this purpose, containing one or more storage devices.
      • Network attached storage devices are used to remove the responsibility of file serving from other servers on a network, and allow for a convenient way to share files among multiple computers. Benefits of dedicated network attached storage include faster data access, easier administration, and simple configuration.
      • In an enterprise, a network attached storage array can be used as primary storage for storing unstructured data, and as backup for archiving or disaster recovery. It can also function as an email, media database or print server for a small business. Higher-end network attached storage devices can hold enough disks to support RAID, a storage technology that combines multiple hard disks into one unit to provide better performance, redundancy, and high availability.
      • Data on NAS systems is often mirrored (replicated) to another NAS system, and backups or snapshots of the footprint are kept on the NAS for weeks or months. This leads to at least three or more copies of the data being kept on expensive NAS devices.

      Read the white paper: How to Accelerate NAS Migrations and Cloud Data Migrations 

      Know the difference between NAS and Cloud Data Migration vs. Tiering and Archiving


      Getting Started with Komprise:

    • Network File System (NFS)

      What is NFS?

      A network file system (NFS) is a mechanism that enables storage and retrieval of data from multiple hard drives and directories across a shared network, enabling local users to access remote data as if it was on the user’s own computer.

      What is the NFS protocol?

      The NFS protocol is one of several distributed file system standards for network-attached storage (NAS). It was originally developed in the 1980s by Sun Microsystems, and is now managed by the Internet Engineering Task Force (IETF).

      NFS is generally implemented in computing environments where centralized management of data and resources is critical. Network file system works on all IP-based networks. Depending on the version in use, TCP and UDP are used for data access and delivery.

      The NFS protocol is independent of the computer, operating system, network architecture, and transport protocol, which means systems using the NFS service may be manufactured by different vendors, use different operating systems, and be connected to networks with different architectures. These differences are transparent to the NFS application, and the user.
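
      As a small how-to sketch (the server, export path and mount point are hypothetical, and the command requires root privileges on a Linux client), mounting an NFS export so remote data appears as a local directory can look like this:

        import subprocess

        server_export = "nfs-server.example.com:/export/projects"  # hypothetical export
        mount_point = "/mnt/projects"                               # must already exist

        # Equivalent to: mount -t nfs4 -o rw nfs-server.example.com:/export/projects /mnt/projects
        subprocess.run(
            ["mount", "-t", "nfs4", "-o", "rw", server_export, mount_point],
            check=True,
        )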


      Getting Started with Komprise:

    • New Technology File System (NTFS) Extended Attributes

      Properties organized in (name, value) pairs that can optionally be set on New Technology File System (NTFS) files or directories to record information that can't be stored in the file itself.

      Getting Started with Komprise:

    • NFS Data Migration

      NFS protocol data migration refers to the process of transferring data stored in NFS protocol-based systems, such as Unix and Linux file servers, to another system, such as a new file server or a cloud-based storage service. The NFS (Network File System) protocol is a file sharing protocol used by Unix and Linux-based systems to access files and other resources on a server over a network.

      Like any data migration, an NFS data migration involves several steps, such as data extraction, data transformation, data loading, data verification, and data archiving. The goal of NFS protocol data migration is to ensure the accurate and secure transfer of data to the new system, while minimizing any disruptions to business operations and preserving the integrity of the data.

      Although NFS is a less chatty protocol than SMB, NFS data migrations can still be challenging due to the complex nature of the NFS protocol and the large volumes of unstructured data that are often involved. To ensure a successful file data migration, organizations typically use specialized tools and services, such as data migration software, cloud data migration services, and managed data migration services.

      Komprise delivers 27x faster NFS migrations


      To address the critical NFS migration issues (and SMB and S3/object protocols) IT faces today, Komprise has developed Elastic Data Migration. This super-fast data migration solution is a highly parallelized, multi-processing, multi-threaded approach that works at two levels:

      • Multi-level Parallelism: Maximizes the use of available resources by exploiting parallelism at multiple levels: shares and volumes, directories, files, and threads to maximize performance. Komprise Elastic Data Migration breaks up each migration task into smaller ones that execute across the Komprise Observers. Komprise Observers are a grid of one or more virtual appliances that run the Komprise Intelligent Data Management solution. All of this parallelism occurs automatically across the grid of Observers. The user simply creates a migration task and can configure the level of parallelism. Komprise does the rest.
      • Protocol-level Optimizations: Reduces the number of round-trips over the protocol during a migration to eliminate unnecessary chatter. Rather than relying on generic NFS clients provided by the underlying operating system, Komprise has fine-tuned the NFS client to minimize overhead and unnecessary back-and-forth messaging. This is especially beneficial when moving data over high-latency networks such as WANs.

      Read the Komprise Elastic Data Migration white paper.
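
      The following sketch is not the Komprise implementation; it is only a generic, single-node illustration of the file-level parallelism idea described above, assuming hypothetical source and target mount points:

        import shutil
        from concurrent.futures import ThreadPoolExecutor
        from pathlib import Path

        def copy_file(src: Path, src_root: Path, dst_root: Path) -> None:
            """Copy one file, preserving its relative path and timestamps."""
            dst = dst_root / src.relative_to(src_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)

        def parallel_copy(src_root: str, dst_root: str, workers: int = 16) -> None:
            """Copy all files under src_root to dst_root using a pool of worker threads."""
            src, dst = Path(src_root), Path(dst_root)
            files = [p for p in src.rglob("*") if p.is_file()]
            with ThreadPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(copy_file, f, src, dst) for f in files]
                for fut in futures:
                    fut.result()  # surface any copy errors

        parallel_copy("/mnt/source_share", "/mnt/target_share")  # hypothetical mounts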

      Getting Started with Komprise:

  • O
    • Object Data Migration

      Object data migration is a type of data migration that supports the movement of object-based data; object data storage uses a flat address space and assigns a unique identifier to each piece of data. There are several factors to consider when planning and executing object data migrations, including:

      • Data compatibility: Organizations need to ensure that the new data storage system is compatible with the existing data and can support the same data formats, protocols, and applications.
      • Data protection: Object data migrations can be complex and lengthy, and organizations need to ensure that their data is protected during the migration process. This may involve using backup and recovery tools, implementing data encryption and other security measures.
      • Performance and scalability: Ensure that the new storage system can meet IT’s performance and scalability requirements.

      Smarter, Faster, Proven Cloud Data Migration

      Learn more about NAS and object data migration with Komprise Elastic Migration. Whether migrating to the cloud, cloud NAS or to a NAS in your data center, with Komprise Elastic Data Migration you get the fast, predictable and cost-efficient data migration for file and object data.


      Getting Started with Komprise:

    • Object Lock

      What is Object Lock?

      Object Lock is the Amazon S3 object storage API implementation of immutable storage. Object Lock prevents objects from alteration or deletion for a set retention period. Object Lock is available in two modes:

      • Governance mode, which allows privileged administrators to override the Object Lock protection.
      • Compliance mode, the stricter mode, which cannot be overridden even by administrators for the length of the retention period.

      Many of our customers use Komprise to archive cold data to Amazon S3 and want these files to be immutable for compliance and regulatory purposes. They may want protection against ransomware or malware incidents that can infect NAS shares. For both of these use cases, Komprise supports Amazon S3 buckets configured with S3 Object Lock, which allows customers to store objects using a Write-Once-Read-Many (WORM) model. Once Komprise archives data into such a bucket, the data cannot be overwritten or deleted, providing file retention that meets compliance regulations and protects data from being encrypted by malware or ransomware.
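
      As a rough sketch of per-object retention (the bucket name, key and one-year period are hypothetical, and the bucket must already have Object Lock enabled), uploading a file as a WORM-protected object with the AWS SDK for Python can look like this:

        from datetime import datetime, timedelta, timezone

        import boto3

        s3 = boto3.client("s3")

        with open("cold-file.dat", "rb") as f:                      # hypothetical local file
            s3.put_object(
                Bucket="example-locked-bucket",                     # hypothetical bucket
                Key="archive/cold-file.dat",
                Body=f,
                ObjectLockMode="COMPLIANCE",                        # cannot be overridden
                ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=365),
            )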


      Learn more about Komprise for cyber resiliency, including optimizing your file data defenses against cyber incidents and system failure.

      Read the blog post: How to Protect File Data from Ransomware at 80% Lower Cost

      Getting Started with Komprise:

    • Object Storage

      What is Object Storage?

      Object storage, also known as object-based storage, object data storage or cloud storage, is a way of addressing and manipulating data storage as objects. Objects are kept inside a single repository and are not nested in a folder inside other folders. 


      Each object has a distinct global identifier or key that is unique within its namespace. Objects are accessed via URL, which allows object storage to abstract multiple regions, data centers and nodes for essentially unlimited capacity behind a simple namespace.

      Objects, unlike files, have no hierarchy or directories but are stored in a flat namespace. Another key difference versus files is that user or application metadata is stored in the form of key-value pairs. For example, when you take a picture with your phone and store it to the cloud, the object includes metadata such as “device=iphone.”

      Object storage can achieve extreme levels of durability by creating multiple copies or implementing erasure coding for data protection. Object storage is also cost-efficient and is a good option for cheap, deep, scale-on-demand storage. While many object storage APIs exist, Amazon’s Simple Storage Service or S3 has become the de-facto standard supported by other public and private cloud storage vendors.
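
      As a brief illustration of the flat key namespace and key-value metadata described above (the bucket, key and metadata values are hypothetical), storing and retrieving an object over the S3 API might look like this:

        import boto3

        s3 = boto3.client("s3")

        # Store an object under a single flat key, with user metadata as key-value pairs.
        with open("beach.jpg", "rb") as f:                          # hypothetical local file
            s3.put_object(
                Bucket="example-object-store",                      # hypothetical bucket
                Key="photos/2024/beach.jpg",                        # the object's unique key
                Body=f,
                Metadata={"device": "iphone", "owner": "jsmith"},
            )

        # Retrieve the object and its metadata directly by key -- no directory traversal.
        obj = s3.get_object(Bucket="example-object-store", Key="photos/2024/beach.jpg")
        print(obj["Metadata"])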


      Getting Started with Komprise:

    • Observer (Komprise Observer)

      The Komprise Observer is a virtual appliance running at the customer site that analyzes data across NAS silos, moves and replicates data by data management policy, and provides transparent file access to data that’s stored in the cloud.


      Getting Started with Komprise:

  • P
    • Policy-Based Data Management

      Policy-based data management is data management based on metrics such as data growth rates, data locations and file types, which data users regularly access and which they do not, which data has protection or not, and more.

      The trend to place strict policies on the preservation and dissemination of data has been escalating in recent years. This allows rules to be defined for each property required for preservation and dissemination that ensure compliance over time. For instance, to ensure accurate, reliable, and authentic data, a policy-based data management system should generate a list of rules to be enforced, define the data storage locations and the storage procedures that generate data tiering and archival information packages, and manage replication.

      Policy-based data management is becoming critical as the amount of unstructured data continues to grow while IT budgets remain flat. By automating movement of data to cheaper storage such as cloud data storage or private object storage, IT organizations can rein in data sprawl and cut costs.

      Other things to consider are how to secure data from loss and degradation by assigning an owner to each file, defining access controls, verifying the number of replicas to ensure integrity of the data, as well as tracking the chain of custody. In addition, rules help to ensure compliance with legal obligations, ethical responsibilities, generating reports, tracking staff expertise, and tracking management approval and enforcement of the rules.

      As data footprint grows, managing billions and billions of files manually becomes untenable. Using analytics-driven data management to define governing policies for when data should move, to where and having data management solutions that automate based on these policies becomes critical. Policy-based data management systems rely on consensus. Validation of these policies is typically done through automatic execution – these should be periodically evaluated to ensure continued integrity of your data.


      Getting Started with Komprise:

    • POSIX ACLS

      POSIX ACLs are fine-grained access rights for files and directories. An Access Control List (ACL) consists of entries specifying access permissions on an associated object. POSIX ACLs provide more granular control over file and directory permissions than the traditional POSIX permission model.

      The traditional POSIX permission model uses a set of file bits to define permissions for the owner, group, and other users. In contrast, POSIX ACLs provide a more flexible and fine-grained access control mechanism by allowing multiple entries in an access control list, each of which specifies a different user or group and a different set of permissions.

      With POSIX ACLs, you can grant or deny specific permissions to individual users or groups for a particular file or directory. For example, you can allow a particular user to read and write a file, but deny them the ability to execute it. You can also grant a group of users read-only access to a directory, but prevent them from modifying or deleting any files in that directory.

      POSIX ACLs are supported on many Unix-based operating systems, including Linux, BSD, and macOS. They can be managed using command-line utilities such as setfacl and getfacl.

      Not all filesystems support POSIX ACLs. Their behavior may vary across different implementations.
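
      As a short how-to sketch (the file path, user and group names are hypothetical; this assumes a Linux filesystem mounted with ACL support), granting and inspecting POSIX ACLs with setfacl and getfacl can look like this:

        import subprocess

        path = "reports/q3.xlsx"  # hypothetical file

        # Grant user "alice" read/write access beyond the owner/group/other bits.
        subprocess.run(["setfacl", "-m", "u:alice:rw", path], check=True)

        # Give group "auditors" read-only access.
        subprocess.run(["setfacl", "-m", "g:auditors:r", path], check=True)

        # Print the resulting access control list.
        subprocess.run(["getfacl", path], check=True)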


      Getting Started with Komprise:

    • Primary Storage

      Primary Storage, also known as Network Attached Storage (NAS), is the main area where data is stored for quick access. It is faster and more expensive compared to secondary storage, so it should not hold cold data.


      Getting Started with Komprise:

  • R
    • Ransomware

      What is ransomware?

      Ransomware is a form of malware or cyber-attack perpetrated by criminal organizations to hold a victim's data for ransom. The attack is typically launched via a trojan that, once clicked, traverses the user's network, encrypting file data to deny access and disrupt business operations. With users and applications locked out, the criminals demand payment in exchange for decrypting the victim's data.

      Ransomware Strategy for File Data Workloads

      In an eWeek article, Komprise co-founder and CEO Kumar Goswami reviewed the ransomware challenge for unstructured file data: The File Data Factor in Ransomware Defense: 3 Best Practices. To create a cost-effective layered ransomware strategy, he recommended the following:

      1. Prioritize visibility and audits
      2. Create a multi-layered data management defense
        • Create Snapshots and Backups for Hot Data
        • Establish Cloud Tiering and Immutable Storage for Cold Data
      3. Have a plan – and validate it

      Cost-Effective Ransomware Data Protection

      Komprise provides cost-effective protection and recovery of file data. Komprise transparently tiers cold data and archives it from expensive storage and backups into a resilient object-locked destination such as Amazon S3 IA with Object Lock.

      By putting the cold data in an object-locked storage and eliminating it from active storage and backups, you can create a logically isolated recovery copy while drastically cutting data storage costs and data backup costs. Komprise creates a logically-isolated copy of your file data with the following properties:

      • Physical Separation on Immutable Storage
      • File-level Isolation
      • Prevent Deletion
      • Instant access and recoverability in the cloud without expensive upfront investments

      Read the blog post: How to Protect File Data from Ransomware at 80% Lower Cost

      Learn more about Komprise for cyber resiliency, including optimizing your file data defenses against cyber incidents and system failure.

      What is Ransomware?

      Ransomware is a type of malware that threatens to publish the victim’s personal data or perpetually block access to it unless a ransom is paid. According to Gartner, “Ransomware is one of the most common threats facing security and risk management leaders.” Most ransomware attacks target unstructured data on network shares, making centralized file data storage solutions a primary target.

      How to protect File data from Ransomware

      Data backup and disaster recovery (DR) solutions are where most enterprise IT organizations are investing in order to deliver better detection of and data protection against ransomware attacks. To protect file data from ransomware, the solution must:

      • Be cost-effective
      • Protect if backups and snapshots are infected
      • Provide simple recovery without significant upfront investment
      • Be verifiable

      Read the blog post How to Protect File Data from Ransomware at 80% Lower Cost

      Ransomware best practices

      In the eWeek article The File Data Factor in Ransomware Defense: 3 Best Practices, Komprise CEO and co-founder Kumar Goswami summarizes the following ransomware best practices:

      1. Prioritize visibility and audits
      2. Create a multi-layered data management defense
      3. Have a plan – and validate it

      Getting Started with Komprise:

    • Rehydration

      What is rehydration?

      Rehydration is the process of fully reconstituting files so the transferred data can be accessed and used. Block-level tiering requires rehydrating tiered or archived data before it can be used, migrated or backed up. No rehydration is needed with Komprise, which uses file-based tiering.

      Rehydration and the Cloud

      In this post, Komprise CEO Kumar Goswami answers the question: “Will I lose storage efficiencies such as de-dupe by not using a storage tiering solution in the cloud?” He notes:

      The overhead of keeping blocks in the cloud due to high egress costs, high data rehydration costs and high defragmentation costs significantly overshadows any potential de-dupe savings. When data is moved at the block level to the cloud, you are really not saving on any third-party backups and other applications because block tiering is a proprietary solution – read this white paper for more background on block-level vs file-based data tiering and cloud tiering. If you consider the additional backup licensing costs, cloud egress costs and cloud retrieval costs, plus the fact that you are now locked in and have to pay file system costs forever in the cloud to access your data (learn more about the benefits of cloud native unstructured data access), the small savings you may get from dedupe are significantly overshadowed by the overall costs and the loss of flexibility.

      Komprise provides a custom data rehydration policy that the user can configure to meet their needs. Data need not be re-hydrated on the first access. Komprise also provides a bulk recall feature if needed. Learn more about file-based cloud tiering with Komprise.

      Getting Started with Komprise:

    • REST (Representational State Transfer)

      REST (Representational State Transfer) is a software architectural style for distributed hypermedia systems, used in the development of Web services. Distributed file systems send and receive data via REST. Web services using REST are called RESTful APIs or REST APIs.

      There are several benefits to using REST APIs: it is a uniform interface, so you don't have to know the inner workings of an application to use it; its operations are well defined, so data in different storage formats can be acted upon by the same REST APIs; and it is stateless, so each interaction does not interfere with the next. Because of these benefits, REST APIs are fast, easy to implement against, and easy to use. As a result, REST has gained wide adoption.

      6 guiding principles for REST:

      1. Client–server – Separating the user interface from data storage improves portability and scalability.
      2. Stateless – Each request is wholly self-contained, so session state is kept entirely on the client.
      3. Cacheable – A client cache is given the right to reuse response data for later, equivalent requests.
      4. Uniform interface – The overall REST system architecture is simplified and uniform due to the following constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and hypermedia as the engine of application state.
      5. Layered system – The system is composed of hierarchical layers, and each component cannot “see” beyond the immediate layer with which it is interacting.
      6. Code on demand (optional) – REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts.

      The REST architecture and lighter weight communications between producer and consumer make REST popular for use in cloud-based APIs such as those authored by Amazon, Microsoft, and Google. REST is often used in social media sites, mobile applications and automated business processes.
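
      As a minimal sketch of a RESTful call (the base URL, endpoint, query parameters and token are hypothetical), a stateless request that carries everything the server needs might look like this in Python:

        import requests

        BASE_URL = "https://api.example.com/v1"  # hypothetical REST service

        response = requests.get(
            f"{BASE_URL}/files",                                    # resource identified by URL
            params={"modified_before": "2020-01-01", "page": 1},
            headers={"Authorization": "Bearer <token>",             # placeholder credential
                     "Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()

        for item in response.json().get("items", []):
            print(item)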

      REST provides advantages over leveraging SOAP

      REST is often preferred over SOAP (Simple Object Access Protocol) because REST uses less bandwidth, making it preferable for use over the Internet. SOAP also requires writing or using a server program and a client program.

      RESTful Web services are easily leveraged using most tools, including those that are free or inexpensive. REST is also much easier to scale than SOAP services. Thus, REST is often chosen as the architecture for services available via the Internet, such as Facebook and most public cloud providers. Also, development time is usually reduced using REST over SOAP. The downside to REST is it has no direct support for generating a client from server-side-generated metadata whereas SOAP supports this with Web Service Description Language (WSDL).

      Unstructured data management software using REST APIs

      Open-APIs and a REST-based architecture are the keys to Komprise integrations. Using REST APIs gives customers the greatest amount of flexibility and here are some things customers can do with the Komprise Intelligent Data Management software via its REST API:

      • Get analysis results and reports on all their data
      • Run data migrations, data archiving and data replication operations
      • Search for data across all their storage by any metadata and tags
      • Build virtual data lakes to export to AI and Big Data applications

      A REST API is a very powerful, lightweight and fast way to interact with data management software. Here is an example of the Komprise API in action: Automated Data Tagging with Komprise.


      Getting Started with Komprise:

  • S
    • S3

      The S3 protocol is used in a URL that specifies the location of an Amazon S3 (Simple Storage Service) bucket and a prefix to use for reading or writing files in the bucket. See S3 Intelligent Tiering.
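
      For example (the bucket name and prefix are hypothetical), an S3 URL such as s3://example-bucket/projects/2023/ names the bucket "example-bucket" and the prefix "projects/2023/"; listing the objects under that prefix with boto3 looks like this:

        import boto3

        s3 = boto3.client("s3")

        # Objects under s3://example-bucket/projects/2023/
        resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="projects/2023/")
        for obj in resp.get("Contents", []):
            print(obj["Key"], obj["Size"])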

      Learn more about Komprise for AWS.


      Getting Started with Komprise:

    • S3 Data Migration

      S3 (Amazon Simple Storage Service) data migration entails transferring data stored in Amazon S3, a cloud-based object storage service offered by Amazon Web Services (AWS), to another system or S3 bucket within AWS.

      S3 data migration involves several steps, such as data extraction, data transformation, data loading, data verification, and data archiving. S3 data migration can be complex and time-consuming, especially for organizations with large volumes of data and strict security and compliance requirements.
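
      As a simplified sketch of the data-loading step (bucket names and prefix are hypothetical; objects larger than 5 GB would need a multipart copy instead), copying objects from one S3 bucket to another with boto3 can look like this:

        import boto3

        s3 = boto3.client("s3")
        src_bucket, dst_bucket = "legacy-bucket", "new-bucket"  # hypothetical buckets

        # Copy every object under a prefix from the source bucket to the destination.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=src_bucket, Prefix="projects/"):
            for obj in page.get("Contents", []):
                s3.copy_object(
                    CopySource={"Bucket": src_bucket, "Key": obj["Key"]},
                    Bucket=dst_bucket,
                    Key=obj["Key"],
                )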

      Smart Amazon S3 Data Migration and Data Management for File and Object Data

      Komprise Elastic Data Migration is designed to make cloud data migrations simple, fast and reliable. It eliminates sunk costs with continual data visibility and optimization even after the migration. Komprise has received the AWS Migration and Modernization Competency Certification, verifying the solution’s technical strengths in file data migration.

      A Smart Data Migration strategy for file workloads to Amazon S3 uses an analytics-driven approach to speed up data migrations and ensures the right data is delivered to the right tier in AWS, saving 70% or more on data storage and ultimately ensuring you can leverage advanced technologies in the cloud.

      Getting Started with Komprise:

    • S3 Intelligent Tiering

      S3 Intelligent Tiering is an Amazon cloud storage class. Amazon S3 offers a range of storage classes for different uses. S3 Intelligent Tiering is a storage class aimed at data with unknown or unpredictable data access patterns. It was introduced in 2018 by AWS as a solution for customers who want to optimize storage costs automatically when their data access patterns change.

      Instead of utilizing the other Amazon S3 storage classes and moving data across them based on the needs of the data, Amazon S3 Intelligent Tiering is a distinct storage class that has embedded tiers within it and data can automatically move across the four access tiers when access patterns change.

      To fully understand what S3 Intelligent Tiering offers it is important to have an overview of all the classes available through S3:

      Classes of AWS S3 Storage

      1. Standard (S3) – Used for frequently accessed data (hot data)
      2. Standard-Infrequent Access (S3-IA) – Used for infrequently accessed, long-lived data that needs to be retained but is not being actively used
      3. One Zone Infrequent Access – Used for infrequently accessed data that’s long-lived but not critical enough to be covered by storage redundancies across multiple locations
      4. Intelligent Tiering – Used for data with changing access patterns or uncertain need of access
      5. Glacier – Used to archive infrequently accessed, long-lived data (cold data); retrieval from Glacier takes a few hours
      6. Glacier Deep Archive – Used for data that is hardly ever or never accessed and for digital preservation purposes for regulatory compliance

      Also be sure to read the blog post about Komprise data migration with AWS Snowball.

      Accelerating Petabyte-Scale Cloud Migrations with Komprise and AWS Snowball

      AWS_Building_Logo-scaled

      What is S3 Intelligent Tiering?

      S3 Intelligent Tiering is a storage class with multiple tiers embedded within it, each with its own access latency and cost. It is an automated service that monitors your data access behavior and then moves your data on a per-object basis to the appropriate tier within the S3 Intelligent Tiering storage class. If an object has not been accessed for 30 consecutive days, it automatically moves to the Infrequent Access tier; if it is not accessed for 90 consecutive days, it moves to the Archive Access tier, and after 180 consecutive days to the Deep Archive Access tier. Retrieval from the Archive Access tier can take 3 to 5 hours, and from the Deep Archive Access tier up to 12 hours. Once an archived object is accessed again, it moves back into the Frequent Access tier.
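
      The optional archive tiers are enabled per bucket. As a sketch using the AWS CLI, with a hypothetical bucket name and configuration ID, the following enables Archive Access after 90 days and Deep Archive Access after 180 days:

      aws s3api put-bucket-intelligent-tiering-configuration \
        --bucket example-bucket \
        --id archive-cold-objects \
        --intelligent-tiering-configuration '{
          "Id": "archive-cold-objects",
          "Status": "Enabled",
          "Tierings": [
            {"Days": 90,  "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"}
          ]
        }'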

      What are the costs of AWS S3 Intelligent Tiering?

      You pay for monthly storage, requests and data transfer. When using Intelligent-Tiering you also pay a small monthly per-object fee for monitoring and automation. While there is no retrieval fee in S3 Intelligent-Tiering and no fee for moving data between tiers, you do not manipulate each tier directly: S3 Intelligent-Tiering is a single storage class with tiers inside it that objects move through. Objects in the Frequent Access tier are billed at the same rate as S3 Standard, objects in the Infrequent Access tier at the same rate as S3 Standard-Infrequent Access, objects in the Archive Access tier at the same rate as S3 Glacier, and objects in the Deep Archive Access tier at the same rate as S3 Glacier Deep Archive.

      What are the advantages of S3 Intelligent tiering?

      The main advantage of S3 Intelligent Tiering is automatic cost savings with no operational overhead and no retrieval fees. Objects can be assigned to the storage class upon upload and then move between its tiers based on access patterns. There is no impact on performance, and it is designed for 99.999999999% durability and 99.9% availability over a given year.

      What are the disadvantages of S3 Intelligent tiering?

      The main disadvantage of S3 Intelligent Tiering is that it acts as a black box – you move objects into it and cannot transparently access different tiers or set different versioning policies for the different tiers. You have to manipulate the whole S3 Intelligent Tiering storage class as a single unit. For example, if you want to transition an object that has versioning enabled, you have to transition all the versions. Also, when objects move to the archive tiers, access latency is much higher than in the access tiers, and not all applications can tolerate that latency.

      S3 Intelligent Tiering is not suitable for companies with predictable data access behavior or companies that want to control data access, versioning, etc. with transparency. Other disadvantages are that it is limited to objects and cannot tier from files to objects, the minimum storage duration is 30 days, objects smaller than 128KB are never moved out of the Frequent Access tier, and, because it is an automated system, you cannot configure different policies for different groups.

      S3 Data Management with Komprise

      Komprise is an AWS Advanced Tier partner and offers intelligent data management with visibility, transparency and cost savings for AWS file and object data. How is this done? Komprise enables analytics-driven intelligent cloud tiering across EFS, FSx, S3 and Glacier storage classes in AWS so you can maximize price performance across all your data on Amazon. The Komprise mission is to radically simplify data management through intelligent automation.

      Komprise helps organizations get more value from their AWS storage investments while protecting data assets for future use through analysis and intelligent data migration and cloud data tiering.

      AWS-Use-Case-Table-2

      Learn more at Komprise for AWS.

      What is S3 Intelligent Tiering?

      S3 Intelligent Tiering is an Amazon cloud storage class that moves data to more cost-effective access tiers based on access frequency.

      How AWS S3 intelligent tiering works

      S3 Intelligent Tiering is a storage class that has multiple tiers embedded within it, each with its own access latency and cost. For a per-object monitoring fee, data is moved between tiers automatically to optimize costs:

      • Frequent Access – data accessed within the last 30 days
      • Infrequent Access – data not accessed for 30 to 90 days
      • Archive Instant Access – data not accessed for more than 90 days
      • Deep Archive Access – data not accessed for 180 days or more (Optional*)

      * Deep Archive Access: Priced like S3 Glacier Deep Archive, this tier provides the lowest cost with the tradeoff that data is not available for instant access. Retrieval takes up to 12 hours, which may cause timeout conditions for many applications. For this reason, Deep Archive Access is not enabled in the default S3 Intelligent Tiering configuration and must be explicitly activated.
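
      To see which tier an object currently occupies, a head request on the object reports its storage class and, once the object has been moved to an archive tier, an archive status. The bucket and key below are hypothetical:

      aws s3api head-object --bucket example-bucket --key projects/model.bin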

      What are the advantages of S3 Intelligent tiering?

      The advantage of S3 Intelligent Tiering is that savings can be made for data whose access patterns are unpredictable or unknown. There is no operational overhead and there are no additional retrieval costs. Objects can be assigned to the storage class upon upload and then move between tiers based on access patterns.

      What are the disadvantages of S3 Intelligent tiering?

      The main disadvantage of S3 Intelligent Tiering is that it acts as a black box – you move objects into it and cannot transparently access different tiers or set different versioning policies for the different tiers. For well-known workloads, selecting the appropriate storage class directly can be more cost-effective than S3 Intelligent Tiering.

      Getting Started with Komprise:

    • Scale-Out Grid

      Komprise is a scale-out grid architecture. Traditional approaches to managing data have relied on a centralized architecture – using either a central database to store information, or requiring a primary-replica architecture with a central primary server to manage the system. These approaches do not scale to address the modern scale of data because they have a central bottleneck that limits scaling. A scale-out architecture delivers unprecedented scale because it has no central bottlenecks. Instead, multiple servers work together as a grid without any central database or master and more servers can be added or removed on-demand.

      Scale-out grid architectures are harder to build because they need to be designed from the ground up to not only distribute the workload across a set of processes but also need to provide fault-tolerance so if any of the processes fails the overall system is not impaired. Below is a screenshot of the Komprise elastic grid architecture.

      Read the Komprise Architecture Overview white paper to learn more.

      komprise_scale_out_grid_architecture-768x415

      Learn more about the Komprise architecture.

      Getting Started with Komprise:

    • Scale-Out Storage

      Scale-out storage is a type of storage architecture in which devices in connected arrays are added to expand storage capacity. This allows storage capacity to increase only as the need arises. Scale-out storage architectures add flexibility to the overall data storage environment while simultaneously lowering initial storage setup costs.

      With data growing at exponential rates, enterprises will need to purchase additional storage space to keep up. This data growth comes largely from unstructured data, like photos, videos, PowerPoints, and Excel files. Another factor adding to the expansion of data is that the rate of data deletion is slowing, resulting in longer data retention policies. For example, many organizations are now implementing “delete nothing” data management policies for all kinds of data. With data storage demands skyrocketing and budgets shrinking, scale-out storage can help manage these growing costs.

      Whether it’s NetApp, Pure Storage, Dell EMC, Qumulo or other enterprise scale-out storage technology, including cloud services from AWS, Azure or Google, Komprise Intelligent Data Management ensures you get maximum cost savings and value from your unstructured data.

      Read the white paper: Why Data Growth is Not a Storage Problem

      Getting Started with Komprise:

    • Secondary Storage

      What is Secondary Storage?

      Secondary storage devices are storage devices that operate alongside the computer’s primary storage (RAM and cache memory). Secondary storage can hold any amount of data, from a few megabytes to petabytes, and stores almost all types of programs and applications, including the operating system, device drivers, applications, and user data. Examples of internal secondary storage devices include the hard disk drive, the tape drive, and the optical disc drive.

      Some key facts about secondary storage:

      Secondary Storage Data Tiering

      Secondary storage typically tiers or archives inactive cold data and backs up primary storage through data replication or other data backup methods. This replication or data backup process ensures there is a second copy of the data. In an enterprise environment, secondary data can be stored on a network-attached storage (NAS) box, a storage area network (SAN), or tape. In addition, to lessen the demand on primary storage, object storage devices may also be used for secondary storage. The growth of organizational unstructured data has prompted storage managers to move data to lower tiers of storage, increasingly cloud data storage, to reduce the impact on primary storage systems. Furthermore, by moving data from more expensive primary storage to less expensive tiers of storage, known as cloud tiering, storage managers are able to save money. This keeps the data easily accessible in order to satisfy both business and compliance requirements.

      path-to-the-cloud-files-graphic

      When data tiering and archiving cold data to secondary storage, it is important that the archiving / tiering solution does not disrupt users by requiring them to rewrite applications to find the data on the secondary storage. Transparent archiving is key to ensuring that data moved to secondary storage still appears to reside on the primary storage and continues to be accessed from the primary storage without any changes for users or applications. Transparent move technology solutions use file-level tiering to accomplish this.

      Learn More: Why Komprise is the Easy, Fast, No Lock-In Path to the Cloud for file and object data.

      What is Secondary Storage?

      Secondary storage, sometimes called auxiliary storage, is non-volatile storage used to hold data and programs for later retrieval. It is also known as a backup storage device, tier 2 storage, external memory, secondary memory or external storage, and it holds data until the data is deleted or overwritten.

      Secondary Storage Devices

      Here are some examples of secondary storage devices:

      • Hard drive
      • Solid-state drive
      • USB thumb drive
      • SD card
      • CD
      • DVD
      • Floppy Diskette
      • Tape Drive

      What is the difference between Primary and Secondary Storage?

      Primary storage is the main memory where the operating system resides and is likely to be temporary, more expensive, smaller and faster and is used for data that needs to be frequently accessed.

      Secondary storage can be hosted on premises, in an external device, or in the cloud. It is more likely to be permanent, cheaper, larger and slower and is typically used for long term storage for cold data.

      Getting Started with Komprise:

    • Shadow IT

      Shadow IT is a term used in information technology to describe systems and solutions deployed without internal organizational approval. This can mean that typical internal compliance requirements, such as documentation, security and reliability, are not followed.

      However, shadow IT can be an important source of innovation, and can also be in compliance, even when not under the control of an IT organization.

      An example of shadow IT is when business subject matter experts use unsanctioned systems or cloud services to manipulate complex datasets without having to request work from the IT department. IT departments must recognize this in order to improve the technical control environment, or select enterprise-class data analysis and management tools that can be implemented across the organization without stifling business experts from innovating.

      Ways IT teams can cope with shadow IT include:

      • Reduce IT evaluation times for new applications
      • Consider cloud applications
      • Provide ways to safely identify and move relevant data to the cloud
      • Clearly document and communicate business controls
      • Approve shadow IT in the short term
      • Get involved with teams across your organization to help stay informed of upcoming needs

      Read the white paper: Getting Departments to Care About Storage Savings.

      Komprise_DepartmentalArchivingthumb.png-30x20

      Getting Started with Komprise:

    • Shared-Nothing Architecture

      A shared-nothing architecture is a distributed-computing architecture in which each update request is handled by a single node, which eliminates single points of failure, allowing continuous overall system operation despite individual node failure. Komprise Intelligent Data Management is based on a shared-nothing architecture.

      Learn more about the Komprise shared-nothing architecture.

      Komprise-Architecture-Page-SOCIAL

      Getting Started with Komprise:

    • Showback

      Showback is a method of tracking data center utilization rates of an organization’s business units or end users. Similar to IT chargeback, the metrics for showback are for informational purposes only; no one is billed. Some organizations refer to showback as “shameback.”

      The Showback model aims to allocate the costs of IT resources and services to the business units or departments that consume them. It helps organizations to better understand and track their IT expenses and make informed decisions about resource allocation and utilization. Showback involves collecting data on IT usage and presenting it in a way that is transparent and easily understandable for business stakeholders. This information can be used to make informed decisions about future investments in IT infrastructure, as well as to negotiate service level agreements and establish chargeback policies.

      In the white paper Getting Departments to Care About Storage Savings, the Showback model is explained.

      With Komprise analytics-driven unstructured data management, authorized departmental users can monitor and understand their data usage (examples: how many and what type of files, where stored and biggest consumers) in an interactive dashboard. This is an essential part of a showback model.

      Read the blog post: Komprise brings data storage insights to business teams and departments.

      Getting Started with Komprise:

    • Smart Data Workflows

      What are Komprise Smart Data Workflows?

      Smart Data Workflows, part of the Komprise Intelligent Data Management platform, is a systematic process to discover relevant file and object data across cloud, edge and on-premises datacenters and feed data in native format to AI and machine learning (ML) tools, data lakes and cloud file storage or cloud object storage. Smart Data Workflows solve common problems in unstructured data management: finding and moving the right unstructured data into data lakes, analytics platforms and cloud storage. Most of the work in finding and categorizing unstructured data to feed machine learning pipelines has been manual, delaying time to value and impeding the results of machine learning and AI projects.

      Komprise-Smart-Data-Workflows-Diagram-9-1536x685

      Users can create automated workflows for all the steps required to find the right data across your storage assets, tag and enrich the data, and send it to external tools for analysis. The Komprise Global File Index and Smart Data Workflows together reduce the time it takes to find, enrich and move the right unstructured data by up to 80%.

      The components of Smart Data Workflows:

      Search: Define and execute a custom query across on-prem, edge and cloud data silos to find the data you need.

      Execute & Enrich: Execute an external function on a subset of data and tag it with additional metadata.

      Cull & Mobilize: Move only tagged data to the cloud.

      Manage Data Lifecycle: Move the data to a lower storage tier for cost savings once the analysis is complete.

      Watch the Smart Data Workflows chalk-talk.

      Getting Started with Komprise:

    • SMB Data Migration

      SMB protocol data migration refers to the process of transferring data stored in SMB protocol-based systems, such as Windows file servers, to another system, such as a new file server or a cloud-based storage service. The SMB (Server Message Block) protocol is a network file sharing protocol used by Windows-based systems to access files and other resources on a server over a network.

      Like any file data migration, an SMB data migration involves several steps, such as data extraction, data transformation, data loading, data verification, and data archiving. The goal of an SMB data migration is to ensure that all data accurately and securely transfers to the new system, while minimizing any disruptions to business operations and preserving the integrity of the data.

      SMB protocol data migration can be challenging due to the complex nature of the SMB protocol and the large volumes of data that are often involved. To ensure a successful migration, organizations typically use specialized tools and services, such as data migration software, cloud data migration services, and managed data migration services.

      The Barriers to Fast SMB Migrations

      From the Hypertransfer white paper:

      Unstructured data is everywhere. From genomics and medical imaging to streaming video, electric cars, and IoT products, all sectors generate unstructured file data. Data-heavy enterprises typically have petabytes of file data, which can consist of billions of files scattered across different storage vendors, architectures and locations. And while file data growth is exploding, IT budgets are not. That’s why enterprises’ IT organizations are looking to migrate file workloads to the cloud. However, they face many barriers, which can cause migrations to take weeks to months and require significant manual effort. These include:

      • Billions of files, mostly small: Unstructured data migrations often require moving billions of files, the vast majority of which are small files that have tremendous overhead, causing data transfers to be slow.
      • Chatty protocols: Server message block (SMB) protocol workloads—which can be user data, electronic design automation (EDA) and other multimedia files or corporate shares—are often a challenge since the protocol requires many back-and-forth handshakes which increase traffic over the network.
      • Large WAN latency: Network file protocols are extremely sensitive to high-latency network connections, which are essentially unavoidable in wide area network (WAN) migrations.
      • Limited network bandwidth: Bandwidth is often limited or not always available, causing data transfers to become slow, unreliable and difficult to manage.

      Speed up SMB Migration with Hypertransfer

      Blocks and Files coverage: Komprise speeds SMB data migration to cloud 

      Hypertransfer for Komprise Elastic Data Migration delivers 25x performance gains compared to other tools. Komprise Elastic Data Migration is a SaaS solution available with the Komprise Intelligent Data Management platform or standalone. Designed to be fast, easy and reliable with elastic scale-out parallelism and an analytics-driven approach, it is the market leader in file and object data migrations, routinely migrating petabytes of data (SMB, NFS, dual-protocol) for customers in many complex scenarios. Komprise Elastic Data Migration ensures data integrity is fully preserved by propagating access control and maintaining file-level data integrity checks such as SHA-1 and MD5 with audit logging.


      Getting Started with Komprise:

    • SMB protocol (Server Message Block)

      What is the SMB protocol?

      Server Message Block (SMB) protocol is a network communication protocol for providing shared access to files, printers, and serial ports between nodes on a network. (SMB is also known as Common Internet File System (CIFS).)
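
      As a simple illustration of SMB in practice, a Linux client can mount an SMB/CIFS share exported by a Windows file server. The server, share, mount point and user names below are hypothetical:

      sudo mount -t cifs //fileserver/projects /mnt/projects -o username=alice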

      Cloud File Data Migration and SMB

      Unstructured data migrations to the cloud can involve billions of (mostly small) files, which have significant overhead, causing data transfers to be slow. In addition, SMB protocol workloads, which can be user data, corporate shares, electronic design automation (EDA) and other multimedia files, bring even more challenges since the SMB protocol requires many back-and-forth handshakes, thereby increasing traffic over the network. Another challenge with SMB cloud data migrations is wide area network (WAN) latency. Network file protocols like SMB are extremely sensitive to high-latency network connections, which are unavoidable in WAN migrations. Bandwidth is also often limited or not always available, causing file data transfers to become slow, unreliable and difficult to manage.

      Komprise Hypertransfer for Faster SMB Migrations

      Read the blog post: Turbo Charge Your SMB Cloud Migrations with Hypertransfer for Elastic Data Migration

      Getting Started with Komprise:

    • Storage Area Network (SAN)

      What is a Storage Area Network (SAN)?

      A Storage Area Network (SAN) is a dedicated high-speed network (usually Fibre Channel) that provides block-level access to storage devices such as disk arrays and tape libraries. The goal of a SAN is to provide centralized data storage and data management that can be easily accessed by multiple servers. SANs can increase storage utilization, improve data security, and speed access times compared to using direct-attached storage (DAS).

      Storage Area Networks (SANs) are still widely used in modern data centers. They provide centralized data storage and management of data, which allows for improved data availability, performance, and security compared to traditional direct-attached storage (DAS) solutions. SANs also typically provide advanced features such as storage virtualization, disaster recovery, and basic data tiering. In recent years, cloud computing adoption has led to greater use of network-attached storage (NAS) and object storage solutions, but SANs remain a popular choice for many organizations due to their performance, reliability, and compatibility with existing infrastructure.

      Komprise Intelligent Data Management is a storage-agnostic solution that works across NAS technologies, from the data center to the cloud, to deliver visibility and mobility of unstructured data. Komprise helps customers with petabyte-scale data environments be more efficient in managing unstructured data and also proactively (and intelligently) moves file and object data to the right location at the right time for cost savings and value.

      Getting Started with Komprise:

    • Storage Array

      A storage array is a type of data storage system that provides centralized, scalable storage for multiple computer systems. It typically consists of multiple disk drives, along with the hardware and software required to manage and control the disk storage.

      Storage arrays can be used for a variety of purposes, including data backup and disaster recovery (DR), data archiving, and as a shared storage resource for virtualized environments. They offer several benefits over traditional direct-attached storage (DAS) systems, including increased reliability, scalability, and performance.

      Types of Storage Arrays

      There are several types of storage arrays, including:

      • Network Attached Storage (NAS): A NAS storage array provides file-level access to storage over a network.
      • Storage Area Network (SAN): A SAN storage array provides block-level access to storage over a high-speed network.
      • Hybrid Storage Array: A hybrid storage array combines the features of both NAS and SAN storage arrays and delivers a balance of file-level and block-level access to storage.
      • All-Flash Storage Array: This type of storage array uses only solid-state drives (SSDs) for storage, rather than traditional hard disk drives (HDDs). It provides extremely high performance and low latency, making it suitable for demanding applications such as database and virtualization environments.

      Cloud Storage Arrays

      The trend of storage arrays moving to the cloud has been growing in recent years, as organizations seek to leverage the benefits of cloud computing for their storage needs. Cloud storage arrays offer several advantages over traditional on-premises storage arrays, including:

      • Scalability: Cloud storage arrays can easily scale up or down as storage needs change, without the need for physical hardware upgrades.
      • Cost savings: Cloud storage arrays can be less expensive than on-premises storage arrays, especially for organizations with rapidly changing storage needs.
      • Flexibility: Cloud storage arrays are accessible from anywhere with an internet connection.
      • Disaster recovery: Cloud storage arrays can be used to recover data quickly and easily in the event of a disaster or outage, without the need for physical hardware or tapes.

      Getting Started with Komprise:

    • Storage Pool

      What is a storage pool?

      Storage pools are collections of storage volumes exported to a shared storage environment. Traditionally, storage pools were limited to storage volumes from a single vendor – for instance, you may have Flash and Disk storage volumes in a storage pool.

      Storage pools may be homogeneous – that is, all the storage volumes are SSD/Flash, or all the storage volumes are disk – or they may be heterogeneous, where the storage volumes are different classes of storage, e.g. Flash, Disk, etc. Storage data tiering is an integral solution for handling heterogeneous storage pools.

      What is storage tiering within a storage pool and why is it needed?

      Storage tiering is a technique whereby the file metadata and the frequently accessed blocks are stored in the highest tier and less-accessed blocks are moved down to lower, cheaper tiers within a storage pool. This automated storage tiering approach allows the vendor to reduce costs by using a smaller amount of the fast, expensive tier while still providing good performance. Storage tiering is often touted as a storage efficiency technique for customers to save on storage costs. But a key thing to remember is that the bulk of the cost of data is not in the storage but in the active management and backups of the data. Storage efficiency impacts the storage cost but not the active data management costs.

      What is cloud tiering and how does it relate to storage pools?

      Storage array vendors are now using their tiering technologies to tier data to the cloud. This is not what the technology was originally designed for, since the storage pool is no longer under a single vendor’s control and no longer local to a network.

      Storage array vendors like NetApp and Dell EMC have created “Pool” solutions to externally tier data to less expensive storage such as the cloud. These solutions can reduce the cost of fast, expensive flash-based storage by migrating non-critical data sets, along with “cold” data that hasn’t been accessed for a designated period of time, to lower-cost storage or the cloud for archiving and compliance.

      Read the blog post: What you need to know before jumping into the cloud tiering pool

      Komprise_CloudTieringPool_blogthumb

      What are the challenges and considerations for cloud storage pools?

      While these solutions work well for tiering secondary data such as snapshot copies to the cloud, they result in unnecessary costs and lock-in when tiering and archiving files. As well, the pool approach tiers data in proprietary blocks versus files that all applications can understand. This presents the following challenges:

      • Policies to specify the blocks to be tiered are limited, resulting in much higher access rate to the cloud, and higher egress costs.
      • Block tiering to the cloud can reduce the performance of the storage array. Given the vast quantities of data most enterprises are dealing with today, block tiering is not suited for general data tiering to a public cloud across high latency channels.
      • Block tiering locks you into your storage vendor. Since the cold data is tiered to the cloud in a proprietary format, when it is time to decommission your storage array and replace it with a new one you must stay with the same vendor.
      • Proprietary lock-in. You cannot directly use native cloud services to access your data in the cloud; it has to be accessed through the proprietary storage filesystem itself. This creates unnecessary licensing costs that customers must pay forever to access their data.

      Download the white paper: Cloud Tiering: Storage-Based vs Gateways vs File-Based: Which is Better and Why?

      Getting Started with Komprise:

    • Storage Tiering

      What is Storage Tiering?

      Storage Tiering refers to a technique of moving less frequently used data, also known as cold data, from higher performance storage such as SSD to cheaper levels of storage or tiers such as cloud or spinning disk. The term “storage tiering” arose from moving data around different tiers or classes of storage within a storage system, but has expanded now to mean tiering or archiving data from a storage system to other clouds and storage systems. Storage tiering is now considered a core feature of modern storage systems and has recently become part of the default configuration for next-generation storage like Amazon FSx for NetApp ONTAP.

      Storage-agnostic data management and data tiering have emerged as more and more enterprise organizations adopt hybrid, multi-cloud, and edge IT infrastructure strategies. See also cloud tiering and choices for cloud data tiering.

      komprise-file-tiering-image-768x404

       

      Storage Tiering Cuts Costs Because 70%+ of Data is Cold

      As data grows, data storage costs grow. It is easy to think the solution is more efficient storage, or simply buying more storage, but data management is the real solution. Typically over 70% of data is cold and has not been accessed in months, yet it sits on expensive storage hardware or cloud infrastructure and consumes the same backup resources as hot data. As a result, data storage costs are rising, backup times are slowing, disaster recovery (DR) is unreliable, and the sheer bulk of this data makes it difficult to leverage newer options like Flash and Cloud.

      Data Tiering Was Initially Used within a Storage Array

      Data Tiering was initially a technique used by storage systems to reduce the cost of data storage by tiering cold data within the storage array to cheaper but less performant options – for example, moving data that has not been touched in a year or more from an expensive Flash tier to a low-cost SATA disk tier.

      Typical storage tiers within a storage array or on-premises storage device include:

      • Flash or SSD: A high-performance storage class but also very expensive. Flash is usually used on smaller data sets that are being actively used and require the highest performance.
      • SATA Disks: High-capacity disks with lower performance that offer better price per GB vs SSD.
      • Secondary Storage, often Object Storage: Usually a good choice for capacity storage – to store large volumes of cool data that is not as frequently accessed, at a much lower cost.

      Increasingly, enterprise IT organizations are looking at another option – tiering or archiving data to a public cloud.

      • Public Cloud Storage: Public clouds currently have a mix of object and file storage options. The object storage classes such as Amazon S3 and Azure Blob (Azure Storage) provide tremendous cost efficiency and all the benefits of object storage without the headaches of setup and management.
      • Cloud NAS has also become increasingly popular, but if unstructured data is not well managed, data storage costs will be prohibitive.

      Cold-Data-Tiering

      Cloud Storage Tiering is now Popular

      Tiering and archiving less frequently used data or cold data to public cloud storage classes is now more popular. This is because customers can leverage the lower cost storage classes within the cloud to keep the cold data and promote it to the higher cost storage classes when needed. For example, data can be archived or tiered from on-premises NAS to Amazon S3 Infrequent Access or Amazon Glacier for low ongoing costs, and then promoted to Amazon EFS or FSx when you want to operate on it and need performance.
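
      Within the cloud itself, this kind of tiering is commonly expressed as a lifecycle policy. As a rough sketch with the AWS CLI (the bucket name, prefix and rule ID are hypothetical), the following moves objects to Infrequent Access after 90 days and to Glacier after a year:

      aws s3api put-bucket-lifecycle-configuration --bucket example-bucket \
        --lifecycle-configuration '{
          "Rules": [{
            "ID": "tier-cold-data",
            "Filter": {"Prefix": "projects/"},
            "Status": "Enabled",
            "Transitions": [
              {"Days": 90,  "StorageClass": "STANDARD_IA"},
              {"Days": 365, "StorageClass": "GLACIER"}
            ]
          }]
        }'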

      Cloud isn’t just low-cost data storage 

      The cloud offers more than low-cost data storage. Advanced security features such as immutable storage can help defeat ransomware, and cloud-native services from analytics to machine learning can drive value from your unstructured data.

      But in order to take advantage of these capabilities, and to ensure you’re not treating the cloud as just a cheap storage locker, data that is tiered to the cloud needs to be accessible natively in the cloud without requiring third-party software. This requires the right approach to storage tiering, which is file-tiering, not block-tiering.

      Komprise_ArchivingTiering_blogthumb-768x512

      Block Tiering Creates Unnecessary Costs and Lock-In

      Block-level storage tiering was first introduced as a technique within a storage array to make the storage box more efficient by leveraging a mix of technologies such as more expensive SSD disks as well as cheaper SATA disks.

      Block storage tiering breaks a file into various blocks – metadata blocks that contain information about the file, and data blocks that are chunks of the original file. Block-tiering or Block-level tiering moves less used cold blocks to lower, less expensive tiers, while hot blocks and metadata are typically retained in the higher, faster, and more expensive storage tiers.

      Block tiering is a technique used within the storage operating system or filesystem and is proprietary. Storage vendors offer block tiering as a way to reduce the cost of their storage environment. Many storage vendors are now expanding block tiering to move data to the public cloud or on-premises object storage.

      But, since block storage tiering (often called CloudPools – examples are NetApp FabricPool and Dell EMC Isilon CloudPools) is done inside the storage operating system as a proprietary solution, it has several limitations when it comes to efficiency of reuse and efficiency of storage savings. Firstly, with block tiering, the proprietary storage filesystem must be involved in all data access since it retains the metadata and has the “map” to putting the file together from the various blocks. This also means that the cold blocks that are moved to a lower tier or the cloud cannot be directly accessed from the new location without involving the proprietary filesystem because the cloud does not have the metadata map and the other data blocks and the file context and attributes to put the file together. So, block tiering is a proprietary approach that often results in unnecessary rehydration of the data and treats the cloud as a cheap storage locker rather than as a powerful way to use data when needed.

      With block storage tiering, the only way to access data in the cloud is to run the proprietary storage file system in the cloud which adds to costs. Also, many third-party applications such as backup software that operate at a file level require the cold blocks to be brought back or rehydrated, which defeats the purpose of tiering to a lower cost storage and erodes the potential savings. For more details, read the white paper: Block vs. File-Level Tiering and Archiving.

      PttC_pagebanner-2048x639

      Getting Started with Komprise:

    • Stubs

      What are Stubs?

      Stubs are placeholders for the original data after it has been migrated to secondary storage. Stubs replace the archived files in the location selected by the user during the archive. Because stubs are proprietary and static, if a stub file is corrupted or deleted, the moved data becomes orphaned. Komprise does not use stubs, which eliminates this risk of disruption to users, applications, or data protection workflows.

      Challenges with Stubs

      Stubs are brittle. When stubbed data is moved from its storage (file, object, cloud, or tape) to another location, the stubs can break. The storage management system no longer knows where the data has been moved to and it becomes orphaned, preventing data access. Most storage management solutions on the market use client-server architecture and do not scale to support data at massive scale.

      Proprietary interfaces like stubs can be used to make tiered data appear to reside on primary storage, but the transparency ends there. To access data, the storage management system intercepts access requests, retrieves the data from where it resides, and then rehydrates it back to primary storage. This process adds latency and increases the risk of data loss and corruption.

      Standards-Based Transparent Data Tiering

      A true transparent data tiering solution creates no disruption, and that’s only achievable with a standards-based approach. Komprise Intelligent Data Management is the only standards-based transparent data tiering solution that uses Transparent Move Technology™ (TMT), which uses Dynamic Links that are based on industry-standard symbolic links instead of proprietary stubs.

      Komprise-Transparent-Move-Technology-White-Paper-SOCIAL-768x402

      Learn more about the differences between stubs, symbolic links and Dynamic Links from Komprise.

      Read the Komprise Architecture Overview white paper to learn more.

      Getting Started with Komprise:

    • Sustainable Data Management

      What is Sustainable Data Management?

      Sustainable data management refers to the practice of collecting, storing, and using data in a way that is environmentally friendly, economically feasible, and socially responsible. This involves reducing the carbon footprint of data centers, leveraging renewable energy sources, and following ethical principles in the collection, storage, and use of data (regardless of source, structure or location). It also involves implementing data management strategies that ensure the long-term preservation and accessibility of valuable data, while reducing waste and avoiding data hoarding. The goal of sustainable data management is to balance the economic, environmental, and social impacts of data operations and ensure that data is managed in a way that supports the well-being of both current and future generations.

      In an article for Sustainability magazine, Komprise co-founder and COO Krishna Subramanian noted:

      Most organizations have hundreds of terabytes of data, if not petabytes, which can be managed more efficiently and even deleted but are hidden and/or not understood well enough to manage appropriately. In most businesses, 70% of the cost of data is not in storage but in data protection and management. Creating multiple backup and DR copies of rarely used cold data is inefficient and costly, not to mention its environmental impact. Furthermore, storing obsolete “zombie data” on expensive on-premises hardware (or even cloud file storage, which is the highest-cost tier of cloud storage) doesn’t make sound economic sense and consumes the most energy resources.

      She summarized the steps to sustainable unstructured data management as:

      1. Understand your unstructured data (analyze your unstructured data)
      2. Automate data actions by policy (data management policy)
      3. Work with data owners and key stakeholders (getting departments to care about data storage savings)

      Her sustainable data management conclusion:

      Sustainable data center and data management practices are no longer nice to have – but in many respects, a need to have. The world is storing too much data and without smart strategies for managing it, the price is becoming too high: significantly higher IT infrastructure costs, lack of opportunity to participate in government incentives, potential customer attrition, and long-term potential brand damage by ignoring the sustainability movement.

      Read the full article here.


      Getting Started with Komprise:

    • Symbolic Link

      NAS_Diagram.png

      What is a Symbolic Link (symlink)?

      Symbolic Links, also known as symlinks, are file-system objects that point toward another file or folder. These links act as shortcuts with advanced properties that allow access to files from locations other than their original place in the folder hierarchy by providing operating systems with instructions on where the “target” file can be found.

      For the operating system, the symlink is transparent for many operations and functions in the same manner as the target file or folder would, even though it’s only a link that points to the original. For example, if a program needs to be in folder A to run but you want to store it in folder B instead, the contents of folder A could be moved into folder B, with a symbolic link created at folder A’s original location that points to the new location in folder B. When the program is launched, the operating system would refer to folder A, find the symbolic link to folder B, and run the program from folder B as if it were still in its original place.

      This method is widely used in the storage industry in programs such as OneDrive, Google Drive, and Dropbox to sync files and folders across different platforms of storage or in the cloud.

      These types of links began to appear in operating systems in the late 70s, such as RDOS. In modern computing, symbolic links are present in most Unix-like operating systems supported by the POSIX standard, such as Linux, macOS, and Tru64. This feature was also added to Microsoft Windows starting with Windows Vista.

      Symbolic Links vs Hard Links

      Both soft links and hard links allow seamless and mostly transparent targeting of a file, but they do so in different ways.

      Soft links, also referred to as symbolic links by Microsoft, work similarly to a normal shortcut in the sense that they point directly to the file or folder itself. These types of links also use less memory overall.
      On the other hand, hard links point to the storage space designated to hold the contents of the file or folder.
      In this sense, if the location or the name of the file changes, then a soft link would no longer work since it was pointing to the original file itself, but with a hard link, any changes made to the original file or the hard link contents are mirrored by the other because both are pointing to the same location on the storage.
      Hard links act as a secondary entrance to the same file or folder which they are linked to, but they can only be used to connect two entities within the same file system, whereas soft links can bridge the gap between different storage devices and file systems.
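
      A quick way to see the difference described above (the file names are arbitrary examples):

      echo "v1" > original.txt
      ln original.txt hardlink.txt      # hard link: points at the same data blocks
      ln -s original.txt softlink.txt   # soft link: points at the path "original.txt"
      mv original.txt renamed.txt       # softlink.txt now dangles; hardlink.txt still opens the data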

      Hard symbolic links also have more restrictive requirements than soft links:
      • Hard links may not be able to link to directories.
      • The target file or folder for a hard link must exist.
      • Hard links cannot point to targets that are located on different partitions, volumes, or file systems.

      Junctions

      A Junction is a lesser-used, third type of symbolic link that combines aspects from both hard and soft links. The target file must exist for the junction to be created, but if the target file or folder is erased afterward, the link will still be there but will no longer be functional.

      How are Soft and Hard Symbolic Links Commonly Used?

      Hard links are used to create “backups” on filesystems without using any additional storage space. This is a benefit as it is often easier to manage a single directory with multiple references pointing to it rather than managing multiple instances of the same directory. If the file or folder is no longer accessible from its original location, then the hard link can be used as a backup to regain access to those files.
      The Time Machine feature on macOS uses hard symbolic links to create images to be used for backup.
      Soft links are used more heavily to enable access for files and folders on different devices or filesystems. These types of symbolic links are also used in situations where multiple names are being used to link to the same location.

      Types of Businesses that Make Use of Symbolic Links

      Symbolic links are leveraged in nearly every industry that uses computers, but some industries make use of these links more than others. Below are industries where symbolic links are most commonly used:

      Creating Symbolic Links

      The process used to create symbolic links is different on each type of operating system. Below are brief instructions on how a soft or hard link can be set up in Linux and Windows.

      How to Create a Soft Link in Linux

      To create a soft symbolic link in Linux, the ln command-line utility can be used as such:
      ln -s [OPTIONS] FILE LINK
      The FILE argument represents the origin of the link. The LINK argument represents the target destination for the soft link.
      When the command succeeds, there is no output and the command returns a zero exit status.
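
      For example, to expose an archived directory under a user’s home directory (the paths here are hypothetical):

      ln -s /mnt/archive/2021-reports /home/alice/reports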

      How to Create a Hard Link in Linux

      For creating hard links in Linux, a similar version of the ln command is used but without the -s:
      ln [OPTIONS] FILE LINK
      The FILE argument is still the origin location and the LINK argument is still the destination file or directory.

      Creating a Windows Soft Link

      The mklink command can be used to create soft links in Windows Vista and later through a Command Prompt or PowerShell with elevated permissions. By default, this command with no options produces a soft link.
      mklink command:
      mklink Link Target

      The Link argument is the origin file/directory location and the Target argument represents the intended destination file.
      For creating a soft link pointing to a directory, this command is used instead:
      mklink /D Link Target

      Creating a Windows Hard Link

      Similarly to creating a soft link in Windows, the mklink command can also be used to create hard links when /H is included as an option:
      mklink /H Link Target
      For creating a junction, the /J option is used instead of /H:
      mklink /J Link Target
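
      For example, with hypothetical paths, the following create a file symbolic link and a directory junction respectively:

      mklink C:\Users\Alice\report.txt D:\Archive\report.txt
      mklink /J C:\Data\Projects D:\Archive\Projects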

      Komprise Transparent Move Technology (TMT) and Symlinks

      The patented Komprise Transparent Move Technology™ (TMT) goes beyond storage-based data tiering to analyze, migrate, tier and replicate data across multi-vendor storage and clouds while enabling native use of the data at each layer. This storage-agnostic data management is possible without disrupting users and without locking data into a proprietary format or one vendor’s storage silo.

      Komprise TMT uses the standard, built-in feature of Windows, Linux, and Mac symbolic links, which replace a file with a tiny pointer to another location. By using Dynamic Links inside the standard symbolic link, Komprise extends the file system to call these files from the cloud or other storage systems. Dynamic Links dynamically bind a request to the actual data so it can move a file from NFS or SMB to a native cloud object and still provide transparent access from the source.

      Read the white paper: Leveraging the Full Power of the Cloud with Komprise Transparent Move Technology.

      Getting Started with Komprise:

  • U
    • Unstructured Data

      What is Unstructured Data?

      Data can be of two broad types: structured data and unstructured data.

      • Structured Data: Structured data is data that can be organized by structured categories, such as rows and columns in an Excel spreadsheet or a database. For example, accounting records are structured data because you can organize them by customer, by geography, by product, etc. Structured data is typically stored in a database and can be queried using query languages such as Structured Query Language (SQL). Most data was predominantly structured until 2000 but since then we have seen an explosion of unstructured data. Today, structured data accounts for less than twenty percent of the world’s data.
      • Unstructured Data: Unstructured data is data that doesn’t fit neatly in a traditional database and has no identifiable internal structure. This is the opposite of structured data, which is data stored in a database. Up to 80% of business data is considered unstructured, with this number increasing year over year. Examples of unstructured data are text documents, e-mail messages, photos, audio and video files, CAD / CAM files, genomics sequencing data, medical images,  presentations, IoT and machine-generated data, log files, user documents stored across teams and departments, and much more.

      unstructuredData-768x415

      Unstructured data usually does not include a predefined data model, and it does not match well with relational tables. Text heavy, unstructured data may include numbers and dates, as well as facts. This leads to difficulty in identifying this data using conventional software programs.

      Unstructured data is the predominant data type that is generated by most applications today – from self-driving cars, to Internet of Things (IOT) devices, to genome sequencers, to video and audio files, most of the data we generate and use today is unstructured.

      Why is Unstructured Data Growing so Fast?

      The analyst firm IDC predicts that we will generate over 175 zettabytes of data by 2025 (one zettabyte is a billion terabytes – the equivalent of a billion 1 terabyte drives!). They also predict that in the next three years we will generate more data than we created over the past 30 years, and this growth trend will continue.

      Most of the data we generate today is unstructured because unstructured data has several advantages over structured data:

      • Wider Use Cases for Unstructured Data: Structured data has a rigid pre-defined structure and it can only be used for its intended purpose. This narrows the number of use cases for structured data – while it is useful for transactional applications like revenue tracking or catalogs, it is not a good fit for applications that generate data that is not easy to categorize, such as video or genomics.
      • Various Formats: Unstructured data can be stored in a variety of formats – from an mp4 video to a genomics BAM file to a .log diagnostics file to an X-ray image stored in a digital PACS format, all of these are types of unstructured data. So, an accurate way to describe unstructured data is that it has a variety of formats, not just one. This means more applications can generate unstructured data and tailor the format to their use.
      • Various Sizes: Unlike a cell in a database, unstructured data does not have to be a specific size or character limit. For example, you can have small video files for short snippets and large video files for full length movies. This also increases flexibility in how unstructured data is generated and used.

      Since unstructured data is easier to create and use, more applications and users are working with unstructured data.

      Unstructured Data Management


      Managing the growing volumes of unstructured data generated within an organization leads to higher expenses.

      What to know about unstructured data:

      • Volume: The sheer quantity of data will continue to grow at an incomprehensible rate
      • Velocity: The quantity of data is coming in at a continually faster rate
      • Variety: The types of data continue to be more varied

      These 3 Vs of unstructured data, originally defined by former Meta Group / Gartner industry analyst Doug Laney, mean that managing unstructured data growth is critical for organizations as they find their budgets and resources stretched to their limits.

      Unstructured data management requires an understanding of what data is hot and actively used, and what data is cold and rarely accessed. In most enterprises, over 80% of unstructured data becomes cold within a year of creation – yet it continues to be managed on the most expensive storage and it continues to consume expensive backup resources. Analytics-driven data management of unstructured data can change this by identifying hot data and cold data across storage and managing hot data on expensive environments while offloading cold data to lower cost passive management. Unstructured data management should be done without restricting access to the cold data – so users and applications continue to see and access the cold data exactly as before, while the organization saves on cold data storage and backups. To understand how Komprise enables enterprise IT organizations to analyze, move, and manage unstructured data and save costs on storage, backup and cloud infrastructure read the white paper: Komprise Intelligent Data Management Architecture Overview.

      Unstructured Data Migration Challenges

      Migrating unstructured data to the cloud has grown in popularity to save data storage costs, consolidate data centers, modernize IT infrastructure and take advantage of cloud-based services such as AI, ML and analytics. But there are many challenges when it comes to unstructured data migrations to the cloud, including:

      • A global enterprise typically has billions of predominantly small files, each of which carries significant per-file overhead, causing data transfers to be slow (see the parallel-copy sketch after this list).
      • Server message block (SMB) and NFS protocol workloads, which can be user data, electronic design automation (EDA) and other multimedia files or corporate shares, are problematic since these protocols require many back-and-forth handshakes, which increase traffic over the network. The SMB protocol in particular is known to have WAN transfer performance challenges, meaning cloud migrations can take much more time than IT organizations anticipate if not done correctly.
      • File protocols are sensitive to high-latency network connections, which are unavoidable in WAN migrations.
      • Bandwidth is often limited or not always available, causing cloud NAS migration data transfers to become slow, unreliable and difficult to manage.
      Read the white paper: 25 times faster unstructured data migrations with Hypertransfer
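
      One common way to mitigate the small-file overhead called out above is to copy many small files in parallel rather than one at a time. The sketch below (hypothetical source and destination paths; a real migration engine also handles retries, permissions and verification) uses a thread pool for that:

```python
import shutil
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path

SRC = Path("/mnt/nas/projects")            # hypothetical source share
DST = Path("/mnt/cloud_gateway/projects")  # hypothetical destination mount

def copy_one(src_file: Path) -> Path:
    """Copy a single file, preserving timestamps and recreating its directory tree."""
    target = DST / src_file.relative_to(SRC)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, target)
    return target

def parallel_copy(workers: int = 16) -> int:
    """Copy all files under SRC using a pool of worker threads."""
    files = [p for p in SRC.rglob("*") if p.is_file()]
    copied = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(copy_one, f) for f in files]
        for future in as_completed(futures):
            future.result()   # re-raise any copy error
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"copied {parallel_copy()} files")
```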

      AI Needs Unstructured Data

      In a 2022 blog post, the Komprise co-founder and CEO wrote about unstructured data management as the foundation for artificial intelligence (AI) and machine learning (ML) initiatives.

      Enterprises need to be ready for this wave of change and it starts by getting unstructured data prepped, as this data is the critical ingredient for AI/ML. This entails new data management strategies which create automated ways to index, segment, curate, tag and move unstructured data continuously to feed AI and ML tools. Unforeseen changes to society, fueled by AI, are coming soon and you don’t want to be caught flat-footed.

      Read the blog: AI needs unstructured data

      Getting Started with Komprise:

    • Unstructured Data Analytics

      Unstructured data analytics refers to the process of extracting insights and knowledge from large amounts of unstructured data, which is data that does not conform to a traditional structured model, such as relational databases (RDBMS). It includes text documents, images, audio and video files, emails, sensor data and other forms of data that do not have a pre-defined format.

      Unstructured data analytics involves several techniques and technologies to process and analyze the data, such as natural language processing (NLP), machine learning, text mining, image and video analysis, and data visualization. The goal of unstructured data analytics is to discover insights that can inform decisions, improve business processes, and drive innovation.
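
      As a deliberately tiny illustration of one of these techniques, text mining, the sketch below (a toy example over a hypothetical folder of text documents, not a production analytics pipeline) counts the most frequent terms across a set of files:

```python
import re
from collections import Counter
from pathlib import Path

DOCS = Path("/data/contracts")   # hypothetical folder of plain-text documents
STOPWORDS = {"the", "and", "of", "to", "a", "in", "for", "is", "on", "that"}

def top_terms(folder: Path, n: int = 20) -> list[tuple[str, int]]:
    """Very small text-mining example: term frequency across plain-text files."""
    counts = Counter()
    for doc in folder.glob("*.txt"):
        words = re.findall(r"[a-z']+", doc.read_text(errors="ignore").lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

if __name__ == "__main__":
    for term, freq in top_terms(DOCS):
        print(f"{term}: {freq}")
```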

      The importance of unstructured data analytics is growing in many data-heavy industries, including healthcare, finance, retail and government and across many functions, including marketing, engineering, research and development. The right approach to unstructured data analytics can deliver a competitive advantage, help you understand customer behavior, suggest operational improvements and influence R&D initiatives. The challenge of unstructured data analytics is to manage and process large volumes of data in a scalable and efficient manner, and to extract meaningful insights from the data. Data Lakes, Data Lakehouses, and cloud data storage are typically part of an unstructured data analytics IT infrastructure.


      According to the Komprise 2022 State of Unstructured Data Management survey, 65% of IT organizations are delivering unstructured data to big data analytics programs.

      Komprise Smart Data Workflows is an automated process for all the steps required to find the right data across your storage assets, tag and enrich the data, and send it to external tools such as a data lakehouse for analysis. Komprise makes it easier and more streamlined to find and prepare the right data for analytics projects.

      Getting Started with Komprise:

    • Unstructured Data Management

      What is Unstructured Data Management?

      Unstructured Data Management is a category of software that has emerged to address the explosive growth of unstructured data in the enterprise and the modern reality of hybrid cloud storage. Data storage and data backup technology vendors are now increasingly recognizing the importance of unstructured data management as data outlives infrastructure and increasingly data mobility is needed as more data migrates to cloud data storage.

      Unstructured data management must be independent and agnostic from data storage, backup, and cloud infrastructure technology platforms. There are 5 requirements:

      1. Goes Beyond Storage Efficiency
      2. Must be Multi-Directional
      3. Doesn’t Disrupt Users and Workflows
      4. Should Create New Uses for Your Data
      5. Puts Your Data First

      In August 2021, Komprise published the State of Unstructured Data Management Report.


      Highlights of the Unstructured Data Management Report

      Unstructured Data is Growing, as are its Costs


      • 65.5% of organizations spend more than 30% of their IT budgets on data storage and data management.
      • Most (62.5%) will spend more on storage in 2021 versus 2020.

      Getting More Unstructured Data to the Cloud is a Key Priority

      • 50% of enterprises have data stored in a mix of on-premises and cloud-based storage.
      • Top priorities for cloud data management include: migrating data to the cloud (56%), cutting storage and data costs (46%), and governance and security of data in the cloud (41%).

      IT Leaders Want Visibility First Before Investing in More Data Storage

      • Investing in analytics tools was the highest priority (45%) over buying more cloud or on-premises storage or modernizing backups.
      • One-third of enterprises acknowledge that over 50% of data is cold, while 20% don’t know, suggesting a need to right-place data through its lifecycle.

      Unstructured Data Management Goals & Challenges: Visibility, Cost Management and Data Lakes

      • 44.9% wish to avoid rising costs.
      • 44.5% want better visibility for planning.
      • 42% are interested in tagging data for future use and enabling data lakes.


      2022 State of Unstructured Data Management Report

      In August 2022, Komprise published the 2nd annual State of Unstructured Data Management Report: Komprise Survey Finds 65% of Enterprise IT Leaders are Investing in Unstructured Data Analytics. The top five trends from the report are:

      1. User Self-Service: In data management, self-service typically refers to the ability for authorized users outside of storage disciplines to search, tag, enrich and act on data through automation, such as a research scientist wanting to continuously export project files to a cloud analytics service.
      2. Moving Data to Analytics Platforms: A majority (65%) of organizations plan to or are already delivering unstructured data to their big data analytics platforms.
      3. Cloud File Storage Gains Favor: Cloud NAS topped the list for storage investments in the next year (47%).
      4. User Expectations Beg Attention: Organizations want to move data without disrupting users and applications (42%).
      5. IT and Storage Directors want Flexibility: A top goal for unstructured data management (42%) is to adopt new storage and cloud technologies without incurring extra licensing penalties and costs, such as cloud egress fees.


      Why you need to manage your unstructured data

      In a 2022 interview, Komprise co-founder and COO Krishna Subramanian defined unstructured data this way:

      Unstructured data is any data that doesn’t fit neatly into a database, and isn’t really structured in rows and columns. So every photo on your phone, every X-ray, every MRI scan, every genome sequence, all the data generated by self-driving cars – all of that is unstructured data. And perhaps more relevant to more businesses, artificial intelligence (AI) and machine learning (ML) – they depend on, and usually output, unstructured data too.

      Unstructured data is growing every day at a truly astonishing rate. Today, 85% of the world’s data is unstructured data.

      And it’s more than doubling, every two years.

      The importance of an unstructured data strategy for enterprise

      In part two of the interview, she noted:

      Unstructured data doesn’t have a common structure. But it does have something called metadata. So every time you take a picture on your phone, there’s certain information that the phone captures, like the time of day, the location where the picture was taken, and if you tag it as a favorite, it’ll have that metadata tag on it too. It might know who’s in the photo, there are certain metadata that are kept.

      All file systems store some metadata about the data. A product like Komprise Intelligent Data Management has a distributed way to search across all the different environments where you’ve stored data, and create a global index of all that metadata around the data. And that in itself is a difficult problem, because again, unstructured data is so huge. A petabyte of data might be a few billion files, and a lot of these customers are dealing with tens to hundreds of petabytes.

      So you need a system that can create an efficient index of hundreds of billions of files that could be distributed in different places. You can’t use a database, you have to have a distributed index, and that’s the technology we use under the hood, but we optimize it for this use case. So you create a global index. Learn more about unstructured data tagging.
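
      A minimal sketch of the global metadata index described in the interview above, assuming a couple of hypothetical mount points and an in-memory list (a real product uses a distributed, persistent index rather than this):

```python
from pathlib import Path

# Hypothetical mount points representing different storage silos
SILOS = {
    "netapp-prod": Path("/mnt/netapp/prod"),
    "isilon-research": Path("/mnt/isilon/research"),
}

def build_index(silos: dict[str, Path]) -> list[dict]:
    """Collect basic metadata (not file contents) for every file across silos."""
    index = []
    for silo, root in silos.items():
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            st = path.stat()
            index.append({
                "silo": silo,
                "path": str(path),
                "size": st.st_size,
                "mtime": st.st_mtime,
                "ext": path.suffix.lower(),
            })
    return index

def large_bam_files(index: list[dict]) -> list[dict]:
    """Example query: all .bam files over 1 GiB, regardless of which silo holds them."""
    return [e for e in index if e["ext"] == ".bam" and e["size"] > 1 << 30]
```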

      The Future of Unstructured Data Management

      In an end-of-year blog post, Komprise executives reviewed unstructured data management and data storage predictions for 2023, including the implications of adopting data services, processing data at the edge, multi-cloud challenges, the importance of smart data migration strategies, and more.

      Getting Started with Komprise:

    • Unstructured Data Migration

      What is Unstructured Data Migration?

      Unstructured Data Migration is the process of selecting and moving data from one location to another – this may involve moving data across different storage vendors, and across different formats.

      Data migrations are often done in the context of retiring a system and moving to a new system, or in the context of a cloud migration, or in the context of a modernization or upgrade strategy.

      When it comes to unstructured data migrations and migrating enterprise file data workloads to the cloud, data migrations can be laborious, error prone, manual, and time consuming. Migrating data may involve finding and moving billions of files (large and small), which can succumb to storage and network slowdowns or outages. Also, different file systems do not often preserve metadata in exactly the same way, so migrating data without loss of fidelity and integrity can be a challenge.

      NAS Data Migration

      Network Attached Storage (NAS) migration is the process of migrating from one NAS storage environment to another. This may involve migrations within a vendor’s ecosystem such as NetApp data migration to NetApp or across vendors such as NetApp data migration to Isilon or EMC to NetApp or EMC to Pure FlashBlade. A high-fidelity NAS migration solution should preserve not only the file itself but all of its associated metadata and access controls.

      Network Attached Storage (NAS) to Cloud data migration is the process of moving data from an on-premises data center to a cloud. It requires data to be moved from a file format (NFS or SMB) to an Object/Cloud format such as S3. A high-fidelity NAS-to-Cloud migration solution preserves all the file metadata including access control and privileges in the cloud. This enables data to be used either as objects or as files in the cloud.

      Storage migration is a general-purpose term that applies to moving data across storage arrays.

      Unstructured Data Migration Phases

      Data migrations typically involve four phases:

      • Planning – Deciding what data should be migrated. Planning may often involve analyzing various sources to find the right data sets. For example, several customers today are interested in upgrading some data to Flash – finding hot, active data to migrate to Flash can be a useful planning exercise.
      • Initial Migration – Do a first migration of all the data. This should involve migrating the files, the directories and the shares.
      • Iterative Migrations – Look for any changes that may have occurred during the initial migration and copy those over.
      • Final Cutoff – A final cutoff involves deleting data at the original storage and managing the mounts, etc., so data can be accessed from the new location going forward.

      Resilient data migration refers to an approach that automatically adjusts for failures and slowdowns and retries as needed. It also checks the integrity of the data at the destination to ensure full fidelity.
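
      The sketch below illustrates that resilient behavior in miniature: retry a failed copy a few times with a back-off, then verify the destination with an MD5 checksum. Paths, retry counts and back-off are hypothetical; a real migration engine does this at scale and in parallel:

```python
import hashlib
import shutil
import time
from pathlib import Path

def md5sum(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file through MD5 so large files never need to fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

def resilient_copy(src: Path, dst: Path, retries: int = 3, backoff_s: int = 5) -> None:
    """Copy with retries on failure, then verify source and destination checksums."""
    for attempt in range(1, retries + 1):
        try:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)           # copies file data and basic timestamps
            if md5sum(src) != md5sum(dst):
                raise IOError(f"checksum mismatch for {src}")
            return
        except OSError:
            if attempt == retries:
                raise
            time.sleep(backoff_s * attempt)  # simple linear back-off before retrying
```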

      Types of Unstructured Data Migrations

      When it comes to file data, there are NAS Migrations and Cloud Migrations. There are also NAS migrations to the cloud. Data migrations are often seen as a dreaded and laborious part of the storage management lifecycle. Free tools are often considered first but they can introduce risk, time and cost overruns and they are typically labor intensive and error-prone. On the other hand, traditional migration tools have complex legacy architectures and are expensive point products that do not provide ongoing value – resulting in sunk costs.

      Look for easy-to-use, fast, reliable data migration tools that are not one-and-done point tools. The right data migration solution should be able to handle other unstructured data management use cases, including cloud data tiering and data replication.

      How to Plan a Smart NAS or Cloud Unstructured Data Migration?

      The typical steps for any unstructured data migration project are:

      Analytics: Before you start an unstructured data migration project, it’s important to have visibility into:

      •  How fast is your data growing?
      •  How much data is hot vs. cold data?
      •  Who is using your data?

      Savings: Estimate how much you’ll save by moving to the new NAS or cloud infrastructure. This information will guide which NAS or cloud storage mix is best for your data.

      Offload heavy lifting: Your data migration solution should be able to manage multiple iterations of the migration and handle problems by automatically retrying in the event of a slowdown or a network or storage failure.

      Preserve data integrity: Your data migration solution should provide MD5 checksum on every file and assure all metadata and access controls migrate to the new environment.

      Avoid sunk costs: File data migrations are a lot of heavy lifting. Your data migration solution should include automatic parallelization at every level for elastic scaling and the ability to migrate petabytes of data seamlessly and reliably.

      Reduce downtime: It is recommended that your data migration solution runs multiple iterations for more efficient cutovers.

      Planning Your Cloud File Migration

      Komprise and Unstructured Data Migration

      Komprise Elastic Data Migration is included in the Komprise Intelligent Data Management platform or is available standalone. Designed for cloud migrations and NAS migrations, with Komprise Elastic Data Migration you can run, monitor, and manage hundreds of data migrations faster than ever at a fraction of the cost. Learn more about Komprise Smart Data Migrations.


      Unstructured Data Migration and the Cloud

      As unstructured data continues to grow exponentially, organizations struggle to control costs for file data storage. Many are turning to the cloud to scale and manage spend. However, choosing the right files to move can be challenging as there can easily be billions of files. Many enterprises have over 1 PB of data, which represents roughly 3 billion files. This unstructured data is growing exponentially and resides in multi-vendor storage silos for access by various applications and departments.

      For these reasons, organizations often lack visibility into file data and are making decisions in the dark. To be agile and competitive, IT teams must evolve storage management to become a holistic data management strategy. The right approach to data migration and the cloud for file and object data is to use analytics in cloud data management:

      1. Understand your data patterns
      2. Plan using a cost model
      3. Use data to drive stakeholder buy-in
      4. Eliminate user disruption
      5. Create a systematic plan for ongoing data management

      Read the eBook: 5 Ways to Use Analytics for Cloud Data Migrations

      Top Unstructured Data Migration Challenges

      Businesses today are looking at modernizing storage and moving to a multi-cloud strategy. As they evolve to faster, flash-based Network Attached Storage (NAS) and the cloud, migrating data into these environments can be challenging. The goal is to migrate large production data sets quickly, without errors, and without disruption to user productivity.

      The top cloud data migration challenges are:

      1. How do you manage cloud data migrations without downtime?
      2. How can you automate cloud data migrations to eliminate manual effort?
      3. How can you ensure all the permissions, ACLs, metadata are copied correctly during a cloud data migration so you can access the data in the cloud as files?

      You can overcome these challenges with some planning and automation that preserves file-based access both from on-premises and the cloud.

      Unstructured Data Migration Tools

      • Free Tools: Require a lot of babysitting and are not reliable for migrating large volumes of data.
      • Point Data Migration Solutions: Have complex legacy architectures and create sunk costs.
      • Komprise Elastic Data Migration: Makes cloud data migrations simple, fast, reliable and eliminates sunk costs since you continue to use Komprise after the migration. Komprise is the only solution that gives you the option to cut 70%+ cloud storage costs by placing cold data in Object classes while maintaining file metadata so it can be promoted in the cloud as files when needed. Learn more >

      Learn more about Smart Data Migrations for unstructured file and object data:

      Part 1

      Part 2


      What is Data Migration?

      Data Migration is the process of selecting and moving data from one location to another and can involve moving data across different storage vendors, and across different formats.

      How is data migration done?

      Data migrations are often done in the context of retiring a system and moving to a new system, or in the context of a cloud migration, or in the context of a modernization or upgrade strategy.

      What tools to use for data migration?

      There are a variety of free tools but these require the most babysitting. Point Data Migration solutions have complex legacy architectures and can create sunk costs. Komprise Elastic Data Migration makes cloud data migrations simple, fast and reliable and eliminates sunk costs.

      Getting Started with Komprise:

    • Unstructured Data Workflows

      Unstructured data workflows can include a variety of processes and technologies, such as data management tools, document management systems, content management systems, and collaboration platforms. Data is no longer static and needs to move between systems and clouds to satisfy changing requirements and to support big data and AI/ML initiatives. Technologies and processes that automate and streamline these workflows can shave significant time and costs from finding, preparing and moving data into data lakes and analytics platforms or to meet compliance requirements.

      Overall, unstructured data workflows play an important role in modern data management and are critical for organizations that generate and use large volumes of unstructured data. By implementing effective unstructured data workflows, organizations can ensure that data lives at the right place and at the right time to satisfy a variety of enterprise and departmental needs.

      Komprise Smart Data Workflows for File and Object Data


      Komprise Smart Data Workflows allow you to define and execute automated processes, which can be industry or domain specific, to search and find, migrate and tier, and ultimately get greater value from unstructured data. With Smart Data Workflows you can create custom queries across hybrid, multi-cloud, on-premises and edge data silos to find the file and object data you need, which is often locked away in data storage silos, execute Komprise or external functions on a subset of data, and tag the data with additional metadata. Move only the data you need and manage the lifecycle of unstructured data intelligently.
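
      As a rough sketch of what such a workflow can look like (hypothetical paths and tags, reusing the kind of metadata index sketched earlier; this is not Komprise's actual API), the following chains a query, a tagging step and a copy to an analytics staging area:

```python
import shutil
from pathlib import Path

STAGING = Path("/mnt/lakehouse_staging/project-x")   # hypothetical analytics landing zone

def run_workflow(index: list[dict], ext: str, tag: str) -> int:
    """Find files by extension, tag them, and stage copies for an analytics tool."""
    staged = 0
    for entry in index:
        if entry["ext"] != ext:
            continue
        entry.setdefault("tags", []).append(tag)          # enrich the metadata
        src = Path(entry["path"])
        STAGING.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, STAGING / src.name)             # stage a copy for analysis
        staged += 1
    return staged

# Example: stage all BAM files and tag them for a hypothetical genomics pipeline
# run_workflow(index, ext=".bam", tag="genomics-run-42")
```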

      Watch the unstructured data management workflow with Komprise CTO and co-founder at Cloud Field Day.

      Getting Started with Komprise:

  • V
    • Virtual Data Lakes

      A virtual data lake, which Komprise calls the Global File Index, is a granular, flexible and searchable index across file, object and cloud data storage spanning petabytes of unstructured data. A virtual data lake has also been called a metadata lake; it allows organizations to find data and execute Smart Data Workflows that enable Big Data, AI, and ML projects.

      Research has shown that with Big Data projects, 80% or more of the time is spent on finding the right data and getting it out of data centers and cloud infrastructure. With Komprise, powerful metadata-based search and indexing technology automates the process of finding unstructured data based on your specific criteria. This capability allows organizations to dynamically build virtual data lakes across storage silos on the fly so they can better manage and reuse their data for AI and ML.

      Komprise Deep Analytics lets you build specific queries to find the files you need and tag them to build real-time virtual data lakes that the entire organization can use, without having to first move the data.
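
      A minimal sketch of what such a query can look like against a metadata index like the one sketched earlier (the 'tags' field and the query parameters are hypothetical; this is not Komprise's actual interface):

```python
import time

def query(index: list[dict], ext: str, older_than_days: int, tag: str | None = None) -> list[dict]:
    """Filter a metadata index by extension, age, and an optional user-defined tag."""
    cutoff = time.time() - older_than_days * 86400
    hits = []
    for entry in index:
        if entry["ext"] != ext:
            continue
        if entry["mtime"] > cutoff:
            continue
        if tag is not None and tag not in entry.get("tags", []):
            continue
        hits.append(entry)
    return hits

# Example: all DICOM images over three years old that carry a project tag
# results = query(index, ext=".dcm", older_than_days=3 * 365, tag="oncology-study-7")
```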

      Learn more about Komprise Deep Analytics.

      Getting Started with Komprise:


Data Tiering

Data Tiering refers to a technique of moving less frequently used data, also known as cold data, to cheaper levels of storage or tiers. The term “data tiering” arose from moving data around different tiers or classes of storage within a storage system, but has expanded now to mean tiering or archiving data from a storage system to other clouds and storage systems. See also cloud tiering and choices for cloud data tiering.


Data Tiering Cuts Costs Because 70%+ of Data is Cold

As data grows, storage costs are escalating. It is easy to think the solution is more efficient storage. But the real cause of storage costs is poor data management. Over 70% of data is cold and has not been accessed in months, yet it sits on expensive storage and consumes the same backup resources as hot data. As a result, data storage costs are rising, backups are slow, recovery is unreliable, and the sheer bulk of this data makes it difficult to leverage new options like Flash and Cloud.

Data Tiering Was Initially Used within a Storage Array

Data Tiering was initially a technique used by storage systems to reduce the cost of data storage by tiering cold data within the storage array to cheaper but less performant options – for example, moving data that has not been touched in a year or more from an expensive Flash tier to a low-cost SATA disk tier.

Typical storage tiers within a storage array include:
  • Flash or SSD: A high-performance storage class but also very expensive. Flash is usually used on smaller data sets that are being actively used and require the highest performance.
  • SAS Disks: Usually the workhorse of a storage system, they are moderately good at performance but more expensive than SATA disks.
  • SATA Disks: Usually the lowest price-point for disks but not as performant as SAS disks.
  • Secondary Storage, often Object Storage: Usually a good choice for capacity storage – to store large volumes of cool data that is not as frequently accessed, at a much lower cost.


Cloud Data Tiering is now Popular

Increasingly, customers are looking at another option – tiering or archiving data to a public cloud.

  • Public Cloud Storage: Public clouds currently have a mix of object and file storage options. The object storage classes such as Amazon S3 and Azure Blob (Azure Storage) provide tremendous cost efficiency and all the benefits of object storage without the headaches of setup and management.

Tiering and archiving less frequently used data or cold data to public cloud storage classes is now more popular. This is because customers can leverage the lower cost storage classes within the cloud to keep the cold data and promote it to the higher cost storage classes when needed. For example, data can be archived or tiered from on-premises NAS to Amazon S3 Infrequent Access or Amazon Glacier for low ongoing costs, and then promoted to Amazon EFS or FSx when you want to operate on it and need performance.
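
To make the storage-class idea concrete, the fragment below (a sketch using boto3; the bucket name and paths are hypothetical) lands objects directly in a colder S3 storage class. Objects in an archival class such as Glacier need an asynchronous restore request before they can be read again:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-archive-bucket"   # hypothetical bucket

# Land the object directly in an infrequent-access storage class
s3.upload_file(
    Filename="/mnt/nas/archive/scan-0001.dcm",
    Bucket=BUCKET,
    Key="archive/scan-0001.dcm",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)

# Or land it in an archival class for the lowest ongoing cost
s3.upload_file(
    Filename="/mnt/nas/archive/scan-0002.dcm",
    Bucket=BUCKET,
    Key="archive/scan-0002.dcm",
    ExtraArgs={"StorageClass": "GLACIER"},
)

# Reading a Glacier-class object later requires an asynchronous restore first
s3.restore_object(
    Bucket=BUCKET,
    Key="archive/scan-0002.dcm",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)
```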

But in order to get this level of flexibility, and to ensure you’re not treating the cloud as just a cheap storage locker, data that is tiered to the cloud needs to be accessible natively in the cloud without requiring third-party software. This requires file-tiering, not block-tiering.

Block Tiering Creates Unnecessary Costs and Lock-In

Block-level tiering was first introduced as a technique within a storage array to make the storage box more efficient by leveraging a mix of technologies such as more expensive SAS disks as well as cheaper SATA disks.

Block tiering breaks a file into various blocks – metadata blocks that contain information about the file, and data blocks that are chunks of the original file. Block-tiering or Block-level tiering moves less used cold blocks to lower, less expensive tiers, while hot blocks and metadata are typically retained in the higher, faster, and more expensive storage tiers.

Block tiering is a technique used within the storage operating system or filesystem and is proprietary. Storage vendors offer block tiering as a way to reduce the cost of their storage environment. Many storage vendors are now expanding block tiering to move data to the public cloud or on-premises object storage.

But, since block tiering (often called CloudPools – examples are NetApp FabricPool and Dell EMC Isilon CloudPools) is done inside the storage operating system as a proprietary solution, it has several limitations when it comes to efficiency of reuse and storage savings. First, with block tiering, the proprietary storage filesystem must be involved in all data access, since it retains the metadata and holds the “map” for reassembling the file from its various blocks. This also means that cold blocks moved to a lower tier or the cloud cannot be directly accessed from the new location without involving the proprietary filesystem, because the cloud has neither the metadata map nor the file context and attributes needed to put the file together. So block tiering is a proprietary approach that often results in unnecessary rehydration of the data and treats the cloud as a cheap storage locker rather than as a powerful way to use data when needed.
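
A purely conceptual illustration of why that is (hypothetical data structures, not any vendor's actual format): the cloud ends up holding anonymous chunks, while the map that orders them stays inside the proprietary array.

```python
from dataclasses import dataclass, field

@dataclass
class BlockMap:
    """Lives only inside the proprietary filesystem on the storage array."""
    file_name: str
    acls: dict
    block_order: list[str] = field(default_factory=list)  # chunk IDs, in order

# What actually lands in the cloud bucket: opaque chunks keyed by ID
cloud_objects = {
    "blk-9f31": b"...second half of the file...",
    "blk-07aa": b"...first half of the file...",
}

# Without the array's BlockMap there is no way to know which chunks belong to
# which file, in what order, or with what permissions -- so the cloud copy is
# unusable on its own and must be rehydrated through the original storage system.
array_only_map = BlockMap(
    file_name="project/report.docx",
    acls={"owner": "alice", "group": "eng"},
    block_order=["blk-07aa", "blk-9f31"],
)
```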

The only way to access data in the cloud is to run the proprietary storage filesystem in the cloud which adds to costs. Also, many third-party applications such as backup software that operate at a file level require the cold blocks to be brought back or rehydrated, which defeats the purpose of tiering to a lower cost storage and erodes the potential savings. For more details, read the white paper: Block vs. File-Level Tiering and Archiving.

Watch the webinar: Know Your Cloud Tiering Choices

File Tiering Maximizes Savings and Eliminates Lock-In

File-tiering is an advanced modern technology that uses standard protocols to move the entire file along with its metadata in a non-proprietary fashion to the secondary tier or cloud. File tiering is harder to build but better for customers because it eliminates vendor lock-in and maximizes savings. Whether files have POSIX-based Access Control Lists (ACLs) or NTFS extended attributes, all this metadata along with the file itself is fully tiered or archived to the secondary tier and stored in a non-proprietary format. This ensures that the data can be brought back in its entirety as a file when needed. File tiering does not just move the file; it also moves the attributes, security permissions and ACLs along with the file and maintains full file fidelity, even when you are moving a file to a different storage architecture such as object storage or cloud. This ensures that applications and users can use the moved file from the original location, and that they can directly open the file natively in the secondary location or cloud without requiring any third-party software or storage operating system.
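
As a simplified illustration of moving a file together with its metadata in an open format (hypothetical paths; real file tiering also carries NTFS/POSIX ACLs in full and typically leaves a link or stub at the original location), the sketch below copies each file to the secondary tier and writes its attributes to a plain JSON sidecar that any tool can read:

```python
import json
import shutil
from pathlib import Path

def tier_file(src: Path, dest_dir: Path) -> None:
    """Copy a file to the secondary tier and store its metadata as open JSON."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    target = dest_dir / src.name
    shutil.copy2(src, target)                # file data plus basic timestamps

    st = src.stat()
    sidecar = {
        "original_path": str(src),
        "size": st.st_size,
        "mode": oct(st.st_mode),             # POSIX permission bits
        "uid": st.st_uid,
        "gid": st.st_gid,
        "mtime": st.st_mtime,
        "atime": st.st_atime,
    }
    meta_path = target.with_suffix(target.suffix + ".meta.json")
    meta_path.write_text(json.dumps(sidecar, indent=2))

# Example: tier a single file to a hypothetical secondary tier
# tier_file(Path("/mnt/nas/projects/report.docx"), Path("/mnt/object_tier/projects"))
```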

Since file tiering maintains full file fidelity and native access based on standards at every tier, it also means that third-party applications can access the moved data without requiring any agents or proprietary software. This ensures that savings are maximized, since backup software and other third-party applications can access moved data without rehydrating or bringing the file back to the original location. It also ensures that the cloud can be used to run valuable applications such as compliance search or big data analytics on the trove of tiered and archived data without requiring any third-party software or additional costs.

File-tiering is an advanced technique for archiving and cloud tiering that maximizes savings and breaks vendor lock-in.

Data Tiering Can Cut 70%+ Storage and Backup Costs When Done Right

In summary, data tiering is an efficient solution to cut storage and backup costs because it tiers or archives cold, unused files to a lower-cost storage class, either on-premises or in the cloud. However, to maximize the savings, data tiering needs to be done at the file level, not block level. Block-level tiering creates lock-in and erodes much of the cost savings because it requires unnecessary rehydration of the data. File tiering maximizes savings and preserves flexibility by enabling data to be used directly in the cloud without lock-in.

Why Komprise is the easy, fast, no lock-in path to the cloud for file and object data.


Want To Learn More?

Related Terms

Getting Started with Komprise: