This blog has been adapted from its original version on ITOpsTimes.
As artificial intelligence (AI) becomes deeply embedded in enterprise operations, IT storage professionals are being pulled into uncharted territory. Where once their focus was confined to provisioning, performance tuning, data protection and backups, today they are expected to orchestrate data services, ensure regulatory compliance, optimize cost models, and even help train AI models.
Here are 7 key requirements and strategies for storage teams to adapt to the new world of AI:
1. Understand Your Own Data Landscape
In the AI era, storage teams need granular metrics about their own environment: data about the data.
- Know where data resides, its access patterns, growth trends, duplication rates, and compliance status.
- Segment data by department, project, or data type to make smarter storage management decisions and improve data searchability and usability for internal customers.
- This often requires metadata enrichment or tagging and tools that can crack open files to supply additional context about the data. This supports accurate, efficient AI data ingestion.
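The "data about the data" idea above can be sketched in a few lines. The following is a minimal, hypothetical profiler (the function name, one-year cold threshold, and hash-based duplicate grouping are illustrative choices, not a specific product's method) that walks a directory tree and gathers the kinds of metrics described: footprint, age, and duplication.

```python
import hashlib
import os
import time
from collections import defaultdict

def profile_tree(root):
    """Walk a directory tree and collect 'data about the data':
    file count, total bytes, files untouched for a year, and
    duplicate candidates grouped by content hash."""
    stats = {"files": 0, "bytes": 0, "cold_files": 0}
    by_hash = defaultdict(list)
    cutoff = time.time() - 365 * 24 * 3600  # illustrative "cold" cutoff
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            stats["files"] += 1
            stats["bytes"] += st.st_size
            if st.st_atime < cutoff:
                stats["cold_files"] += 1
            # Hash contents to find exact-duplicate candidates.
            # (Fine for a sketch; real scanners hash in chunks.)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            by_hash[digest].append(path)
    dupes = {h: paths for h, paths in by_hash.items() if len(paths) > 1}
    return stats, dupes
```

At enterprise scale this work is done by purpose-built indexing tools rather than a single-threaded walk, but the metrics collected are the same.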
2. Get Analytics for Smarter Spending
Between storage hardware, backups, disaster recovery, and hybrid cloud capacity, enterprises invest millions each year. IT organizations are now managing enormous volumes of unstructured (file and object) data: 20, 30, or even upwards of 50 petabytes (PB). But as unstructured data footprints expand, it’s clear that not all data can remain active or be treated equally. Treating everything as “hot” data drives up unnecessary costs, increases exposure to ransomware attacks, and clogs infrastructure needed for AI workloads.
To address this, IT leaders are embracing transparent, automated data tiering strategies that operate across storage vendors. Cold data can be shifted to lower-cost storage or cloud-based solutions transparently, without user disruption. Some organizations even layer in “cool” storage tiers to retain a consistent experience while cutting costs. At the same time, chargeback or showback models allow departments to see how much data they’re using, how old it is, and who the top consumers are. Learn more about file tiering with Komprise.
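At its core, an automated tiering policy is a rule over file age. As a rough sketch (the 30- and 180-day thresholds and the function name are hypothetical; real policies are usually tuned per share or department), a hot/cool/cold classifier might look like:

```python
import os
import time

# Hypothetical thresholds, in days since last access.
HOT_DAYS, COOL_DAYS = 30, 180

def tier_for(path, now=None):
    """Classify a file as hot, cool, or cold by last-access age:
    the kind of rule a tiering engine applies before transparently
    moving cold data to cheaper storage."""
    now = now if now is not None else time.time()
    age_days = (now - os.stat(path).st_atime) / 86400
    if age_days <= HOT_DAYS:
        return "hot"
    if age_days <= COOL_DAYS:
        return "cool"
    return "cold"
```

A tiering engine would run such a rule continuously and move "cold" results to an object or cloud tier while leaving a transparent link behind, so users and applications see no change.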
3. Deliver AI-Ready Data with Context and Control
Once a cost optimization strategy is in place to handle the ongoing unstructured data deluge, it’s time to focus on preparing data for AI. AI systems can’t function without data, and not just any data: they need contextual, curated, and compliant data. This has made data classification a top priority for storage teams.
To restrict what AI bots can access, protect sensitive information and avoid redundant processing, IT organizations are prioritizing metadata tagging and automated data workflows. Automated tools that allow end users to tag and classify their data are becoming essential.
- For example, researchers need to distinguish between data tagged “internal,” “sensitive,” or “public” to comply with governance policies.
- Power users such as analysts, data scientists and researchers also need easier ways to search across their data – such as project code, project name, and any other relevant keyword indicating the contents.
Since unstructured data can easily span billions of files strewn across tens to hundreds of enterprise directories, efficiently classifying, searching and curating unstructured data is integral to AI.
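The tagging and search workflow described above can be illustrated with a minimal in-memory tag index (the class and field names are hypothetical; production systems index billions of files in a database, not a dictionary):

```python
from collections import defaultdict

class TagIndex:
    """Minimal tag index: attach key/value metadata to file paths,
    then search by tag, e.g. sensitivity level or project code."""

    def __init__(self):
        self._tags = defaultdict(dict)   # path -> {key: value}
        self._index = defaultdict(set)   # (key, value) -> set of paths

    def tag(self, path, **kv):
        for key, value in kv.items():
            # Drop any stale index entry if the tag is being changed.
            old = self._tags[path].get(key)
            if old is not None:
                self._index[(key, old)].discard(path)
            self._tags[path][key] = value
            self._index[(key, value)].add(path)

    def search(self, **kv):
        """Return the paths matching ALL of the given tag pairs."""
        sets = [self._index[(k, v)] for k, v in kv.items()]
        return set.intersection(*sets) if sets else set()
```

With tags like `sensitivity="internal"` or `project="apollo"` in place, both governance checks (what can an AI bot see?) and power-user search (find everything for this project) reduce to simple index lookups.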
4. Create Tighter Connections with Key Stakeholders
To meet the evolving demands of AI, storage professionals should aim to build relationships not only within IT but across the business. Storage teams now serve as trusted advisors to departments, researchers and IT peers, helping define data needs, governance policies, and infrastructure priorities. To be effective, storage practitioners should gather details on enterprise objectives so they can align technical decisions with business outcomes, whether that’s cost control, regulatory compliance, or supporting cutting-edge research.
5. Redefine Metrics for the Modern Enterprise
As AI workloads and cross-functional collaboration become standard, traditional storage SLAs may no longer be sufficient. Look into new data management metrics, such as:
- Top data owners by individual or department;
- Percent of non-compliant or orphaned data;
- Data classification completeness;
- Duplication reduction;
- Chargeback effectiveness.
These KPIs help demonstrate value, encourage better data hygiene and align IT services with business needs.
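Several of these KPIs fall out of a flat file inventory. As a sketch (the record fields `owner`, `bytes`, and `tags` are assumptions about what such an inventory might contain, not a standard schema):

```python
def compute_kpis(inventory):
    """Compute data-management KPIs from an inventory of file
    records: dicts with 'owner', 'bytes', and 'tags' fields."""
    by_owner = {}
    untagged = 0
    orphaned = 0
    total = len(inventory)
    for rec in inventory:
        owner = rec.get("owner")
        if owner is None:
            orphaned += 1  # no identifiable owner = orphaned data
        else:
            by_owner[owner] = by_owner.get(owner, 0) + rec["bytes"]
        if not rec.get("tags"):
            untagged += 1  # counts toward classification completeness
    top_owners = sorted(by_owner.items(), key=lambda kv: kv[1], reverse=True)
    return {
        "top_owners": top_owners,  # owners ranked by bytes consumed
        "pct_orphaned": 100 * orphaned / total if total else 0.0,
        "pct_classified": 100 * (total - untagged) / total if total else 0.0,
    }
```

Reporting these numbers per department is also the starting point for the chargeback and showback models mentioned earlier.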
6. Lock Down File Data Against Cyber Threats
AI’s reliance on data makes storage a prime target for ransomware attacks. Offloading cold data to immutable storage in the cloud is one effective mitigation strategy: immutable storage ensures that once data is written, it cannot be altered or deleted. Because cold data typically accounts for the majority of the unstructured footprint, this can shrink the active attack surface by as much as 80%.
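The guarantee immutable storage provides can be shown with a toy write-once-read-many (WORM) store. This is purely an illustration of the semantics (the class is hypothetical, not a real cloud API); in practice the same behavior is enforced by features such as object lock on cloud storage:

```python
class WormStore:
    """Toy write-once-read-many store: once a key is written, any
    further write to that key is rejected. This is the property that
    keeps ransomware from encrypting or deleting offloaded data."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        if key in self._objects:
            raise PermissionError(f"{key} is immutable and cannot be overwritten")
        self._objects[key] = bytes(data)

    def get(self, key):
        return self._objects[key]
```

An attacker who compromises credentials can still read such data, but cannot tamper with it, which is why immutability pairs with, rather than replaces, access controls.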
7. Be a Partner in AI Infrastructure Building
Training and deploying models often require high-performance compute: GPUs, TPUs, and advanced networking. Whether organizations choose to build their own environments or use cloud-based options, storage teams must be involved from the start to determine where AI should live (on-prem, cloud, or hybrid), how to manage data movement, and how to ensure security and performance at scale.
Research predicted that by 2025, half of all employees would need to reskill due to technology shifts; that moment has arrived. For IT storage professionals, the shift is more than technical. It’s about protecting and furthering their careers and becoming trusted data services providers in the age of AI.
