New Medicare Policy Will Accelerate Rapid Genomics Data Growth


On March 30th, the Centers for Medicare & Medicaid Services announced that the federal health care program will cover the cost of genomic testing for cancer patients, enabling their tumor cells to be sequenced to determine which treatments are likely to be most effective. If private insurers follow suit, as has traditionally been the case, genomic testing has just become routine care for cancer patients and precision medicine has gone mainstream.

But how will all this new genomics data be stored?

IT administrators face a new challenge: how to handle an explosion of genomics data caused by the mainstreaming of genomic testing and precision medicine, without consuming budget and resources that could be better spent on finding better cures and outcomes for patients.

The storage requirements for genomics are projected to be enormous: by 2025, genomics data is estimated to reach up to 20 times the size of all content on YouTube!

Here is the challenge massive data growth poses:

The traditional approach of buying more high-performance primary storage to keep up with this data growth won't be tenable from a budget perspective. Nor is it necessary: the majority of this data becomes inactive, or cold, within months of creation, and it does not need to reside on, or be managed, replicated, and backed up by, the highest-performing storage.
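To get a rough sense of how much of a file share has already gone cold, one approach is simply to scan file metadata and tally bytes by last-access age. The sketch below is only an illustration under stated assumptions (a hypothetical 180-day cutoff, last-access time as the signal); it is not how Komprise performs its analysis.

```python
#!/usr/bin/env python3
"""Rough sketch: estimate how much data under a path has gone cold.

Assumptions (not from the article): a 180-day cutoff and last-access
time (st_atime) as the inactivity signal. Many sites use modification
time or a different threshold instead.
"""
import os
import sys
import time

COLD_AFTER_DAYS = 180  # hypothetical cutoff for "inactive" data


def scan(root: str):
    """Walk the tree and return (total_bytes, cold_bytes)."""
    now = time.time()
    total_bytes = cold_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files we cannot stat
            total_bytes += st.st_size
            age_days = (now - st.st_atime) / 86400
            if age_days > COLD_AFTER_DAYS:
                cold_bytes += st.st_size
    return total_bytes, cold_bytes


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    total, cold = scan(root)
    pct = 100.0 * cold / total if total else 0.0
    print(f"{cold / 1e12:.2f} TB of {total / 1e12:.2f} TB "
          f"({pct:.1f}%) untouched for > {COLD_AFTER_DAYS} days")
```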

Moving this data to a traditional offline archive on secondary storage also presents challenges: IT must restore files whenever they are needed again, which slows the pace of research and even patient care. Because an offline archive requires users to look elsewhere for their data, it hurts productivity and creates user resistance. And since IT does not create this data and lacks visibility into which files are safe to archive and which are in use or needed in the future, determining what can safely be moved has traditionally been a near-impossible task for IT.

To handle this coming explosion of data, IT is going to have to leverage secondary storage without disrupting users.

With Komprise Intelligent Data Management, IT can do so in a way that is analytics-driven, so the right data is moved to secondary storage at the right time after it becomes inactive, and transparent, so there is no change to how users access the moved cold data.

Komprise analyzes data across all of an organization's storage silos and presents a single view of the organization's data. IT can use this analysis to set policies for how data is migrated, transparently archived, and replicated for disaster recovery. Because the data is moved transparently, users and applications retain file-based access as if the files were still right where they put them on the source storage.
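Conceptually, "transparent" tiering means a file still appears at its original path even after its contents move to a cheaper tier. The toy sketch below illustrates that idea with a plain POSIX symbolic link; it is only a conceptual example under simple assumptions (both tiers reachable from the same host, clients that follow symlinks), not a description of how Komprise implements transparent access, and the file and directory names are made up.

```python
#!/usr/bin/env python3
"""Toy sketch of transparent tiering: move a file to a secondary tier
and leave a symbolic link at the original path so existing file-based
access keeps working. Illustrative only; not a product implementation."""
import shutil
import tempfile
from pathlib import Path


def tier_out(src: Path, secondary_root: Path) -> Path:
    """Move src under secondary_root and replace it with a symlink."""
    secondary_root.mkdir(parents=True, exist_ok=True)
    dest = secondary_root / src.name
    shutil.move(str(src), str(dest))   # works across filesystems (copy + delete)
    src.symlink_to(dest)               # the original path still resolves
    return dest


if __name__ == "__main__":
    # Tiny self-contained demo in a temporary directory.
    base = Path(tempfile.mkdtemp())
    primary, secondary = base / "primary", base / "secondary"
    primary.mkdir()
    cold_file = primary / "run_0042.bam"          # hypothetical genomics output
    cold_file.write_bytes(b"fake sequencing data")

    tier_out(cold_file, secondary)
    # Reading through the original path still works after the move.
    print(cold_file.read_bytes(), "->", cold_file.resolve())
```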

Genomics companies are already using Komprise to manage their data growth. Watch how Pacific Biosciences gave itself a storage report card and uses Komprise to manage 7x year-over-year data growth on a flat budget.

Getting Started with Komprise:

Contact | Data Assessment