This article was adapted from its original version at HealthITAnswers.
Like at no other time in medical history, new technologies combined with industry collaboration are delivering groundbreaking opportunities for more accurate clinical decision-making and faster development of treatments for life-threatening conditions. Consider the speed at which major research institutions came together to develop and release COVID-19 vaccines, an effort that would not have been possible without the cloud and digital tools for rapid research, testing, analysis, and communication. As another example, the Internet of Medical Things (IoMT) enables remote monitoring and diagnosis through wearable biosensors that track medication adherence and deliver alerts on chronic conditions such as emphysema, multiple sclerosis, and diabetes.
The flip side of medical technology innovation is that these devices and sensors are generating massive amounts of unstructured data – adding to the overall healthcare data deluge.
Roughly 30% of the world’s data volume is being generated by the healthcare industry, according to RBC Capital Markets.
The collection and analysis of quality data is vital to healthcare, but the rising volume of unstructured data, data that doesn't fit neatly into the rows and columns of spreadsheets and databases, is stretching IT budgets.
Life sciences: patient care innovation and competitive gain in the cloud
One outcome of unstructured data growth in life sciences is an acceleration of data center consolidation and migration to the public cloud. Sixty percent of pharma executives have already made changes, or have a plan in place, to invest in cloud-based services to support their digital transformation efforts, according to PwC.
Cloud-based artificial intelligence (AI) is particularly exciting. A Deloitte report outlined several use cases, including integrating data and improving clinical trial workflows: "They can even use AI to generate insights from past and current trials to inform and improve future trials."
Yet the question remains: which data sets should move to the cloud, and how?
Answering this starts with taking a closer look at the traditional life-sciences data infrastructure. Pharmaceutical companies, biotechs and research institutions frequently face the following data management issues, which drive up costs and impede R&D activities:
- Data silos hampering collaboration across teams and departments, as well as complicating audits.
- Data visibility issues with data spread across many different hybrid IT environments and disparate applications.
- Difficulties searching and securely accessing and using data exported into cloud-based data lakes and other new data platforms.
- Continual change in regulations, affecting data practices.
- Too much time spent on data preparation and deployment: at least 50%, according to IDC.
Read the case study: How Pfizer Uses Analytics to Cut Storage Costs by 75%
Here are the leading considerations for managing life sciences data in the cloud:
Analytics and segmentation:
Before moving data to the cloud or buying more storage, understanding data across all hybrid storage environments, along with its usage and access patterns, can direct optimal placement. A company may want to move clinical trial data to the cloud after the trial has concluded. This satisfies compliance requirements for retaining data for the mandated period without clogging up expensive on-premises storage, which should be reserved for active, regularly accessed data. Analyzing and right-placing data can save significantly on storage costs, freeing up budget for R&D projects while ensuring that critical workloads have the appropriate protection and performance.
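The segmentation step above can be sketched in a few lines of Python. This is a minimal illustration, not a product feature, and it assumes last-modified time is a usable proxy for data activity (real deployments typically analyze last-access times and heat maps across NAS and cloud storage):

```python
import time
from pathlib import Path

# Assumption: files untouched for a year are "cold" candidates for cloud tiering.
COLD_AFTER_DAYS = 365

def segment_by_age(root: str, cold_after_days: int = COLD_AFTER_DAYS):
    """Split files under `root` into hot and cold lists by last-modified time."""
    cutoff = time.time() - cold_after_days * 86400
    hot, cold = [], []
    for path in Path(root).rglob("*"):
        if path.is_file():
            (cold if path.stat().st_mtime < cutoff else hot).append(path)
    return hot, cold
```

Running a scan like this before a migration gives a defensible estimate of how much data can move off expensive primary storage without touching active workloads.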
Watch the Webinar: Komprise Unstructured Data Management for Healthcare and Life Sciences
Data tagging for context and search:
With scalable data lake and data warehouse technologies now commonplace, IT teams can move data into cloud services where data scientists can run machine learning and other processes on it. The trick is finding the right data. When moving data from clinical applications and instruments into cloud storage, contextual data is lost. A data management platform which facilitates tagging can apply metadata such as project, disease type, instrument type and demographics to the files. That way when files are moved into the cloud the researcher can search on keywords and find what they need without manual digging.
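A tagging layer like the one described can be modeled as an inverted index that maps tag key/value pairs to file identifiers, so a keyword search returns matching files directly. The class, tag names, and file paths below are hypothetical and for illustration only:

```python
from collections import defaultdict

class TagIndex:
    """Minimal sketch of a metadata tag index: map tag key/value pairs to
    file identifiers so data remains searchable after moving to the cloud."""

    def __init__(self):
        self._index = defaultdict(set)

    def tag(self, file_id: str, **tags):
        # e.g. tag("trial042/scan1.dcm", project="trial-042", disease="MS")
        for key, value in tags.items():
            self._index[(key, value.lower())].add(file_id)

    def search(self, **criteria):
        """Return the set of files matching ALL given tag criteria."""
        sets = [self._index.get((k, v.lower()), set()) for k, v in criteria.items()]
        return set.intersection(*sets) if sets else set()
```

A researcher could then query by disease type alone, or narrow by project and instrument, without knowing where the underlying files physically live.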
Driving collaboration between research data scientists and central IT:
Research data specialists and IT directors will benefit by working more closely together. The former know what scientists are looking for while the latter understand the nature of cloud infrastructure and data analytics implementations so they can create the best technical foundation to meet these research requirements. The end goal is to make it more viable for everyday analysts to search across distributed data sets to find what they need so that IT can continually, through policy-based automation, move the right data to the right platforms for analysis.
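The policy-based automation mentioned above can be sketched as an ordered list of placement rules, evaluated first-match-wins. The age thresholds, file extensions, and target names here are hypothetical placeholders, not an actual product configuration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TierRule:
    """A placement rule: files at least min_age_days old (and optionally of a
    given type) are routed to the named target platform."""
    min_age_days: int
    target: str
    extension: Optional[str] = None  # None matches any file type

# Hypothetical policy: genomics output tiers to the data lake sooner
# than generic files, which archive after a year.
RULES = [
    TierRule(min_age_days=90, target="cloud-data-lake", extension=".bam"),
    TierRule(min_age_days=365, target="cloud-archive"),
]

def place(age_days: int, extension: str, default: str = "on-prem-nas") -> str:
    """Return the first matching target; recently used files stay on active storage."""
    for rule in RULES:
        if age_days >= rule.min_age_days and rule.extension in (None, extension):
            return rule.target
    return default
```

Encoding placement decisions as data rather than ad hoc scripts is what lets IT adjust tiering behavior as research needs and regulations change, without rewriting the mover itself.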
Ransomware and data protection:
Privacy and security are high priorities for life sciences organizations, yet fewer than half (45%) have active ransomware protections in place to prevent breaches and data loss, according to research by Egnyte. One strategy is to move cold data to object-lock storage, such as Amazon S3 with Object Lock enabled, and eliminate it from active storage and backups. This allows organizations to create a logically isolated recovery copy while cutting storage and backup costs by up to 80%.
Read the Blog: How to Protect File Data from Ransomware at 80% Lower Cost
Life-sciences organizations today have the tools and technologies to bring new and better products to market faster than ever before. Yet wrangling unstructured data is slowing down the process of leveraging new types of clinical data for research and creating a blind spot in data analytics. Rethinking traditional data management practices to fully leverage clinical and laboratory images, patient files including telehealth video, sensor data from wearables, research files and more is a must-have capability for life sciences leaders.