It is no secret that a digital transformation is underway across life sciences and healthcare. This seismic shift is powered by cutting-edge technology that enables the aggregation and interrogation of the vast quantities of data now at our fingertips. Key to collecting and analyzing this vast and varied data, and to facilitating the move toward personalized medicine, is the adoption of automated processes built on artificial intelligence (AI), in which machines simulate aspects of human intelligence, and machine learning (ML), a subset of AI in which systems improve automatically through experience.

A new realm of possibility

While the term has been used more frequently in recent years, personalized medicine is not a new concept; healthcare professionals have been tailoring treatment to an individual’s specific needs since the advent of medicine. However, the ground-breaking advancements being made today allow us to understand a patient’s potential susceptibility to certain diseases and to predict their response to treatments.

Facilitating this shift is the availability of ‘big data’, characterized by the three Vs:

  • Velocity – the speed at which data is collected
  • Volume – the amount of data collected
  • Variety – the broad range of data types collected, for example, imaging or unstructured data

Routine patient data clearly meets each of these criteria: neuroimaging, for example, produces more than 10 petabytes of data each year, and its complexity has increased nine-fold over the last three decades. [1] Pharmaceutical researchers working with biomarkers and genome sequencing have also made rapid advances in recent years; indeed, genomic data acquisition is expected to grow exponentially over the next decade, exceeding other big data domains such as astronomy. [1]

In addition to more standard data types, such as images or phenotypic data from patients’ individual electronic health records (EHRs), we are now also able to access patient-generated health data from wearable and implantable devices. Currently, much of this information is collected for specific purposes, such as an oncology patient’s treatment plan, but we are starting to see the use of real-time sensors to collect continuous biometric data too. One of the main consequences of this shift is the ability to analyze data on health, treatment, and lifestyle as one combined data set.
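
To make that concrete, here is a minimal sketch of what such a combined data set could look like, using pandas to join hypothetical EHR, treatment, and wearable-sensor extracts on a shared patient identifier. The inline tables, column names, and values are all illustrative assumptions, not a prescribed schema.

```python
import pandas as pd

# Hypothetical extracts; identifiers, columns, and values are
# illustrative assumptions, not a prescribed schema.
ehr = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "diagnosis": ["T2D", "T2D", "HTN"],
})
treatment = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "drug": ["metformin", "metformin", "lisinopril"],
    "dose_mg": [500, 850, 10],
})
wearable = pd.DataFrame({
    "patient_id": [101, 101, 102],
    "mean_heart_rate": [72, 75, 68],
    "daily_steps": [8100, 9500, 4300],
})

# Summarize the continuous sensor stream per patient, then join
# health, treatment, and lifestyle data into one analysis-ready set.
lifestyle = (
    wearable.groupby("patient_id")[["mean_heart_rate", "daily_steps"]]
    .mean()
    .reset_index()
)
combined = ehr.merge(treatment, on="patient_id").merge(
    lifestyle, on="patient_id", how="left"
)
print(combined)
```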

The ways in which we can collect data are becoming more varied and sophisticated; however, collecting the data is just the first hurdle. Rigorous processing and analysis are needed so the data stands up to the levels of scrutiny surrounding drug development and healthcare.

Understanding analytics

As more data of more types is gleaned from more sources, it all needs to come together to support real-time processing and cleaning while a study is still being conducted. There is an urgent need for the life sciences research industry to better understand data science and to focus on the expert application of modern statistical techniques.
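
As a minimal sketch of what in-study cleaning might involve, the example below flags implausible values as records arrive so that queries can be raised while the study is still running. The field names and plausibility ranges are illustrative assumptions; a production system would apply validated edit checks with a full audit trail.

```python
import pandas as pd

# Hypothetical incoming visit records; field names and plausibility
# ranges are illustrative assumptions, not a validated edit-check spec.
records = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "systolic_bp": [118, 260, 131, -5],      # mmHg
    "weight_kg": [70.2, 81.0, 540.0, 65.5],
})

PLAUSIBLE = {
    "systolic_bp": (60, 250),
    "weight_kg": (30, 300),
}

def flag_queries(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per out-of-range value so data managers can
    raise queries while the study is still in progress."""
    queries = []
    for field, (lo, hi) in PLAUSIBLE.items():
        bad = df.loc[(df[field] < lo) | (df[field] > hi),
                     ["patient_id", field]]
        for _, row in bad.iterrows():
            queries.append({
                "patient_id": row["patient_id"],
                "field": field,
                "value": row[field],
                "reason": f"outside plausible range {lo}-{hi}",
            })
    return pd.DataFrame(queries)

print(flag_queries(records))
```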

These techniques are already used effectively in discovery-phase work such as genomics and image signal processing. However, we will see even more value when they are applied to refine treatments in the ‘real world’: in signal detection for safety surveillance, and in identifying subgroups of patients who may gain increased benefit from a treatment or who are at greatest risk of side effects.
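
To give a flavor of the statistics involved in safety surveillance, one widely used disproportionality measure over spontaneous-report counts is the proportional reporting ratio (PRR). The sketch below computes it from a single 2x2 table of hypothetical report counts; the numbers are invented for illustration, and real signal detection would add confidence intervals and case-count thresholds.

```python
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR for a drug-event pair from a 2x2 table of report counts.

    a: reports of the event of interest with the drug of interest
    b: reports of all other events with the drug of interest
    c: reports of the event of interest with all other drugs
    d: reports of all other events with all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Invented counts, for illustration only. A PRR well above 1 suggests
# the event is reported disproportionately often with this drug.
prr = proportional_reporting_ratio(a=30, b=970, c=120, d=98880)
print(f"PRR = {prr:.1f}")  # ≈ 24.8 here
```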

Adopting a new approach

The use of big data analytics and AI will become more mainstream in the months and years to come, and we will see it being used across the clinical research pipeline. This will enable greater collaboration between public health and clinical research, two fields that are currently divided. It is important to note, however, that we still need active experimentation to discern cause and effect, to separate the primary factors from noise variables, to distinguish efficacy from effectiveness, and to demonstrate scientific rigor. Any systems adopted must allow this much-needed collaboration while still meeting the ongoing requirements of successful trials.

Platforms that can take in real-world data and support data science and AI analysis with full data provenance for regulatory submission are vital to the success of our industry’s digital transformation. Much of the foundation is already in place; now it is time for us to grasp the opportunity with both hands.

Schedule a time to meet with us and find out how we can help you make the leap to better, future-proof clinical trials.

[1] Cirillo, D., & Valencia, A. (2019). Big data analytics for personalised medicine. Current Opinion in Biotechnology. https://doi.org/10.1016/j.copbio.2019.03.004
