Toward a Distributed Benchmarking Ecosystem Using Electronic Healthcare Data

The deployment and use of artificial intelligence (AI) in healthcare is regarded by many as a fait accompli, despite few examples of success. As with most technological movements, hype precedes results, and this certainly holds true for clinical AI. However, given the enormous inefficiencies and costs within today’s healthcare system, the question is not whether AI will impact healthcare, but when.

Part of the allure of AI is the desire of healthcare institutions to move away from a rules-based approach to clinical care toward a data-driven model. However, when translating tools and algorithms into the clinic, the ability to rapidly and robustly verify an algorithm's performance characteristics is a major bottleneck. Three barriers, among many, stand out as ones we at Sage Bionetworks frequently encounter and have developed strategies to address:

  1. Inability to access data. Many clinical algorithms are developed within a host site's EHR system, using site-specific data. Given the sensitivity of electronic healthcare data, it is difficult to gain access to outside datasets to verify performance in other contexts, thereby limiting the ability to assess an algorithm's generalizability.
  2. Bias in evaluation. In most cases, an algorithm is evaluated by the same person who developed it, a circumstance known as the “self-assessment trap.” This tends to produce highly biased performance measurements and results that often do not hold up under closer scrutiny.
  3. Irreproducibility. Many clinical algorithms are poorly documented and difficult for third parties to use. The problem of reproducibility in biomedical science has been well described and is especially acute when translating research into clinical practice.

New DREAM Challenge

To overcome these challenges, Sage Bionetworks has partnered with the NCATS Clinical Data To Health (CD2H) program and the University of Washington to launch the Electronic Health Record (EHR) DREAM Challenge. This Challenge is a first-of-its-kind demonstration of using electronic healthcare patient data to prospectively benchmark AI algorithms in a community challenge. As healthcare institutions move increasingly toward quantitative, data-driven decision making, there is a concomitant need to objectively assess and report the performance and generalizability of clinical algorithms. In the EHR DREAM Challenge, we intend to show how governance, technology, and community engagement can be combined to robustly assess AI within the healthcare system.

In this first of a series of EHR-related Challenges, we are asking participants to predict patient mortality within six months of a patient's last hospital visit. The data host for this Challenge is the University of Washington Medical System, which has prepared a curated dataset from its EHR enterprise data warehouse. The data span 10 years (2009-2019) and include 1.3 million patients, 22 million visits, 33 million procedures, 5 million drug exposure records, 48 million condition records, and over 10 million additional records.
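
To make the prediction task concrete, the sketch below shows one way such a six-month mortality label could be derived from visit and death records. This is a minimal illustration, not the Challenge's actual schema: the file names (visits.csv, death.csv) and columns (person_id, visit_date, death_date) are hypothetical stand-ins.

```python
import pandas as pd

# Hypothetical file and column names for illustration only; the actual
# Challenge data schema may differ.
visits = pd.read_csv("visits.csv", parse_dates=["visit_date"])  # one row per visit
deaths = pd.read_csv("death.csv", parse_dates=["death_date"])   # one row per deceased patient

# Find each patient's last recorded hospital visit.
last_visit = visits.groupby("person_id")["visit_date"].max().reset_index()

# Attach death dates; patients with no death record get NaT.
labels = last_visit.merge(deaths, on="person_id", how="left")

# Positive label: death on or after the last visit, within ~six months (180 days).
window = pd.Timedelta(days=180)
labels["died_within_6mo"] = (
    labels["death_date"].notna()
    & (labels["death_date"] >= labels["visit_date"])
    & (labels["death_date"] - labels["visit_date"] <= window)
).astype(int)

print(labels[["person_id", "died_within_6mo"]].head())
```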

Given the highly sensitive nature of EHR data and the associated risk of re-identification, we are using the model-to-data approach for receiving and evaluating submitted algorithms. Participants never have direct access to the data, which preserves the data's integrity, privacy, and security. Instead, participants submit their algorithm in the form of a Docker container, which is executed on their behalf within a secure and private cloud. Performance results of the algorithms are reported back to participants via a leaderboard, providing objective benchmarking and reporting of results to the research community.
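
To give a feel for what a model-to-data submission might look like, here is a minimal sketch of a container entry point. The contract it assumes is illustrative only: data mounted read-only at /data, predictions written to /output/predictions.csv, and hypothetical file and column names; the actual submission specification is defined by the Challenge organizers.

```python
# predict.py - hypothetical entry point for a containerized submission.
# Assumed (illustrative) contract: patient data is mounted at /data, and
# the container must write /output/predictions.csv containing one score
# per patient: the predicted probability of death within six months.
import pandas as pd

def main():
    # Load whichever mounted tables the model needs (names assumed).
    visits = pd.read_csv("/data/visits.csv", parse_dates=["visit_date"])

    # Toy baseline: score each patient by visit count squashed into [0, 1].
    # A real submission would load a trained model and engineered features here.
    counts = visits.groupby("person_id").size()
    scores = counts / (counts + 10.0)

    scores.rename("score").reset_index().to_csv(
        "/output/predictions.csv", index=False
    )

if __name__ == "__main__":
    main()
```

A matching Dockerfile would only need to install the dependencies, copy predict.py into the image, and set it as the entry point.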

Broader Vision

We are approaching this Challenge as the first step toward a larger goal: developing an ecosystem for evaluating algorithms on EHR data across a secure, distributed network of healthcare providers (Fig 1). We are extending the model-to-data framework for simpler deployment across multiple cloud and on-premises compute environments. In these Challenges, we will ask participants to predict patient or population outcomes.

While we are starting with patient mortality, planning is already under way for future EHR Challenges that will address pressing clinical issues such as sepsis, cardiovascular disease, and patient readmission. We also intend to expand to additional data modalities, such as genetics and imaging, as we demonstrated with the Digital Mammography DREAM Challenge (in partnership with Kaiser Permanente).

In developing a robust evaluation network, we hope this framework will accelerate the use of AI in healthcare and, more importantly, its benefits to patients. The EHR DREAM Challenge is a first step toward this goal.



Justin Guinney is the Vice President of the Computational Oncology group at Sage Bionetworks. His group comprises specialists from multiple domains, including molecular biology, computer science, and oncology, and focuses on developing computational models for optimizing patient diagnosis, prognosis, and treatment in cancer. Dr. Guinney is an expert in the large-scale analysis of genomic data and works regularly with clinicians to link these models to complex cancer phenotypes.