CD2H Phase 2 Proposal
Project Title: Patient mortality prediction DREAM Challenge
Timothy Bergquist, email@example.com, University of Washington
Justin Guinney, firstname.lastname@example.org, Sage Bionetworks
Community challenges have a track record of stimulating innovative solutions to scientific questions. The Sage/CD2H DREAM challenges are bringing the DREAM challenge platform to the CTSAs to enable large-scale community challenges with Electronic Health Records while maintaining the privacy and security of the data. This first project will feature challenge questions around mortality and diagnosis prediction with UW OMOP data.
During phase 1, we laid the groundwork to enable DREAM challenges using data from various CTSA sites. In order to assess the feasibility of any given DREAM Challenge, we needed a tool to establish the quality of the data we were working with. We adapted an existing data quality evaluation tool, DQe-c, to the OMOP v5 data standard. This effort produced an open-source tool that evaluates the completeness of an OMOP v4 or v5 data repository.
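The kind of completeness check DQe-c performs can be illustrated with a minimal sketch: for each column of an OMOP-style table, compute the fraction of non-missing values. The `completeness` function and the toy `person` table below are hypothetical illustrations, not DQe-c's actual implementation; the column names follow the OMOP CDM v5 person table.

```python
import pandas as pd

def completeness(table: pd.DataFrame) -> pd.Series:
    """Return the fraction of non-null values in each column."""
    return table.notna().mean()

# Toy OMOP-style person table (column names from the OMOP CDM v5 spec).
person = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "gender_concept_id": [8507, 8532, None, 8507],
    "year_of_birth": [1980, None, None, 1975],
})

print(completeness(person))
```

A real completeness report would run this per table across the repository and flag required CDM fields whose completeness falls below a threshold.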
DREAM challenges are an instrumental tool for harnessing the wisdom of the broader scientific community to develop computational solutions to biomedical problems. While previous DREAM challenges have worked with complex biological data as well as sensitive medical data, running DREAM Challenges with Electronic Health Records presents unique complications. Previous challenges developed model-to-data techniques to maintain the privacy of the data, but ensuring that the EHR data meets a specific quality bar is also important for a challenge, and EHR data is more complicated than the data previously used in model-to-data approaches. We will use the OMOP data standard to standardize model development, but even with a standardized and well-documented dataset, complications can arise when facilitating model development and submission from multiple parties.
We will run a pilot DREAM challenge, asking participants to predict patient mortality and risk of 30-day readmission. We chose these questions because they are widely studied, models for them already exist (mitigating the need for access to training data), and they are highly tractable using EHR data.
In this next phase of the DREAM challenge, we will ask CTSAs and other non-CTSA collaborators to submit models that they have already developed to a pilot DREAM challenge to predict on the patient outcomes specified above. We will use this phase to test the quality of our UW OMOP repository. We will also use this phase to debug and stress test the DREAM challenge management infrastructure and to develop a set of best practices when running an EHR DREAM challenge.
The greatest benefit to the CTSA community is that – through these Challenges – we hope to lay the groundwork for federated model evaluation across CTSA hubs. One of the principal bottlenecks for algorithm development and deployment in the clinic is the lack of robust evaluation and metrics. By building an ecosystem whereby models can be developed and assessed on different EHR systems, we expect that CTSA sites will achieve a higher rate of translation of research models and algorithms into the clinic.
Expected outputs (6 months):
In the first 6 months, we expect to reach step 5 of the high-level steps above and to be accepting submissions from the participating parties.