A multimodal and temporal foundation model for virtual patient representations at healthcare system scale
Andrew Zhang, Tong Ding, Sophia J. Wagner, Caiwei Tian, Ming Y. Lu + 6 more
TLDR
Apollo is a multimodal temporal foundation model that integrates decades of diverse clinical data to create unified patient representations for advanced forecasting.
Key contributions
- Apollo is a multimodal temporal foundation model trained on 7.2M patients, 25B records, 28 modalities over 30 years.
- Learns unified patient representations by integrating over 100K unique medical events from a clinical vocabulary, together with images and clinical text.
- Evaluated on 322 prognosis and retrieval tasks, demonstrating generalized clinical forecasting of disease onset, progression, and treatment response.
- Functions as a multimodal medical search engine, enabling queries with text and images.
Why it matters
This paper introduces a foundational step towards computable medicine by creating unified, comprehensive patient representations from vast, multimodal clinical data. Apollo enables advanced clinical forecasting and multimodal search, making the full context of patient care accessible to computational reasoning. This could revolutionize personalized medicine and healthcare operations.
Original Abstract
Modern medicine generates vast multimodal data across siloed systems, yet no existing model integrates the full breadth and temporal depth of the clinical record into a unified patient representation. We introduce Apollo, a multimodal temporal foundation model trained and evaluated on over three decades of longitudinal hospital records from a major US hospital system, composed of 25 billion records from 7.2 million patients, representing 28 distinct medical modalities and 12 major medical specialties. Apollo learns a unified representation space integrating over 100 thousand unique medical events in our clinical vocabulary as well as images and clinical text. This "atlas of medical concepts" forms a computational substrate for modeling entire patient care journeys comprised of sequences of structured and unstructured events, which are compressed by Apollo into virtual patient representations. To assess the potential of these whole-patient representations, we created 322 prognosis and retrieval tasks from a held-out test set of 1.4 million patients. We demonstrate the generalized clinical forecasting potential of Apollo embeddings, including predicting new disease onset risk up to five years in advance (95 tasks), disease progression (78 tasks), treatment response (59 tasks), risk of treatment-related adverse events (17 tasks), and hospital operations endpoints (12 tasks). Using feature attribution techniques, we show that model predictions align with clinically-interpretable multimodal biomarkers. We evaluate semantic similarity search on 61 retrieval tasks, and moreover demonstrate the potential of Apollo as a multimodal medical search engine using text and image queries. Together, these modeling capabilities establish the foundation for computable medicine, where the full context of patient care becomes accessible to computational reasoning.
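The abstract describes compressing each patient's care journey into a single embedding and then using those embeddings for semantic similarity search. The paper does not publish its retrieval code, but the core mechanism it implies, nearest-neighbor search by cosine similarity over patient embeddings, can be sketched as follows (the function name, embedding dimensions, and toy data are all illustrative assumptions, not Apollo's actual interface):

```python
import numpy as np

def top_k_similar(query_emb, patient_embs, k=3):
    """Return indices of the k patients whose embeddings are most
    cosine-similar to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    P = patient_embs / np.linalg.norm(patient_embs, axis=1, keepdims=True)
    sims = P @ q  # cosine similarity of each patient against the query
    return np.argsort(sims)[::-1][:k]

# Toy example: 5 "patients" in a 4-dim embedding space. Real foundation-model
# embeddings would be learned and far higher-dimensional; this only
# illustrates the retrieval step.
rng = np.random.default_rng(0)
patients = rng.normal(size=(5, 4))
query = patients[2] + 0.01 * rng.normal(size=4)  # near-duplicate of patient 2
print(top_k_similar(query, patients, k=2))  # patient 2 ranks first
```

In a multimodal setting like the one the abstract describes, the query embedding could come from encoding text or an image into the same representation space, which is what makes text- and image-driven patient search possible.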