Benchmarking Foundation Models with Multimodal Public Electronic Health Records
Foundation models have emerged as a powerful approach for processing electronic health records (EHRs), offering the flexibility to handle diverse medical data modalities. In this study, we present a comprehensive benchmark that evaluates the performance, fairness, and interpretability of foundation models, both as unimodal encoders and as multimodal learners, using the publicly available MIMIC-IV database. To support consistent and reproducible evaluation, we developed a standardized data processing pipeline that harmonizes heterogeneous clinical records into an analysis-ready format. We systematically compared twelve foundation models spanning unimodal and multimodal architectures as well as domain-specific and general-purpose variants. Our findings show that incorporating multiple data modalities generally improves predictive performance without introducing additional bias. While domain-specific fine-tuning is a cost-effective way to adapt unimodal foundation models, its benefits do not transfer well to multimodal settings. Our experiments also reveal limited task generalizability in current large vision-language models (LVLMs), underscoring the need for more versatile and robust medical LVLMs. Through this benchmark, we aim to support the development of effective and trustworthy multimodal artificial intelligence (AI) systems for real-world clinical applications.
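
The abstract describes the harmonization pipeline only at a high level. As a rough illustration of what turning heterogeneous MIMIC-IV tables into an analysis-ready, per-admission format can look like, the sketch below joins hospital admissions with aggregated lab values. It assumes the public MIMIC-IV CSV layout (subject_id/hadm_id keys, admissions and labevents tables); the function name, file paths, and feature choices are hypothetical and not the benchmark's actual pipeline.

```python
"""Minimal sketch of a MIMIC-IV harmonization step (illustrative only)."""
import pandas as pd


def build_admission_records(mimic_dir: str) -> pd.DataFrame:
    # Hospital admissions: one row per hospital stay.
    adm = pd.read_csv(
        f"{mimic_dir}/hosp/admissions.csv.gz",
        usecols=["subject_id", "hadm_id", "admittime", "dischtime",
                 "hospital_expire_flag"],
        parse_dates=["admittime", "dischtime"],
    )

    # Lab events: keep numeric values linked to an admission.
    # (In practice this table is large and would be read in chunks.)
    labs = pd.read_csv(
        f"{mimic_dir}/hosp/labevents.csv.gz",
        usecols=["hadm_id", "itemid", "valuenum"],
    )
    lab_summary = (
        labs.dropna(subset=["hadm_id", "valuenum"])
            .astype({"hadm_id": "int64"})
            .groupby(["hadm_id", "itemid"])["valuenum"]
            .mean()
            .unstack()                  # one column per lab item
            .add_prefix("lab_")
            .reset_index()
    )

    # Join modalities on the admission key: one analysis-ready row per stay.
    records = adm.merge(lab_summary, on="hadm_id", how="left")
    records["los_hours"] = (
        (records["dischtime"] - records["admittime"]).dt.total_seconds() / 3600
    )
    return records
```

In the same spirit, other modalities (clinical notes, imaging study references) could be attached by joining on the same subject_id/hadm_id keys, which is what makes a shared, standardized format useful for comparing unimodal and multimodal models.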