FAIM: Fairness-aware interpretable modeling for trustworthy machine learning in healthcare.
The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness. We propose an interpretable framework, fairness-aware interpretable modeling (FAIM), to improve model fairness without compromising performance, featuring an interactive interface to identify a "fairer" model from a set of high-performing models and promoting the integration of data-driven evidence and clinical expertise to enhance contextualized fairness. We demonstrate FAIM's value in reducing intersectional biases arising from race and sex by predicting hospital admission with two real-world databases: the Medical Information Mart for Intensive Care IV Emergency Department (MIMIC-IV-ED) and a database collected from the Singapore General Hospital Emergency Department (SGH-ED). For both datasets, FAIM models not only exhibit satisfactory discriminatory performance but also significantly mitigate biases as measured by well-established fairness metrics, outperforming commonly used bias mitigation methods. Our approach demonstrates the feasibility of improving fairness without sacrificing performance and provides a modeling workflow that invites domain experts to engage, fostering a multidisciplinary effort toward tailored AI fairness.
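To make the core idea concrete, the sketch below illustrates one way to select a "fairer" model from a set of high-performing candidates: score each candidate on discrimination (AUROC) and on an intersectional fairness gap (here, the spread in true-positive rates across race-by-sex subgroups), then pick the model with the smallest gap among those within a small performance tolerance of the best. This is a minimal illustration under assumed conventions, not the authors' FAIM implementation; the function names (`equal_opportunity_gap`, `select_fairer_model`) and the `auc_tolerance` parameter are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in true-positive rate across intersectional groups.

    `groups` is an array of subgroup labels, e.g. "Black_female", "White_male".
    """
    tprs = []
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum() > 0:
            tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)


def select_fairer_model(models, X, y, groups, auc_tolerance=0.01):
    """Among models within `auc_tolerance` of the best AUROC, return the one
    with the smallest equal-opportunity gap across race-by-sex subgroups.
    (Illustrative only; actual FAIM selection is interactive and expert-guided.)
    """
    scored = []
    for model in models:
        prob = model.predict_proba(X)[:, 1]
        auc = roc_auc_score(y, prob)
        gap = equal_opportunity_gap(y, (prob >= 0.5).astype(int), groups)
        scored.append((model, auc, gap))
    best_auc = max(auc for _, auc, _ in scored)
    near_optimal = [s for s in scored if s[1] >= best_auc - auc_tolerance]
    return min(near_optimal, key=lambda s: s[2])[0]
```

In practice, the tolerance defines the set of "high-performing" candidates, and the fairness criterion (equal opportunity here) would be chosen with domain experts to match the clinical context, consistent with the contextualized-fairness aim described above.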
Related Subject Headings
- 4905 Statistics
- 4611 Machine learning
- 4603 Computer vision and multimedia computation