Interpretable Deep Learning Models for Better Clinician-AI Communication in Clinical Mammography

Publication, Conference
Barnett, AJ; Sharma, V; Gajjar, N; Fang, J; Schwartz, FR; Chen, C; Lo, JY; Rudin, C
Published in: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
January 1, 2022

There is increasing interest in using deep learning and computer vision to help guide clinical decisions, such as whether to order a biopsy based on a mammogram. Existing networks are typically black boxes, unable to explain how they make their predictions. We present an interpretable deep learning network that explains its predictions in terms of the BI-RADS features mass shape and mass margin. Our model predicts mass margin and mass shape, then uses the logits from those interpretable models to predict malignancy, also using an interpretable model. The interpretable mass margin model explains its predictions using a prototypical parts model. The interpretable mass shape model predicts segmentations, fits an ellipse, and then determines shape based on the goodness of fit and eccentricity of the fitted ellipse. While including mass shape logits in the malignancy prediction model did not improve performance, we present this technique as part of a framework for better clinician-AI communication. © 2022 SPIE.
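The abstract describes a two-stage design: interpretable mass-margin and mass-shape models produce features (prototypical-part logits; ellipse eccentricity and goodness of fit) that feed a final interpretable malignancy classifier. The sketch below is illustrative only and not the authors' implementation; it assumes OpenCV for the ellipse fit and scikit-learn for the final linear classifier, and all function and variable names are hypothetical.

import numpy as np
import cv2
from sklearn.linear_model import LogisticRegression


def shape_features_from_mask(mask: np.ndarray) -> np.ndarray:
    """Fit an ellipse to a binary lesion segmentation and return
    [eccentricity, goodness_of_fit] as interpretable shape features."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.array([0.0, 0.0])
    contour = max(contours, key=cv2.contourArea)
    if len(contour) < 5:  # cv2.fitEllipse needs at least 5 points
        return np.array([0.0, 0.0])
    (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(contour)
    major, minor = max(ax1, ax2), min(ax1, ax2)
    eccentricity = float(np.sqrt(1.0 - (minor / major) ** 2)) if major > 0 else 0.0

    # One possible goodness-of-fit measure (an assumption, not the paper's):
    # IoU between the predicted segmentation and the filled fitted ellipse.
    ellipse_mask = np.zeros_like(mask, dtype=np.uint8)
    cv2.ellipse(ellipse_mask, ((cx, cy), (ax1, ax2), angle), 1, thickness=-1)
    inter = np.logical_and(mask > 0, ellipse_mask > 0).sum()
    union = np.logical_or(mask > 0, ellipse_mask > 0).sum()
    iou = float(inter) / float(union) if union > 0 else 0.0
    return np.array([eccentricity, iou])


def fit_malignancy_model(margin_logits: np.ndarray,
                         shape_feats: np.ndarray,
                         labels: np.ndarray) -> LogisticRegression:
    """Concatenate mass-margin logits (e.g., from a prototypical-parts network)
    with the shape features and fit a simple, interpretable linear classifier."""
    X = np.hstack([margin_logits, shape_feats])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf

A linear model over a handful of named BI-RADS-style features keeps the malignancy decision directly auditable by a radiologist, which is the communication goal the abstract emphasizes.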

Published In

Progress in Biomedical Optics and Imaging - Proceedings of SPIE

DOI

10.1117/12.2612372

ISSN

1605-7422

ISBN

9781510649453

Publication Date

January 1, 2022

Volume

12035

Citation

APA: Barnett, A. J., Sharma, V., Gajjar, N., Fang, J., Schwartz, F. R., Chen, C., … Rudin, C. (2022). Interpretable Deep Learning Models for Better Clinician-AI Communication in Clinical Mammography. In Progress in Biomedical Optics and Imaging - Proceedings of SPIE (Vol. 12035). https://doi.org/10.1117/12.2612372

Chicago: Barnett, A. J., V. Sharma, N. Gajjar, J. Fang, F. R. Schwartz, C. Chen, J. Y. Lo, and C. Rudin. “Interpretable Deep Learning Models for Better Clinician-AI Communication in Clinical Mammography.” In Progress in Biomedical Optics and Imaging - Proceedings of SPIE, Vol. 12035, 2022. https://doi.org/10.1117/12.2612372.

ICMJE: Barnett AJ, Sharma V, Gajjar N, Fang J, Schwartz FR, Chen C, et al. Interpretable Deep Learning Models for Better Clinician-AI Communication in Clinical Mammography. In: Progress in Biomedical Optics and Imaging - Proceedings of SPIE. 2022.

MLA: Barnett, A. J., et al. “Interpretable Deep Learning Models for Better Clinician-AI Communication in Clinical Mammography.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 12035, 2022. Scopus, doi:10.1117/12.2612372.

NLM: Barnett AJ, Sharma V, Gajjar N, Fang J, Schwartz FR, Chen C, Lo JY, Rudin C. Interpretable Deep Learning Models for Better Clinician-AI Communication in Clinical Mammography. Progress in Biomedical Optics and Imaging - Proceedings of SPIE. 2022.
