Longitudinal Mammogram Exam-Based Breast Cancer Diagnosis Models: Vulnerability to Adversarial Attacks
In breast cancer detection and diagnosis, longitudinal analysis of mammogram images is crucial. Contemporary models excel at detecting temporal changes in imaging features, thereby enhancing learning over sequential imaging exams. Yet the resilience of these longitudinal models against adversarial attacks remains underexplored. In this paper, we propose a novel black-box attack approach that capitalizes on the feature-level relationship between two sequential mammogram exams of a longitudinal model, guided by both cross-entropy loss and distance metric learning, to achieve significant attack efficacy via attack transfer in a black-box setting. We perform experiments on a cohort of 590 breast cancer patients (each with two sequential mammogram exams) in a case-control setting. Results show that our proposed method surpasses several state-of-the-art adversarial attacks in fooling the diagnosis models into producing opposite outputs. Our method remains effective even when the model is trained with existing defense mechanisms against adversarial attacks.
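The abstract describes crafting a perturbation on a surrogate model, using a loss that combines cross-entropy (toward the opposite diagnosis) with a distance term over the features of the two sequential exams, and then transferring that perturbation to the black-box target. The paper's actual models and loss weighting are not given here, so the following is only a minimal NumPy sketch under assumed components: `surrogate_model` (a toy linear-plus-tanh stand-in), `attack_loss`, and a finite-difference, FGSM-style step are all hypothetical names and choices, not the authors' implementation.

```python
import numpy as np

def surrogate_model(x_prev, x_curr, W):
    """Toy surrogate: shared feature extractor + logistic head scoring
    temporal change between the prior and current exam (illustrative)."""
    f_prev = np.tanh(x_prev @ W)          # features of prior exam
    f_curr = np.tanh(x_curr @ W)          # features of current exam
    logit = np.sum(f_curr - f_prev)       # crude temporal-change score
    prob = 1.0 / (1.0 + np.exp(-logit))   # surrogate P(malignant)
    return f_prev, f_curr, prob

def attack_loss(x_prev, x_curr, W, y_true, lam=0.5):
    """Combined objective: cross-entropy toward the *opposite* label plus a
    distance term tying the two exams' feature representations together."""
    f_prev, f_curr, p = surrogate_model(x_prev, x_curr, W)
    y_target = 1 - y_true                      # flip the diagnosis
    eps = 1e-9
    ce = -(y_target * np.log(p + eps) + (1 - y_target) * np.log(1 - p + eps))
    dist = np.mean((f_curr - f_prev) ** 2)     # feature-level relationship
    return ce + lam * dist

def craft_perturbation(x_prev, x_curr, W, y_true, step=0.01, delta=1e-4):
    """One FGSM-style step on the surrogate via finite-difference gradients;
    the resulting x_adv would then be fed to the black-box target model."""
    grad = np.zeros_like(x_curr)
    base = attack_loss(x_prev, x_curr, W, y_true)
    for i in range(x_curr.size):
        bumped = x_curr.copy()
        bumped[i] += delta
        grad[i] = (attack_loss(x_prev, bumped, W, y_true) - base) / delta
    return x_curr - step * np.sign(grad)       # descend the attack loss
```

In this sketch the distance term shrinks the apparent temporal change between exams, which is one plausible way a longitudinal model could be pushed toward the opposite output; the real method's metric-learning formulation may differ.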
Related Subject Headings
- Artificial Intelligence & Image Processing
- 46 Information and computing sciences