DPatch: An adversarial patch attack on object detectors
Object detectors have emerged as an indispensable module in modern computer vision systems. In this work, we propose DPATCH, a black-box adversarial-patch-based attack against mainstream object detectors (i.e., Faster R-CNN and YOLO). Unlike the original adversarial patch, which only manipulates image-level classifiers, our DPATCH simultaneously attacks the bounding box regression and object classification so as to disable their predictions. Compared to prior works, DPATCH has several appealing properties: (1) DPATCH can perform effective untargeted and targeted attacks, degrading the mAP of Faster R-CNN and YOLO from 75.10% and 65.7%, respectively, down to below 1%; (2) DPATCH is small in size and its attacking effect is location-independent, making it very practical for implementing real-world attacks; (3) DPATCH demonstrates great transferability among different detectors as well as training datasets. For example, a DPATCH that is trained on Faster R-CNN can effectively attack YOLO, and vice versa. Extensive evaluations imply that DPATCH can perform effective attacks under a black-box setup, i.e., even without knowledge of the attacked network's architecture and parameters. The successful realization of DPATCH also illustrates the intrinsic vulnerability of modern detector architectures to such patch-based adversarial attacks.
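The location-independence property described above arises from how the patch is applied during training: at each step the patch is pasted onto the input image at a random position before the detector's loss is computed. The helper below is a minimal NumPy sketch of that placement step only (the function name `apply_patch` and its interface are illustrative assumptions, not the authors' code; the full attack would further backpropagate the detector's classification and regression losses into the patch pixels).

```python
import numpy as np

def apply_patch(image, patch, rng=None):
    """Paste an adversarial patch onto an image at a random location.

    `image` and `patch` are H x W x C float arrays. Random placement is
    what makes the trained patch's attacking effect location-independent.
    (Hypothetical helper illustrating the setup, not the authors' code.)
    """
    img = image.copy()
    ih, iw, _ = img.shape
    ph, pw, _ = patch.shape
    if rng is None:
        rng = np.random.default_rng()
    # Sample a top-left corner so the patch lies fully inside the image.
    y = rng.integers(0, ih - ph + 1)
    x = rng.integers(0, iw - pw + 1)
    # Overwrite the covered pixels with the patch.
    img[y:y + ph, x:x + pw, :] = patch
    return img
```

In a DPATCH-style training loop, the patched image would then be fed to the (frozen) detector, and the gradient of the detection loss with respect to the patch region would update the patch itself.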
Related Subject Headings
- 4609 Information systems