Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors

Published

Conference Paper

© 2020 IEEE. Deep neural networks (DNNs) are notorious for their vulnerability to adversarial attacks, which are small perturbations added to their input images to mislead their predictions. Detection of adversarial examples is, therefore, a fundamental requirement for robust classification frameworks. In this work, we present a method for detecting such adversarial attacks, which is suitable for any pre-trained neural network classifier. We use influence functions to measure the impact of every training sample on the validation set data. From the influence scores, we find the most supportive training samples for any given validation example. A k-nearest neighbor (k-NN) model fitted on the DNN's activation layers is employed to search for the ranking of these supporting training samples. We observe that these samples are highly correlated with the nearest neighbors of normal inputs, while this correlation is much weaker for adversarial inputs. We train an adversarial detector using the k-NN ranks and distances and show that it successfully distinguishes adversarial examples, achieving state-of-the-art results against six attack methods across three datasets. Code is available at https://github.com/giladcohen/NNIF_adv_defense.
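The abstract describes the pipeline concretely enough to sketch in code. The following is a minimal, hypothetical Python sketch, not the authors' released implementation (see the repository linked above): it assumes the influence scores have already been computed, represents each input by the k-NN ranks and distances of its most supportive training samples in activation space, and uses plain logistic regression as a stand-in detector. All function names, array shapes, and defaults (m, k) are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def nnif_features(influence_scores, train_acts, input_act, m=50, k=200):
        # influence_scores: (n_train,) influence of each training sample on
        #                   this input (assumed precomputed elsewhere)
        # train_acts:       (n_train, d) DNN activations of the training set
        # input_act:        (d,) activation of the input under test
        # Most supportive training samples for this input, ranked by influence.
        supportive = np.argsort(influence_scores)[::-1][:m]

        # k-NN search in the DNN's activation space, fitted on the training
        # activations. (In practice the index would be built once, not per input.)
        nn = NearestNeighbors(n_neighbors=k).fit(train_acts)
        dists, idx = nn.kneighbors(input_act[None, :])
        rank_of = {j: r for r, j in enumerate(idx[0])}

        # Feature vector: the k-NN rank and distance of each supportive sample;
        # samples outside the k nearest get the worst rank/distance as a fallback.
        ranks = np.array([rank_of.get(j, k) for j in supportive], dtype=float)
        d = np.array([dists[0][rank_of[j]] if j in rank_of else dists[0][-1]
                      for j in supportive])
        return np.concatenate([ranks, d])

    def fit_detector(feats_normal, feats_adv):
        # Simple stand-in detector over the rank/distance features:
        # rows of nnif_features(...) for normal (label 0) and adversarial
        # (label 1) validation inputs.
        X = np.vstack([feats_normal, feats_adv])
        y = np.r_[np.zeros(len(feats_normal)), np.ones(len(feats_adv))]
        return LogisticRegression(max_iter=1000).fit(X, y)

The rank-and-distance features encode exactly the correlation the abstract reports: for normal inputs the influence-ranked supporters tend to appear among the nearest neighbors in activation space (low ranks, small distances), while for adversarial inputs they largely do not, which is the signal the detector learns to separate.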

Cited Authors

  • Cohen, G; Sapiro, G; Giryes, R

Published Date

  • January 1, 2020

Published In

  • Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2020)

Start / End Page

  • 14441 - 14450

International Standard Serial Number (ISSN)

  • 1063-6919

Digital Object Identifier (DOI)

  • 10.1109/CVPR42600.2020.01446

Citation Source

  • Scopus