Markov Chain Based Efficient Defense Against Adversarial Examples in Computer Vision

Journal Article

Adversarial examples are inputs to machine learning models that cause erroneous outputs; they are usually generated from normal inputs via subtle modifications and appear unchanged to human observers. They severely threaten the deployment of machine learning, especially in areas with high security requirements. Unfortunately, despite increasing attention and discussion, there is neither an unambiguous explanation of their causes nor a universal defense against them. Based on the distinctive statistical features of Markov chains, an effective defense method is proposed in this paper by exploring the differences in the probability distributions of adjacent pixels between normal images and adversarial examples. Specifically, the concept of the overall probability value (OPV) is defined to estimate the modification applied to an input, which can be used to preliminarily determine whether the input is an adversarial example. Furthermore, by calculating the OPV of an input and modifying its pixel values to destroy the potential adversarial characteristics, the proposed method can efficiently purify adversarial examples. A series of experiments demonstrates the effectiveness of the defense method. When facing various attacks, it achieves accuracy above 92% on MNIST and above 70% on ImageNet.
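The detection idea in the abstract, scoring an image by how likely its adjacent-pixel transitions are under a Markov model fitted on clean images, can be sketched as follows. This is an illustrative surrogate, not the paper's exact OPV definition: the quantization level, the choice of horizontal neighbors only, and the log-probability averaging are all assumptions made here for a self-contained example.

```python
import numpy as np

def fit_transition_matrix(images, levels=16):
    """Estimate a first-order Markov transition matrix over quantized
    horizontal pixel-neighbor pairs from clean grayscale images in [0, 1]."""
    counts = np.ones((levels, levels))  # Laplace smoothing avoids zero rows
    for img in images:
        q = np.clip((img * levels).astype(int), 0, levels - 1)
        # Accumulate counts of (left pixel -> right pixel) transitions.
        np.add.at(counts, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return counts / counts.sum(axis=1, keepdims=True)

def opv_score(img, P, levels=16):
    """Average log-probability of the image's adjacent-pixel transitions.
    A hypothetical stand-in for the paper's OPV: adversarial perturbations
    tend to introduce unlikely transitions and lower this score."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    return np.log(P[q[:, :-1], q[:, 1:]]).mean()
```

An input whose score falls below a threshold calibrated on clean data would be flagged as a candidate adversarial example; purification would then perturb pixel values (e.g., local smoothing) to raise the score before classification.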

Cited Authors

  • Zhou, Y; Hu, X; Wang, L; Duan, S; Chen, Y

Published Date

  • January 1, 2019

Published In

  • IEEE Access

Volume / Issue

  • 7 /

Start / End Page

  • 5695 - 5706

Electronic International Standard Serial Number (EISSN)

  • 2169-3536

Digital Object Identifier (DOI)

  • 10.1109/ACCESS.2018.2889409

Citation Source

  • Scopus