Attacking Sequential Learning Models with Style Transfer Based Adversarial Examples
In the field of deep neural network security, it has recently been found that non-sequential networks are vulnerable to adversarial examples. However, few studies have investigated adversarial attacks on sequential tasks. To this end, in this paper we propose a novel method for generating adversarial examples for sequential tasks. Specifically, an image style transfer method is used to generate adversarial examples for a Scene Text Recognition (STR) network that differ from the original image only in style. While these adversarial examples do not interfere with a human viewer's recognition of the image content, they significantly mislead the recognition results of sequential networks. Moreover, using a black-box attack in both digital and physical environments, we show that the proposed method can exploit cross text shape information and successfully attack the TPS-ResNet-BiLSTM-Attention (TRBA) and Convolutional Recurrent Neural Network (CRNN) models. Finally, we further demonstrate that physical adversarial examples can easily mislead commercial recognition algorithms, e.g. iFLYTEK and Youdao, suggesting that STR models are also highly vulnerable to attacks from adversarial examples.
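The abstract does not specify the exact style transfer network or black-box search strategy, so the following is only a minimal illustrative sketch of the overall pipeline it describes: restyle a text image while keeping its content, query a black-box recognizer, and keep the restyling that most degrades its confidence in the true transcription. The `adain_mix`, `ToyRecognizer`, and `blackbox_style_attack` names, the pixel-level AdaIN-style mixing, and the random search are all hypothetical stand-ins, not the authors' implementation.

```python
import torch


def adain_mix(content, style, alpha):
    """Crude style transfer: shift the content image's channel statistics toward
    the style image's (AdaIN applied directly to pixels for simplicity).
    alpha in [0, 1] controls style strength."""
    c_mean = content.mean((2, 3), keepdim=True)
    c_std = content.std((2, 3), keepdim=True) + 1e-6
    s_mean = style.mean((2, 3), keepdim=True)
    s_std = style.std((2, 3), keepdim=True) + 1e-6
    stylized = (content - c_mean) / c_std * s_std + s_mean
    return alpha * stylized + (1 - alpha) * content


class ToyRecognizer(torch.nn.Module):
    """Placeholder for a black-box STR model (e.g. CRNN or TRBA): only its output
    scores are queried, never its gradients."""

    def __init__(self, num_classes=37):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, num_classes, kernel_size=3, padding=1)

    def forward(self, x):
        feat = self.backbone(x)                   # (B, C, H, W)
        return feat.mean(dim=2).permute(0, 2, 1)  # per-column class scores (B, W, C)


@torch.no_grad()
def blackbox_style_attack(content, style, victim, label_ids, steps=30):
    """Score-based random search over the style strength: keep the alpha that most
    lowers the victim's average confidence in the ground-truth character ids."""
    best_alpha, best_conf = 0.0, float("inf")
    for _ in range(steps):
        alpha = torch.rand(1).item()
        adv = adain_mix(content, style, alpha).clamp(0, 1)
        probs = victim(adv).softmax(dim=-1)  # (B, W, C)
        # average probability the victim assigns to the true character sequence
        conf = probs[0, torch.arange(len(label_ids)), label_ids].mean().item()
        if conf < best_conf:
            best_alpha, best_conf = alpha, conf
    return adain_mix(content, style, best_alpha).clamp(0, 1), best_alpha, best_conf


if __name__ == "__main__":
    content = torch.rand(1, 3, 32, 100)      # placeholder scene-text image
    style = torch.rand(1, 3, 32, 100)        # placeholder style reference image
    label_ids = torch.randint(0, 37, (5,))   # placeholder ground-truth character ids
    victim = ToyRecognizer().eval()
    adv, alpha, conf = blackbox_style_attack(content, style, victim, label_ids)
    print(f"chosen style strength alpha={alpha:.2f}, residual confidence={conf:.3f}")
```

In the paper's setting the victim would be a real STR model behind a query interface, and the style transfer would preserve legibility for humans while flipping the predicted text; the sketch above only mirrors that query-and-select loop in miniature.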