Snooping attacks on deep reinforcement learning

Publication, Conference
Inkawhich, M; Chen, Y; Li, H
Published in: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
January 1, 2020

Adversarial attacks have exposed a significant security vulnerability in state-of-the-art machine learning models, including deep reinforcement learning agents. Existing methods for attacking reinforcement learning agents assume the adversary has access either to the target agent's learned parameters or to the environment the agent interacts with. In this work, we propose a new class of threat models, called snooping threat models, that are unique to reinforcement learning. In these snooping threat models, the adversary cannot interact with the target agent's environment and can only eavesdrop on the action and reward signals exchanged between the agent and its environment. We show that adversaries operating in these highly constrained threat models can still launch devastating attacks against the target agent by training proxy models on related tasks and leveraging the transferability of adversarial examples.
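The transferability property the abstract relies on can be illustrated with a toy example: an adversarial perturbation crafted against a proxy policy can also change the decision of a correlated target policy the attacker never queries. The sketch below is a hypothetical illustration only; every name, weight, and number is an assumption for the demo, not the authors' method, and the "policies" are tiny linear models rather than trained deep RL agents.

```python
import random

# Hypothetical sketch of adversarial-example transferability, the
# mechanism a snooping adversary exploits. All values here are
# assumptions for illustration, not the paper's actual models.

random.seed(0)
DIM = 8  # observation dimensionality for the toy example

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def logits(w, obs):
    """Tiny linear 'policy': one score per action (two actions)."""
    return [dot(row, obs) for row in w]

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

# Target policy (never seen by the adversary) and a correlated proxy
# the eavesdropper trains on a related task.
w_target = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(2)]
w_proxy = [[t + 0.1 * random.gauss(0, 1) for t in row] for row in w_target]

obs = [random.gauss(0, 1) for _ in range(DIM)]

def fgsm(w, obs, eps):
    """FGSM-style step on the logit margin: nudge the observation
    toward the proxy's non-chosen action along the gradient's sign."""
    a = argmax(logits(w, obs))
    grad = [w[1 - a][i] - w[a][i] for i in range(DIM)]  # d(other - chosen)/d obs
    return [o + eps * (1 if g > 0 else -1) for o, g in zip(obs, grad)]

adv_obs = fgsm(w_proxy, obs, eps=0.5)

# If transfer succeeds, the target's chosen action changes even though
# the perturbation was computed using only the proxy's weights.
print(argmax(logits(w_target, obs)), argmax(logits(w_target, adv_obs)))
```

Because the proxy and target were trained on related objectives, their decision boundaries are correlated, so a gradient-sign perturbation against the proxy often carries over to the target, which is why the snooping adversary never needs the target's parameters or its environment.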

Published In

Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS

EISSN

1558-2914

ISSN

1548-8403

ISBN

9781450375184

Publication Date

January 1, 2020

Volume

2020-May

Start / End Page

557 / 565

Citation

APA: Inkawhich, M., Chen, Y., & Li, H. (2020). Snooping attacks on deep reinforcement learning. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS (Vol. 2020-May, pp. 557–565).

Chicago: Inkawhich, M., Y. Chen, and H. Li. "Snooping attacks on deep reinforcement learning." In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, 2020-May:557–65, 2020.

ICMJE: Inkawhich M, Chen Y, Li H. Snooping attacks on deep reinforcement learning. In: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS. 2020. p. 557–65.

MLA: Inkawhich, M., et al. "Snooping attacks on deep reinforcement learning." Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, vol. 2020-May, 2020, pp. 557–65.

NLM: Inkawhich M, Chen Y, Li H. Snooping attacks on deep reinforcement learning. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS. 2020. p. 557–565.