Thwarting Replication Attack against Memristor-based Neuromorphic Computing System

Published

Journal Article

Neuromorphic architectures are widely used in many applications for advanced data processing and often implement proprietary algorithms. However, in an adversarial scenario, such systems may face elaborate security attacks, including learning attacks. In this work, we prevent an attacker with physical access from learning the proprietary algorithm implemented by the neuromorphic hardware. For this purpose, we leverage the obsolescence effect in memristors to judiciously reduce the accuracy of outputs for any unauthorized user. For a legitimate user, we regulate the obsolescence effect, thereby keeping the accuracy of outputs within a suitable range. We extensively examine the feasibility of our proposed method on four datasets. We experiment under different settings, such as activation functions, and constraints, such as process variations, and estimate the calibration overhead. The security vs. cost and performance vs. resistance-range trade-offs for different applications are also analyzed. We then prove that the defense remains valid even if the attacker has prior knowledge of the defense mechanism. Overall, our methodology is compatible with mainstream classification applications, memristor devices, and security and performance constraints.
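The defense idea in the abstract can be illustrated with a small simulation. The sketch below is illustrative only, not the paper's implementation: it maps a toy single-layer classifier onto a memristor crossbar modeled as a conductance matrix, emulates the obsolescence effect as per-cell conductance decay (with lognormal rate variation standing in for process variation), and contrasts an unauthorized user, whose accuracy degrades under heavy unregulated use, with a legitimate user whose devices are periodically recalibrated. All names, the drift model, and its parameters are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of the obsolescence-based defense (illustrative, not the
# paper's model): a single-layer classifier is mapped onto a memristor
# crossbar, and each cell's conductance decays as the array is read.

rng = np.random.default_rng(0)

n_inputs, n_classes = 16, 2
G_target = rng.uniform(0.1, 1.0, size=(n_inputs, n_classes))  # programmed conductances
X = rng.uniform(0.0, 1.0, size=(200, n_inputs))               # toy input patterns

def infer(x, G):
    """Crossbar inference: column currents followed by winner-take-all."""
    return np.argmax(x @ G, axis=1)

def obsolescence(G, n_reads, mean_rate=1e-3, sigma=0.5):
    """Assumed drift model: each cell decays at its own lognormally
    distributed rate, so heavy use scrambles the relative weights."""
    rates = rng.lognormal(mean=np.log(mean_rate), sigma=sigma, size=G.shape)
    return G * np.exp(-rates * n_reads)

y_fresh = infer(X, G_target)  # outputs of freshly calibrated devices

# Unauthorized user: many queries, no calibration -> outputs drift away
# from the fresh-device behavior the attacker is trying to replicate.
G_attacker = obsolescence(G_target, n_reads=5000)
acc_attacker = np.mean(infer(X, G_attacker) == y_fresh)

# Legitimate user: periodic calibration reprograms each cell back to its
# target conductance, restoring the intended accuracy.
G_user = G_target.copy()
acc_user = np.mean(infer(X, G_user) == y_fresh)

print(f"unauthorized accuracy: {acc_attacker:.2f}")
print(f"calibrated accuracy:   {acc_user:.2f}")
```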

Cited Authors

  • Yang, C; Liu, B; Li, H; Chen, Y; Barnell, M; Wu, Q; Wen, W; Rajendran, J

Published Date

  • January 1, 2019

Published In

  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Electronic International Standard Serial Number (EISSN)

  • 1937-4151

International Standard Serial Number (ISSN)

  • 0278-0070

Digital Object Identifier (DOI)

  • 10.1109/TCAD.2019.2937817

Citation Source

  • Scopus