Security of neuromorphic systems: Challenges and solutions

Conference Paper

With the rapid growth of big-data applications, advanced data-processing technologies such as machine learning are being widely adopted across many industries. Although these technologies offer powerful data analysis and processing capabilities, they also raise security concerns that may expose the users and owners of such services to information-security risks. In particular, the adoption of neuromorphic computing systems, which implement neural network and machine learning algorithms in hardware, creates a need to protect the data security of those systems. In this paper, we introduce the security concerns of learning-based applications and propose a secure neuromorphic system design that prevents potential attackers from replicating the learning model. Our results show that, by leveraging the drifting effect of the memristor device, the computation accuracy of the designed neuromorphic computing system quickly degrades when proper authorization is not given.
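The protection mechanism described in the abstract can be illustrated with a toy simulation. The following sketch is purely hypothetical and is not the paper's implementation: model weights are treated as memristor conductances that drift toward a common baseline conductance over time, and only an "authorized" user periodically reprograms (refreshes) the cells. All names, the drift model, and the classifier are illustrative assumptions.

```python
import random

random.seed(0)

def make_data(n=200):
    """Toy linearly separable data: label 1 iff x0 + x1 > 1."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    return [((x0, x1), int(x0 + x1 > 1.0)) for x0, x1 in pts]

TRUE_W = [1.0, 1.0, -1.0]   # programmed conductances (w0, w1, bias)
BASELINE = 0.5              # conductance every cell drifts toward

def predict(w, x):
    return int(w[0] * x[0] + w[1] * x[1] + w[2] > 0)

def accuracy(w, data):
    return sum(predict(w, x) == y for x, y in data) / len(data)

def drift_step(w, rate=0.15):
    """One time step of drift: each cell relaxes toward BASELINE."""
    return [wi + rate * (BASELINE - wi) for wi in w]

data = make_data()

# Freshly programmed device: the toy task is separated exactly.
print("fresh accuracy:", accuracy(TRUE_W, data))  # 1.0 on this toy task

# Unauthorized use: no refresh, so drift accumulates and the
# decision boundary collapses toward a constant output.
w_drift = list(TRUE_W)
for _ in range(30):
    w_drift = drift_step(w_drift)
print("drifted accuracy:", round(accuracy(w_drift, data), 2))  # near chance

# Authorized use: periodic reprogramming restores the weights.
w_auth = list(TRUE_W)
for t in range(30):
    w_auth = drift_step(w_auth)
    if (t + 1) % 5 == 0:
        w_auth = list(TRUE_W)  # authorized refresh
print("refreshed accuracy:", accuracy(w_auth, data))  # 1.0 again
```

In this simplified model, uniform relaxation toward a shared baseline erases the weight differences that encode the model, so accuracy falls toward chance unless the device is periodically reprogrammed by an authorized party.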

Cited Authors

  • Liu, B; Yang, C; Li, H; Chen, Y; Wu, Q; Barnell, M

Published Date

  • July 29, 2016

Published In

  • 2016 IEEE International Symposium on Circuits and Systems (ISCAS)

Volume / Issue

  • 2016-July /

Start / End Page

  • 1326 - 1329

International Standard Serial Number (ISSN)

  • 0271-4310

International Standard Book Number 13 (ISBN-13)

  • 9781479953400

Digital Object Identifier (DOI)

  • 10.1109/ISCAS.2016.7527493

Citation Source

  • Scopus