Security of neuromorphic systems: Challenges and solutions
With the rapid growth of big-data applications, advanced data processing technologies such as machine learning are widely adopted across many industries. Although these technologies offer powerful data analysis and processing capabilities, they raise security concerns that may expose the users and owners of such services to information security risks. In particular, the adoption of neuromorphic computing systems, which implement neural network and machine learning algorithms directly in hardware, creates a need to protect the data security of these systems. In this paper, we introduce the security concerns of learning-based applications and propose a secure neuromorphic system design that prevents potential attackers from replicating the learning model. By leveraging the drift effect of the memristor device, the computation accuracy of the designed neuromorphic computing system degrades quickly when proper authorization is not given.
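To illustrate the degradation mechanism described above, the following is a minimal Python/NumPy sketch of how uncompensated conductance drift could erode inference accuracy over time. The exponential-decay-plus-noise drift model, the rate constant tau, the noise scale sigma, and the toy linear classifier are all illustrative assumptions, not the device model or network used in the paper.

```python
# Minimal sketch: memristor conductance drift erodes inference accuracy.
# The drift model (exponential decay + accumulating device variation),
# tau, sigma, and the toy classifier are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier whose weights are stored as memristor conductances.
n_features, n_classes, n_samples = 20, 4, 500
W_trained = rng.normal(size=(n_features, n_classes))
X = rng.normal(size=(n_samples, n_features))
y = np.argmax(X @ W_trained, axis=1)  # labels defined by the trained weights

def accuracy(W):
    return float(np.mean(np.argmax(X @ W, axis=1) == y))

def drifted(W, t, tau=5.0, sigma=0.2):
    """Assumed drift model: the stored signal decays while device-to-device
    variation accumulates, so the effective signal-to-noise ratio falls."""
    noise = sigma * np.sqrt(t) * rng.normal(size=W.shape)
    return W * np.exp(-t / tau) + noise

# Without re-programming, accuracy decays toward chance (1 / n_classes).
for t in [0, 2, 5, 10, 20]:
    print(f"t = {t:2d}  accuracy = {accuracy(drifted(W_trained, t)):.2f}")
```

Under this reading of the abstract, authorized use would include whatever periodic compensation or re-programming keeps the stored conductances accurate, so only unauthorized copies of the model decay toward chance-level accuracy.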