Research Progress on Memristor: From Synapses to Computing Systems

Journal Article

As the limits of transistor technology are approached, feature sizes in integrated circuits have shrunk very near the minimum physically realizable channel length, and it has become increasingly difficult to meet the expectations outlined by Moore's law. As one of the most promising devices to replace transistors, memristors have many excellent properties that can be leveraged to develop new types of neural and non-von Neumann computing systems, which are expected to revolutionize information-processing technology. This survey provides a comparative overview of research progress on memristors. Memristor synaptic devices are classified according to their stimulation patterns, and the working mechanisms of these various synaptic devices are analyzed in detail. Crossbar-based memristors have demonstrated advantages in physically executing vector-matrix multiplication, enabling highly power-efficient and area-efficient neuromorphic system designs. Applications of memristor crossbars extend to in-memory logic, vector-matrix multiplication, and many other fundamental computing operations. Furthermore, memristor-based architectures for efficient neural network training and inference have been studied. However, memristors exhibit non-ideal properties due to programming inaccuracies and device imperfections from fabrication, which lead to errors and mismatches in computed results. To build reliable memristor-based designs, circuit-level, algorithm-level, and system-level solutions to memristor reliability issues are being studied. To this end, state-of-the-art realizations of memristor crossbars, crossbar-based designs, and peripheral circuitry are presented, which show both promising full-system inference accuracy and excellent power efficiency across multiple tasks.
Memristor in-situ learning benefits from high energy efficiency and biologically imitative characteristics, which facilitate further hardware acceleration of cognitive learning. At present, however, the learning and training processes of brain-like networks are complex, presenting great challenges for network design and implementation.
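The abstract's central computational idea can be illustrated with a short numerical sketch: in a crossbar, each crosspoint stores a conductance G[i][j], and applying row voltages V[i] produces column currents I[j] = Σᵢ V[i]·G[i][j] by Ohm's law and Kirchhoff's current law, so the array computes a vector-matrix product in one physical step. The code below is a hypothetical model (not from the article); the lognormal noise term is an assumed, illustrative stand-in for the programming inaccuracies the abstract mentions.

```python
import numpy as np

def crossbar_vmm(voltages, conductances):
    """Ideal crossbar: column currents equal the product V @ G
    (Ohm's law per crosspoint, Kirchhoff's current law per column)."""
    return voltages @ conductances

def noisy_crossbar_vmm(voltages, conductances, sigma=0.05, rng=None):
    """Non-ideal crossbar: each programmed conductance is perturbed by
    multiplicative lognormal noise (assumed noise model for illustration),
    so the computed currents deviate from the ideal result."""
    rng = rng if rng is not None else np.random.default_rng(0)
    g_actual = conductances * rng.lognormal(
        mean=0.0, sigma=sigma, size=conductances.shape)
    return voltages @ g_actual

# 3 input rows, 2 output columns; conductances in micro-siemens.
V = np.array([0.2, 0.5, 0.1])           # row input voltages (V)
G = np.array([[1.0, 0.5],
              [0.3, 0.8],
              [0.6, 0.2]]) * 1e-6       # crosspoint conductances (S)

print(crossbar_vmm(V, G))               # ideal column currents (A)
print(noisy_crossbar_vmm(V, G))         # currents with programming error
```

Comparing the two outputs shows the error the survey's circuit-, algorithm-, and system-level reliability techniques aim to tolerate.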

Cited Authors

  • Yang, X; Taylor, B; Wu, A; Chen, Y; Chua, LO

Published Date

  • May 1, 2022

Published In

Volume / Issue

  • 69 / 5

Start / End Page

  • 1845 - 1857

Electronic International Standard Serial Number (EISSN)

  • 1558-0806

International Standard Serial Number (ISSN)

  • 1549-8328

Digital Object Identifier (DOI)

  • 10.1109/TCSI.2022.3159153

Citation Source

  • Scopus