Processing-in-Memory Designs Based on Emerging Technology for Efficient Machine Learning Acceleration

Publication, Conference
Kim, B; Li, H; Chen, Y
Published in: Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI
June 12, 2024

The unprecedented success of artificial intelligence (AI) has enriched machine learning (ML)-based applications. The availability of big data and compute-intensive algorithms enables versatile, highly accurate ML approaches. However, the resulting data processing and innumerable computations burden conventional hardware systems with high power consumption and low performance. Breaking away from traditional hardware design, non-conventional accelerators that exploit emerging technology have gained significant attention, since emerging devices enable processing-in-memory (PIM) designs with dramatic improvements in efficiency. This paper summarizes state-of-the-art PIM accelerators over the past decade. These PIM accelerators have been implemented for diverse models and advanced algorithmic techniques across neural networks for language processing and image recognition, expediting both inference and training. We present the implemented designs, methodologies, and results, following their development over the past years. A promising direction for PIM accelerators, vertical stacking for More than Moore, is also discussed.


Published In

Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI

DOI

10.1145/3649476.3660367

Publication Date

June 12, 2024

Start / End Page

614 / 619

Citation

APA: Kim, B., Li, H., & Chen, Y. (2024). Processing-in-Memory Designs Based on Emerging Technology for Efficient Machine Learning Acceleration. In Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI (pp. 614–619). https://doi.org/10.1145/3649476.3660367

Chicago: Kim, B., H. Li, and Y. Chen. “Processing-in-Memory Designs Based on Emerging Technology for Efficient Machine Learning Acceleration.” In Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI, 614–19, 2024. https://doi.org/10.1145/3649476.3660367.

ICMJE: Kim B, Li H, Chen Y. Processing-in-Memory Designs Based on Emerging Technology for Efficient Machine Learning Acceleration. In: Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI. 2024. p. 614–9.

MLA: Kim, B., et al. “Processing-in-Memory Designs Based on Emerging Technology for Efficient Machine Learning Acceleration.” Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI, 2024, pp. 614–19. Scopus, doi:10.1145/3649476.3660367.

NLM: Kim B, Li H, Chen Y. Processing-in-Memory Designs Based on Emerging Technology for Efficient Machine Learning Acceleration. Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI. 2024. p. 614–619.
