
End-to-End Transformer Acceleration Through Processing-in-Memory Architectures

Publication, Conference
Yang, X; Chen, P; Molom-Ochir, T; Chen, Y
Published in: Proceedings of the International Conference on Microelectronics (ICM)
January 1, 2025

Transformers have become central to natural language processing and large language models, but their deployment at scale faces three major challenges. First, the attention mechanism requires massive matrix multiplications and frequent movement of intermediate results between memory and compute units, leading to high latency and energy costs. Second, in long-context inference, the key-value cache (KV cache) can grow unpredictably and even surpass the model's weight size, creating severe memory and bandwidth bottlenecks. Third, the quadratic complexity of attention with respect to sequence length amplifies both data movement and compute overhead, making large-scale inference inefficient. To address these issues, this work introduces processing-in-memory solutions that restructure attention and feed-forward computation to minimize off-chip data transfers, dynamically compress and prune the KV cache to manage memory growth, and reinterpret attention as an associative memory operation to reduce complexity and hardware footprint. Moreover, we evaluate our processing-in-memory design against state-of-the-art accelerators and general-purpose GPUs, demonstrating significant improvements in energy efficiency and latency. Together, these approaches address computation overhead, memory scalability, and attention complexity, further enabling efficient, end-to-end acceleration of Transformer models.
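The memory and compute bottlenecks named in the abstract can be made concrete with a back-of-envelope sketch. The model dimensions below (32 layers, 32 heads, head dimension 128, fp16 values, roughly a 7B-parameter decoder) are assumptions for illustration, not figures from the paper:

```python
# Illustrative estimate of the two bottlenecks the abstract describes:
# KV-cache growth with context length, and quadratic attention cost.
# All model dimensions here are assumed, not taken from the paper.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, dtype_bytes=2):
    """Size of the key-value cache for one sequence (keys + values)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * dtype_bytes

def attention_flops(seq_len, d_model):
    """Approximate FLOPs per layer for the two attention matmuls
    (Q @ K^T and attention-weights @ V), each ~2 * seq^2 * d_model."""
    return 2 * (2 * seq_len * seq_len * d_model)

if __name__ == "__main__":
    layers, heads, head_dim = 32, 32, 128   # assumed 7B-class config
    d_model = heads * head_dim              # 4096
    for seq in (4_096, 32_768, 131_072):
        cache_gib = kv_cache_bytes(layers, heads, head_dim, seq) / 2**30
        print(f"seq={seq:>7}: KV cache ~ {cache_gib:6.1f} GiB, "
              f"attention ~ {attention_flops(seq, d_model):.2e} FLOPs/layer")
```

Under these assumed dimensions, the fp16 KV cache reaches about 64 GiB at a 131k-token context, several times the ~13 GiB of fp16 weights for a 7B model, which is consistent with the abstract's observation that the cache can surpass the model's weight size; the FLOP count also grows fourfold each time the sequence length doubles.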


Published In

Proceedings of the International Conference on Microelectronics (ICM)

DOI

10.1109/ICM66518.2025.11322529

ISSN

2332-7014

Publication Date

January 1, 2025
 

Citation

APA: Yang, X., Chen, P., Molom-Ochir, T., & Chen, Y. (2025). End-to-End Transformer Acceleration Through Processing-in-Memory Architectures. In Proceedings of the International Conference on Microelectronics (ICM). https://doi.org/10.1109/ICM66518.2025.11322529
Chicago: Yang, X., P. Chen, T. Molom-Ochir, and Y. Chen. “End-to-End Transformer Acceleration Through Processing-in-Memory Architectures.” In Proceedings of the International Conference on Microelectronics (ICM), 2025. https://doi.org/10.1109/ICM66518.2025.11322529.
ICMJE: Yang X, Chen P, Molom-Ochir T, Chen Y. End-to-End Transformer Acceleration Through Processing-in-Memory Architectures. In: Proceedings of the International Conference on Microelectronics (ICM). 2025.
MLA: Yang, X., et al. “End-to-End Transformer Acceleration Through Processing-in-Memory Architectures.” Proceedings of the International Conference on Microelectronics (ICM), 2025. Scopus, doi:10.1109/ICM66518.2025.11322529.
NLM: Yang X, Chen P, Molom-Ochir T, Chen Y. End-to-End Transformer Acceleration Through Processing-in-Memory Architectures. Proceedings of the International Conference on Microelectronics (ICM). 2025.
