
A Survey on Small Language Models in the Era of Large Language Models: Architecture, Capabilities, and Trustworthiness

Publication, Conference
Wang, F; Lin, M; Ma, Y; Liu, H; He, Q; Tang, X; Tang, J; Pei, J; Wang, S
Published in: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
August 3, 2025

Large language models (LLMs) based on the Transformer architecture are powerful but face challenges in deployment, inference latency, and costly fine-tuning. These limitations highlight the emerging potential of small language models (SLMs), which can either replace LLMs through innovative architectures and technologies or assist them as efficient proxy or reward models. Emerging architectures such as Mamba and xLSTM address the quadratic scaling of Transformer inference with context length by enabling linear scaling. To maximize SLM performance, test-time compute scaling strategies narrow the performance gap with LLMs by allocating extra compute budget at test time. Beyond standalone usage, SLMs can also assist LLMs via weak-to-strong learning, proxy tuning, and guarding, fostering secure and efficient LLM deployment. Finally, the trustworthiness of SLMs remains a critical yet underexplored research area. However, there is a lack of tutorials on these cutting-edge SLM technologies, prompting us to conduct one.
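As an illustration of the proxy-tuning idea mentioned above, the following is a minimal sketch, not code from the paper: at each decoding step, the large base model's next-token logits are shifted by the difference between a small tuned model's and a small base model's logits, so the large model is steered without ever being fine-tuned. All function names, shapes, and the toy vocabulary below are hypothetical; real use would take logits from actual model forward passes.

# Minimal proxy-tuning sketch (illustrative only; assumes three models
# share a vocabulary and expose next-token logits as numpy arrays).
import numpy as np

def proxy_tuned_logits(large_base, small_tuned, small_base):
    # Shift the large base model's logits by the small tuned/base delta:
    # logits = large_base + (small_tuned - small_base)
    return large_base + (small_tuned - small_base)

def sample_next_token(logits, temperature=1.0, rng=None):
    # Softmax-sample a token id from the adjusted logits.
    rng = np.random.default_rng() if rng is None else rng
    z = logits / temperature
    z = z - z.max()              # subtract max for numerical stability
    p = np.exp(z)
    p = p / p.sum()
    return int(rng.choice(len(p), p=p))

# Toy vocabulary of size 5; random vectors stand in for real model logits.
rng = np.random.default_rng(0)
large_base  = rng.normal(size=5)
small_tuned = rng.normal(size=5)
small_base  = rng.normal(size=5)
token = sample_next_token(proxy_tuned_logits(large_base, small_tuned, small_base), rng=rng)

The small tuned/base pair thus serves as an inexpensive steering signal: only the large model's decoding distribution is adjusted, which is why the survey frames SLMs as efficient assistants to LLMs rather than replacements in this setting.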


Published In

Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

DOI

10.1145/3711896.3736563

ISSN

2154-817X

Publication Date

August 3, 2025

Volume

2

Start / End Page

6173 / 6183

Citation

APA:
Wang, F., Lin, M., Ma, Y., Liu, H., He, Q., Tang, X., … Wang, S. (2025). A Survey on Small Language Models in the Era of Large Language Models: Architecture, Capabilities, and Trustworthiness. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Vol. 2, pp. 6173–6183). https://doi.org/10.1145/3711896.3736563

Chicago:
Wang, F., M. Lin, Y. Ma, H. Liu, Q. He, X. Tang, J. Tang, J. Pei, and S. Wang. “A Survey on Small Language Models in the Era of Large Language Models: Architecture, Capabilities, and Trustworthiness.” In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2:6173–83, 2025. https://doi.org/10.1145/3711896.3736563.

ICMJE:
Wang F, Lin M, Ma Y, Liu H, He Q, Tang X, et al. A Survey on Small Language Models in the Era of Large Language Models: Architecture, Capabilities, and Trustworthiness. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2025. p. 6173–83.

MLA:
Wang, F., et al. “A Survey on Small Language Models in the Era of Large Language Models: Architecture, Capabilities, and Trustworthiness.” Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, vol. 2, 2025, pp. 6173–83. Scopus, doi:10.1145/3711896.3736563.

NLM:
Wang F, Lin M, Ma Y, Liu H, He Q, Tang X, Tang J, Pei J, Wang S. A Survey on Small Language Models in the Era of Large Language Models: Architecture, Capabilities, and Trustworthiness. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2025. p. 6173–6183.
