CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models

Publication, Conference
Wang, Q; Ye, H; Chung, MY; Liu, Y; Lin, Y; Kuo, M; Ma, M; Zhang, J; Chen, Y
Published in: Proceedings of Machine Learning Research
January 1, 2025

Vision-Language Models (VLMs) excel across diverse tasks but suffer from high inference costs in time and memory. Token sparsity mitigates inefficiencies in token usage, while neuron sparsity reduces high-dimensional computations, both offering promising solutions to enhance efficiency. Recently, these two sparsity paradigms have evolved largely in parallel, fostering the prevailing assumption that they function independently. However, a fundamental yet underexplored question remains: Do they truly operate in isolation, or is there a deeper underlying interplay that has yet to be uncovered? In this paper, we conduct the first comprehensive investigation into this question. By introducing and analyzing the matching mechanism between Core Neurons and Core Tokens, we find that the key neurons and tokens for inference mutually influence and reinforce each other. Building on this insight, we propose CoreMatching, a co-adaptive sparse inference framework that leverages the synergy between token and neuron sparsity to enhance inference efficiency. Through theoretical analysis and efficiency evaluations, we demonstrate that the proposed method surpasses state-of-the-art baselines on ten image understanding tasks across three hardware devices. Notably, on the NVIDIA Titan Xp, it achieves a 5× FLOPs reduction and a 10× overall speedup. Code is released at https://github.com/wangqinsi1/2025-ICML-CoreMatching/tree/main.
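
The token-neuron matching idea described in the abstract can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the function core_matching_sketch, the activation-magnitude scoring, and the keep ratios are hypothetical stand-ins, not the paper's actual Core Neuron and Core Token selection criteria (see the linked repository for the real implementation).

```python
import torch

def core_matching_sketch(hidden, w_up, neuron_keep=0.5, token_keep=0.5):
    """Toy co-adaptive selection of tokens and FFN neurons.

    hidden: (num_tokens, d_model) hidden states entering an FFN layer.
    w_up:   (d_ff, d_model) FFN up-projection weight.
    """
    # Activation of every FFN neuron for every token.
    acts = torch.relu(hidden @ w_up.T)               # (num_tokens, d_ff)

    # "Core neurons": highest aggregate activation across tokens
    # (a hypothetical criterion standing in for the paper's definition).
    neuron_scores = acts.sum(dim=0)                  # (d_ff,)
    k_n = max(1, int(neuron_keep * acts.shape[1]))
    core_neurons = neuron_scores.topk(k_n).indices

    # "Core tokens": highest aggregate activation on the core neurons only,
    # so the two selections reference and reinforce each other.
    token_scores = acts[:, core_neurons].sum(dim=1)  # (num_tokens,)
    k_t = max(1, int(token_keep * acts.shape[0]))
    core_tokens = token_scores.topk(k_t).indices

    return core_tokens, core_neurons

# Example: keep half the tokens and half the FFN neurons of a toy layer.
tokens, neurons = core_matching_sketch(torch.randn(16, 64), torch.randn(256, 64))
```

In this sketch the coupling is one-directional (neurons chosen first, then tokens scored against them); the mutual reinforcement reported in the paper suggests the actual mechanism ties the two selections together more tightly.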

Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2025

Volume

267

Start / End Page

65236 / 65252
 

Citation

APA: Wang, Q., Ye, H., Chung, M. Y., Liu, Y., Lin, Y., Kuo, M., … Chen, Y. (2025). CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models. In Proceedings of Machine Learning Research (Vol. 267, pp. 65236–65252).

Chicago: Wang, Q., H. Ye, M. Y. Chung, Y. Liu, Y. Lin, M. Kuo, M. Ma, J. Zhang, and Y. Chen. “CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models.” In Proceedings of Machine Learning Research, 267:65236–52, 2025.

ICMJE: Wang Q, Ye H, Chung MY, Liu Y, Lin Y, Kuo M, et al. CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models. In: Proceedings of Machine Learning Research. 2025. p. 65236–52.

MLA: Wang, Q., et al. “CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models.” Proceedings of Machine Learning Research, vol. 267, 2025, pp. 65236–52.

NLM: Wang Q, Ye H, Chung MY, Liu Y, Lin Y, Kuo M, Ma M, Zhang J, Chen Y. CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models. Proceedings of Machine Learning Research. 2025. p. 65236–65252.