Characterizing the Efficiency vs. Accuracy Trade-off for Long-Context NLP Models

Conference Publication
Ang, P; Dhingra, B; Wills, LW
Published in: NLP-Power 2022 - 1st Workshop on Efficient Benchmarking in NLP, Proceedings of the Workshop
January 1, 2022

With many real-world applications of Natural Language Processing (NLP) involving long texts, there has been a rise in NLP benchmarks that measure the accuracy of models that can handle longer input sequences. However, these benchmarks do not consider the trade-offs between accuracy, speed, and power consumption as input sizes or model sizes are varied. In this work, we perform a systematic study of this accuracy vs. efficiency trade-off on two widely used long-sequence models, Longformer-Encoder-Decoder (LED) and Big Bird, during fine-tuning and inference on four datasets from the SCROLLS benchmark. To study how this trade-off differs across hyperparameter settings, we compare the models across four sequence lengths (1024, 2048, 3072, 4096) and two model sizes (base and large) under a fixed resource budget. We find that LED consistently achieves better accuracy at lower energy costs than Big Bird. For summarization, we find that increasing model size is more energy efficient than increasing sequence length for higher accuracy; however, this comes at the cost of a large drop in inference speed. For question answering, we find that smaller models are both more efficient and more accurate due to the larger training batch sizes possible under a fixed resource budget.
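
The experimental recipe the abstract describes, fixing a resource budget and sweeping sequence length and model size while logging accuracy, latency, and energy, can be pictured with a short sketch. The snippet below is a minimal, hypothetical illustration only: it assumes a HuggingFace transformers LED checkpoint (allenai/led-base-16384) and the codecarbon package for energy estimation, none of which are specified in the abstract, so the model name, generation settings, and measurement tool are all placeholders rather than the paper's actual harness.

```python
# Hypothetical harness for one point on the accuracy/efficiency grid.
# Assumed stack: HuggingFace `transformers` + `codecarbon`; the paper's
# actual measurement setup is not described in this abstract.
import time

import torch
from codecarbon import EmissionsTracker
from transformers import AutoTokenizer, LEDForConditionalGeneration

MODEL_NAME = "allenai/led-base-16384"  # base-size LED; a Big Bird checkpoint would slot in here
SEQ_LEN = 4096                         # one of the studied lengths: 1024, 2048, 3072, 4096

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = LEDForConditionalGeneration.from_pretrained(MODEL_NAME).eval()

document = "..."  # a long input, e.g. one SCROLLS summarization example
inputs = tokenizer(document, truncation=True, max_length=SEQ_LEN, return_tensors="pt")

tracker = EmissionsTracker(log_level="error")  # samples CPU/GPU power to estimate energy
tracker.start()
t0 = time.perf_counter()
with torch.no_grad():
    summary_ids = model.generate(**inputs, max_new_tokens=256)
latency = time.perf_counter() - t0
emissions_kg = tracker.stop()  # estimated kg CO2-eq; energy in kWh goes to codecarbon's CSV log

print(f"seq_len={SEQ_LEN}  latency={latency:.1f}s  emissions={emissions_kg:.6f} kg CO2-eq")
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Sweeping this loop over the four sequence lengths and both model sizes, and pairing each run's task metric (e.g., ROUGE for summarization) with its latency and energy, yields the kind of trade-off curves the paper reports.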


Published In

NLP-Power 2022 - 1st Workshop on Efficient Benchmarking in NLP, Proceedings of the Workshop

DOI

10.18653/v1/2022.nlppower-1.12

Publication Date

January 1, 2022

Start / End Page

113 / 121

Citation

Ang, P., Dhingra, B., & Wills, L. W. (2022). Characterizing the Efficiency vs. Accuracy Trade-off for Long-Context NLP Models. In NLP-Power 2022 - 1st Workshop on Efficient Benchmarking in NLP, Proceedings of the Workshop (pp. 113–121). https://doi.org/10.18653/v1/2022.nlppower-1.12
