
Integrating task specific information into pretrained language models for low resource fine tuning

Publication, Conference
Wang, R; Si, S; Wang, G; Zhang, L; Carin, L; Henao, R
Published in: Findings of the Association for Computational Linguistics: EMNLP 2020
January 1, 2020

Pretrained Language Models (PLMs) have improved the performance of natural language understanding in recent years. Such models are pretrained on large corpora, which encode general prior knowledge of natural language but are agnostic to information characteristic of downstream tasks. This often results in overfitting when fine-tuning on low-resource datasets, where task-specific information is limited. In this paper, we integrate label information as a task-specific prior into the self-attention component of pretrained BERT models. Experiments on several benchmarks and real-world datasets suggest that the proposed approach can substantially improve the performance of pretrained models when fine-tuning with small datasets. The code is released at https://github.com/RayWangWR/BERT_label_embedding.
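The abstract only names the mechanism. As a rough illustration of one way label information can enter self-attention, the sketch below appends label embeddings to the keys and values so every token can attend to the task labels. This is a minimal, hypothetical numpy sketch, not the paper's exact formulation: the single-head setup, the absence of learned query/key/value projections, and the function names are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_augmented_attention(hidden, label_emb):
    """Single-head attention where label embeddings are appended to the
    keys/values, letting each token attend to task-label vectors.

    hidden:    (seq_len, d)   token representations
    label_emb: (num_labels, d) one embedding per class label
    returns:   (seq_len, d)   label-aware token representations
    """
    d = hidden.shape[-1]
    # Keys/values cover both the tokens and the label embeddings.
    kv = np.concatenate([hidden, label_emb], axis=0)   # (seq+labels, d)
    scores = hidden @ kv.T / np.sqrt(d)                # (seq, seq+labels)
    weights = softmax(scores, axis=-1)                 # rows sum to 1
    return weights @ kv                                # (seq, d)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))   # 5 tokens, hidden size 8
labels = rng.normal(size=(2, 8))   # binary task: 2 label embeddings
out = label_augmented_attention(tokens, labels)
print(out.shape)  # (5, 8)
```

In the actual BERT setting the projections and multi-head structure of the pretrained model would apply, and the label embeddings would act as a task-specific prior learned or initialized from the label set; the point here is only the attention-over-labels pattern.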


Published In

Findings of the Association for Computational Linguistics: EMNLP 2020

Publication Date

January 1, 2020

Start / End Page

3181 / 3186

Citation

APA: Wang, R., Si, S., Wang, G., Zhang, L., Carin, L., & Henao, R. (2020). Integrating task specific information into pretrained language models for low resource fine tuning. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 3181–3186).

Chicago: Wang, R., S. Si, G. Wang, L. Zhang, L. Carin, and R. Henao. "Integrating task specific information into pretrained language models for low resource fine tuning." In Findings of the Association for Computational Linguistics: EMNLP 2020, 3181–86, 2020.

ICMJE: Wang R, Si S, Wang G, Zhang L, Carin L, Henao R. Integrating task specific information into pretrained language models for low resource fine tuning. In: Findings of the Association for Computational Linguistics: EMNLP 2020. 2020. p. 3181–6.

MLA: Wang, R., et al. "Integrating task specific information into pretrained language models for low resource fine tuning." Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 3181–86.

NLM: Wang R, Si S, Wang G, Zhang L, Carin L, Henao R. Integrating task specific information into pretrained language models for low resource fine tuning. Findings of the Association for Computational Linguistics: EMNLP 2020. 2020. p. 3181–3186.
