Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms

Published

Conference Paper

Abstract

© 2018 Association for Computational Linguistics. Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations. However, there has not been a rigorous evaluation of the added value of such sophisticated compositional functions. In this paper, we conduct a point-by-point comparative study of Simple Word-Embedding-based Models (SWEMs), which consist of parameter-free pooling operations, relative to word-embedding-based RNN/CNN models. Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. Based upon this understanding, we propose two additional pooling strategies over learned word embeddings: (i) a max-pooling operation for improved interpretability; and (ii) a hierarchical pooling operation, which preserves spatial (n-gram) information within text sequences. We present experiments on 17 datasets encompassing three tasks: (i) (long) document classification; (ii) text sequence matching; and (iii) short text tasks, including classification and tagging.
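The pooling operations named in the abstract are simple enough to state concretely. Below is a minimal NumPy sketch of the three SWEM variants discussed (average, max, and hierarchical pooling over a sentence's word embeddings); the function names, the 300-dimensional embeddings, and the window size of 5 are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def swem_aver(emb):
    """Average-pooling over word embeddings; emb has shape (seq_len, dim)."""
    return emb.mean(axis=0)

def swem_max(emb):
    """Max-pooling: element-wise max across word positions."""
    return emb.max(axis=0)

def swem_hier(emb, window=5):
    """Hierarchical pooling: average-pool each local window of consecutive
    words, then max-pool over the resulting window vectors, preserving
    local (n-gram) word-order information."""
    seq_len, _ = emb.shape
    if seq_len <= window:
        return emb.mean(axis=0)
    # Local average over each sliding window of length `window`.
    windows = np.stack([emb[i:i + window].mean(axis=0)
                        for i in range(seq_len - window + 1)])
    # Global max over the window-level averages.
    return windows.max(axis=0)

# Usage: a toy "sentence" of 7 words with 300-dimensional embeddings.
rng = np.random.default_rng(0)
sentence = rng.normal(size=(7, 300))
features = np.concatenate([swem_aver(sentence), swem_max(sentence)])
print(features.shape)             # (600,) -- concatenated average + max pooling
print(swem_hier(sentence).shape)  # (300,)
```

Because hierarchical pooling averages each local window before taking the element-wise max over window vectors, it retains local word-order (n-gram) information that plain average or max pooling over the whole sequence discards, which is the motivation given in the paper for the second proposed strategy.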

Cited Authors

  • Shen, D; Wang, G; Wang, W; Min, MR; Su, Q; Zhang, Y; Li, C; Henao, R; Carin, L

Published Date

  • January 1, 2018

Published In

  • ACL 2018, 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)

Volume / Issue

  • 1 /

Start / End Page

  • 440 - 450

Citation Source

  • Scopus