Improving sequence-to-sequence learning via optimal transport

Published

Conference Paper

Sequence-to-sequence models are commonly trained via maximum likelihood estimation (MLE). However, standard MLE training considers a word-level objective, predicting the next word given the previous ground-truth partial sentence. This procedure focuses on modeling local syntactic patterns, and may fail to capture long-range semantic structure. We present a novel solution to alleviate these issues. Our approach imposes global sequence-level guidance via new supervision based on optimal transport, enabling the overall characterization and preservation of semantic features. We further show that this method can be understood as a Wasserstein gradient flow trying to match our model to the ground truth sequence distribution. Extensive experiments are conducted to validate the utility of the proposed approach, showing consistent improvements over a wide variety of NLP tasks, including machine translation, abstractive text summarization, and image captioning.
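To make the sequence-level supervision concrete, the sketch below computes an entropic-regularized optimal-transport (Sinkhorn) distance between generated and reference word embeddings and uses it as an auxiliary loss alongside MLE. This is a minimal illustration, assuming PyTorch; the tensor names (gen_emb, ref_emb), the uniform token weights, and the hyperparameters are hypothetical, and the generic Sinkhorn iteration stands in for whatever OT solver the paper actually uses.

```python
# Hypothetical sketch: Sinkhorn-style optimal-transport loss between the
# embeddings of a generated sequence and a reference sequence. Names and
# hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F


def sinkhorn_ot_loss(gen_emb, ref_emb, eps=0.1, n_iters=50):
    """gen_emb: (m, d) embeddings of generated tokens,
       ref_emb: (n, d) embeddings of reference tokens."""
    # Cosine-distance cost matrix between the two token sequences.
    gen = F.normalize(gen_emb, dim=-1)
    ref = F.normalize(ref_emb, dim=-1)
    cost = 1.0 - gen @ ref.t()                         # (m, n)

    m, n = cost.shape
    # Uniform mass on generated and reference tokens.
    mu = torch.full((m,), 1.0 / m, dtype=cost.dtype, device=cost.device)
    nu = torch.full((n,), 1.0 / n, dtype=cost.dtype, device=cost.device)

    K = torch.exp(-cost / eps)                          # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):                            # Sinkhorn fixed-point updates
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    transport = torch.diag(u) @ K @ torch.diag(v)       # approximate transport plan
    return (transport * cost).sum()                     # OT distance as extra loss


# Usage (illustrative): combine with the usual word-level MLE objective.
# total_loss = mle_loss + ot_weight * sinkhorn_ot_loss(gen_emb, ref_emb)
```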

Cited Authors

  • Chen, L; Zhang, Y; Zhang, R; Tao, C; Gan, Z; Zhang, H; Li, B; Shen, D; Chen, C; Carin, L

Published Date

  • January 1, 2019

Published In

  • 7th International Conference on Learning Representations, ICLR 2019

Citation Source

  • Scopus