Improving disentangled text representation learning with information-theoretic guidance

Publication · Conference
Cheng, P; Min, MR; Shen, D; Malon, C; Zhang, Y; Li, Y; Carin, L
Published in: Proceedings of the Annual Meeting of the Association for Computational Linguistics
January 1, 2020

Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, and personalized dialogue systems. Similar problems have been studied extensively for other forms of data, such as images and videos. However, the discrete nature of natural language makes disentangling textual representations more challenging (e.g., manipulations in the data space cannot be easily achieved). Inspired by information theory, we propose a novel method that effectively learns disentangled representations of text without any supervision on semantics. A new mutual information upper bound is derived and leveraged to measure the dependence between style and content. By minimizing this upper bound, the proposed method induces style and content embeddings into two independent low-dimensional spaces. Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representations in terms of content and style preservation.
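The abstract's core idea — minimizing a sample-based mutual information upper bound between style and content embeddings — can be sketched as follows. This is an illustrative estimator built on a Gaussian variational approximation q(content | style); all names (`club_upper_bound`, `weight`, `log_var`, etc.) are hypothetical and not taken from the paper, and a real training loop would fit the variational parameters jointly with the encoder.

```python
import numpy as np

def club_upper_bound(style, content, weight, bias, log_var):
    """Sample-based upper bound on I(style; content).

    Illustrative sketch only: assumes a Gaussian variational approximation
    q(c | s) = N(s @ weight + bias, diag(exp(log_var))). The bound is the
    gap between log-likelihoods of paired (positive) and unpaired
    (negative) style/content samples.
    """
    mu = style @ weight + bias  # predicted content mean, shape (n, d)
    inv_var = np.exp(-log_var)  # diagonal precision, shape (d,)
    # Positive term: log q(c_i | s_i) for matched pairs (constants cancel below).
    positive = -0.5 * (((content - mu) ** 2) * inv_var).sum(axis=1)
    # Negative term: log q(c_j | s_i) averaged over all j (shuffled pairs).
    diff = content[None, :, :] - mu[:, None, :]  # shape (n, n, d)
    negative = (-0.5 * ((diff ** 2) * inv_var).sum(axis=2)).mean(axis=1)
    # Their mean gap estimates an upper bound on the mutual information.
    return (positive - negative).mean()
```

Minimizing this quantity during training pushes the style and content embeddings toward independence, which is the mechanism the abstract describes; when content is unpredictable from style (e.g., a zero predictor on independent samples), the estimate collapses to zero.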

Published In

Proceedings of the Annual Meeting of the Association for Computational Linguistics

ISSN

0736-587X

Publication Date

January 1, 2020

Start / End Page

7530 / 7541

Citation

Cheng, P., Min, M. R., Shen, D., Malon, C., Zhang, Y., Li, Y., & Carin, L. (2020). Improving disentangled text representation learning with information-theoretic guidance. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 7530–7541).
