
Distilled Wasserstein learning for word embedding and topic modeling

Publication · Conference
Xu, H; Wang, W; Liu, W; Carin, L
Published in: Advances in Neural Information Processing Systems
January 1, 2018

We propose a novel Wasserstein method with a distillation mechanism, yielding joint learning of word embeddings and topics. The proposed method is based on the fact that the Euclidean distance between word embeddings may be employed as the underlying distance in the Wasserstein topic model. The word distributions of topics, their optimal transports to the word distributions of documents, and the embeddings of words are learned in a unified framework. When learning the topic model, we leverage a distilled underlying distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports. Such a strategy provides robust guidance for updating the word embeddings, improving algorithmic convergence. As an application, we focus on patient admission records, in which the proposed method embeds the codes of diseases and procedures and learns the topics of admissions, obtaining superior performance on clinically meaningful disease network construction, mortality prediction as a function of admission codes, and procedure recommendation.
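The core computation the abstract describes — using Euclidean distances between word embeddings as the ground cost of an optimal transport between a topic's word distribution and a document's word distribution — can be sketched as follows. This is a minimal illustration using standard entropic-regularized Sinkhorn iterations, not the authors' distilled algorithm; the toy embeddings, distributions, and the `sinkhorn` helper are all hypothetical.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=500):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    a, b : source/target probability vectors (topic and document word
           distributions); C : ground-cost matrix (embedding distances).
    Returns the transport plan P and the transport cost <P, C>.
    """
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # scale columns to match b
        u = a / (K @ v)           # scale rows to match a
    P = u[:, None] * K * v[None, :]
    return P, float(np.sum(P * C))

# Toy example: 3 "words" with 2-d embeddings; the ground cost is the
# pairwise Euclidean distance between embeddings, as in the abstract.
emb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
C = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)

topic = np.array([0.7, 0.2, 0.1])   # word distribution of a topic
doc   = np.array([0.1, 0.3, 0.6])   # word distribution of a document

P, cost = sinkhorn(topic, doc, C)
```

In the paper's setting, this Wasserstein distance (and its transport plan) ties the topic model to the embedding space: updating the embeddings changes `C`, which changes the transports — hence the need for a distilled (smoothed) distance matrix to stabilize the joint updates.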


Published In

Advances in Neural Information Processing Systems

ISSN

1049-5258

Publication Date

January 1, 2018

Volume

2018-December

Start / End Page

1716 / 1725

Related Subject Headings

  • 4611 Machine learning
  • 1702 Cognitive Sciences
  • 1701 Psychology
 

Citation

APA: Xu, H., Wang, W., Liu, W., & Carin, L. (2018). Distilled Wasserstein learning for word embedding and topic modeling. In Advances in Neural Information Processing Systems (Vol. 2018-December, pp. 1716–1725).
Chicago: Xu, H., W. Wang, W. Liu, and L. Carin. “Distilled Wasserstein learning for word embedding and topic modeling.” In Advances in Neural Information Processing Systems, 2018-December:1716–25, 2018.
ICMJE: Xu H, Wang W, Liu W, Carin L. Distilled Wasserstein learning for word embedding and topic modeling. In: Advances in Neural Information Processing Systems. 2018. p. 1716–25.
MLA: Xu, H., et al. “Distilled Wasserstein learning for word embedding and topic modeling.” Advances in Neural Information Processing Systems, vol. 2018-December, 2018, pp. 1716–25.
NLM: Xu H, Wang W, Liu W, Carin L. Distilled Wasserstein learning for word embedding and topic modeling. Advances in Neural Information Processing Systems. 2018. p. 1716–1725.
