Adversarial Symmetric Variational Autoencoder

Published

Journal Article

Abstract

A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihood fits to the observed data and to the latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence between the joint density functions from (i) and (ii), while simultaneously maximizing the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments demonstrates state-of-the-art data reconstruction and generation on several image benchmark datasets.
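
For orientation, the objective the abstract describes can be written out as a symmetric-KL criterion over the two joint densities. The sketch below uses notation chosen here for illustration (q(x) for the empirical data distribution, p(z) for the prior, q_phi(z|x) for the encoder, p_theta(x|z) for the decoder) and is not quoted from the paper.

```latex
% Illustrative notation (ours, not the paper's):
%   q(x)       ... empirical data distribution
%   p(z)       ... prior over latent codes
%   q_phi(z|x) ... encoder;  p_theta(x|z) ... decoder
% Joint density (i), encoder path:   q_phi(x,z)   = q(x) q_phi(z|x)
% Joint density (ii), decoder path:  p_theta(x,z) = p(z) p_theta(x|z)
\[
  \min_{\theta,\phi}\;
    \mathrm{KL}\!\left(q_\phi(x,z)\,\middle\|\,p_\theta(x,z)\right)
    + \mathrm{KL}\!\left(p_\theta(x,z)\,\middle\|\,q_\phi(x,z)\right)
\]
% Up to additive constants (the entropies of q(x) and p(z)), minimizing the
% two directed KL terms is equivalent to maximizing two variational lower
% bounds on E_{q(x)} log p_theta(x) and E_{p(z)} log q_phi(z). The log-density
% ratios inside the KLs involve the unknown q(x), which is why the paper
% estimates them with an adversarially trained discriminator.
```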

Cited Authors

  • Pu, Y; Wang, W; Henao, R; Chen, L; Gan, Z; Li, C; Carin, L

Published Date

  • January 1, 2017

Published In

  • Advances in Neural Information Processing Systems

Volume / Issue

  • 2017-December

Start / End Page

  • 4331 - 4340

International Standard Serial Number (ISSN)

  • 1049-5258

Citation Source

  • Scopus