Symmetric variational autoencoder and connections to adversarial learning

Published

Conference Paper

Copyright 2018 by the author(s).

Abstract

A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAEs and adversarial learning, and provides insights that allow us to ameliorate shortcomings of some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validates the utility of the approach.
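The symmetric Kullback-Leibler divergence named in the abstract is, under its standard (symmetrized) definition, the sum of the two directed KL divergences; this is a sketch of the usual form, and the paper's exact formulation may differ:

```latex
% Symmetrized Kullback-Leibler divergence between distributions p and q:
% the sum of the two directed KL terms, which is nonnegative and symmetric.
\mathrm{KL}_{\mathrm{sym}}(p, q)
  = \mathrm{KL}(p \,\|\, q) + \mathrm{KL}(q \,\|\, p)
  = \int \bigl( p(x) - q(x) \bigr) \log \frac{p(x)}{q(x)} \, dx
```

Unlike the one-directional KL used in the standard VAE objective, this quantity treats p and q symmetrically, which is the property the sVAE construction builds on.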

Cited Authors

  • Chen, L; Dai, S; Pu, Y; Zhou, E; Li, C; Su, Q; Chen, C; Carin, L

Published Date

  • January 1, 2018

Published In

  • International Conference on Artificial Intelligence and Statistics, AISTATS 2018

Start / End Page

  • 661 - 669

Citation Source

  • Scopus