χ² generative adversarial network

Published

Conference Paper

© 2018 by the Authors. All rights reserved. To assess the difference between real and synthetic data, Generative Adversarial Networks (GANs) are trained using a distribution discrepancy measure. Three widely employed measures are information-theoretic divergences, integral probability metrics, and Hilbert space discrepancy metrics. We elucidate the theoretical connections between these three popular GAN training criteria and propose a novel procedure, called χ²-GAN, that is conceptually simple, stable during training, and resistant to mode collapse. Our procedure naturally generalizes to address the problem of simultaneously matching multiple distributions. Further, we propose a resampling strategy that significantly improves sample quality by repurposing the trained critic function via an importance weighting mechanism. Experiments show that the proposed procedure improves stability and convergence, and yields state-of-the-art results on a wide range of generative modeling tasks.
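The critic-based resampling mentioned in the abstract can be sketched as importance-weighted resampling: generated samples are redrawn with probability proportional to a weight derived from the trained critic's score. The sketch below is illustrative only; the function name, the assumption that critic scores approximate a log density ratio, and the exponential weighting are not taken from the paper.

```python
import numpy as np

def importance_resample(samples, critic_scores, n_keep, seed=None):
    """Resample generated points, favoring those the critic deems more 'real'.

    Assumes (illustratively) that critic_scores estimate log p_real / p_gen,
    so exp(score) serves as an importance weight; the paper's exact weighting
    scheme may differ.
    """
    rng = np.random.default_rng(seed)
    # Subtract the max before exponentiating for numerical stability;
    # the constant cancels when weights are normalized.
    weights = np.exp(critic_scores - critic_scores.max())
    probs = weights / weights.sum()
    idx = rng.choice(len(samples), size=n_keep, replace=True, p=probs)
    return samples[idx]

# Toy usage: samples with higher critic scores are drawn more often.
samples = np.arange(5)
scores = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
kept = importance_resample(samples, scores, n_keep=1000, seed=0)
```

In practice the weighted redraw is applied to a pool of generator outputs after training, so the critic is reused at no extra training cost.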

Cited Authors

  • Tao, C; Chen, L; Henao, R; Feng, J; Carin, L

Published Date

  • January 1, 2018

Published In

  • 35th International Conference on Machine Learning, ICML 2018

Volume / Issue

  • 11 /

Start / End Page

  • 7787 - 7796

International Standard Book Number 13 (ISBN-13)

  • 9781510867963

Citation Source

  • Scopus