Generative adversarial networks with mixture of t-distributions noise for diverse image generation.
Image generation is a long-standing problem in machine learning and computer vision. To generate images with high diversity, we propose a novel model called generative adversarial networks with mixture of t-distributions noise (tGANs). In tGANs, the latent generative space is formulated as a mixture of t-distributions, and the parameters of the mixture components are learned jointly with the rest of the model. To improve the diversity of the generated images within each class, each noise vector is concatenated with a class codeword to form the input of the generator. In addition, a classification loss is added to both the generator and discriminator losses to strengthen their performance. We have conducted extensive experiments comparing tGANs with a state-of-the-art pixel-by-pixel image generation approach, pixelCNN, and with related GAN-based models. The experimental results and statistical comparisons demonstrate that tGANs perform significantly better than pixelCNN and the related GAN-based models for diverse image generation.
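The following is a minimal sketch, not the authors' implementation, of the latent input described in the abstract: noise drawn from a mixture of t-distributions and concatenated with a one-hot class codeword before being fed to the generator. The component locations, scales, degrees of freedom, mixture weights, and all dimensions are illustrative placeholders; in tGANs these mixture parameters are learned along with the rest of the model.

```python
# Sketch of the tGANs latent sampling step (assumed parameterization,
# not the paper's code): sample from a mixture of Student's t components,
# then append a one-hot class codeword to each noise vector.
import numpy as np

def sample_t_mixture_latent(batch_size, latent_dim, num_classes,
                            locs, scales, dfs, weights, rng=None):
    """Draw latent vectors from a mixture of t-distributions and
    concatenate a one-hot class codeword to each vector."""
    rng = rng or np.random.default_rng()
    num_components = len(weights)

    # Pick a mixture component for every sample in the batch.
    comp = rng.choice(num_components, size=batch_size, p=weights)

    # Standard t noise, shifted and scaled by the chosen component's
    # location and scale (placeholder values; learned in tGANs).
    z = rng.standard_t(df=dfs[comp, None], size=(batch_size, latent_dim))
    z = locs[comp] + scales[comp] * z

    # Random class labels encoded as one-hot codewords.
    labels = rng.integers(num_classes, size=batch_size)
    codewords = np.eye(num_classes)[labels]

    # Generator input: [noise | class codeword].
    return np.concatenate([z, codewords], axis=1), labels

# Example with 3 components, 64-dimensional noise, and 10 classes.
locs = np.zeros((3, 64))
scales = np.ones((3, 64))
dfs = np.array([3.0, 5.0, 10.0])
weights = np.array([0.5, 0.3, 0.2])
g_input, labels = sample_t_mixture_latent(8, 64, 10, locs, scales, dfs, weights)
print(g_input.shape)  # (8, 74): 64 noise dims + 10 class-codeword dims
```

The class codeword also pairs naturally with the classification loss mentioned in the abstract, since both the generator and the discriminator are conditioned on, or asked to predict, the class of each sample.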
Related Subject Headings
- Neural Networks, Computer
- Machine Learning
- Image Processing, Computer-Assisted
- Artificial Intelligence & Image Processing
- 4905 Statistics
- 4611 Machine learning
- 4602 Artificial intelligence