Variational autoencoder for deep learning of images, labels and captions

Status

  • Published

Type

  • Conference Paper

Abstract

A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN approximates a distribution over the latent DGDN features/code. The latent code is also linked to generative models for labels (a Bayesian support vector machine) or captions (a recurrent neural network). When predicting a label or caption for a new image at test time, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework can model an image in the presence or absence of associated labels/captions, it yields a new semi-supervised setting for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.
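As a rough illustration of the pipeline the abstract describes, the sketch below is a minimal PyTorch rendering under stated assumptions, not the authors' implementation: ConvEncoder, DeconvDecoder, predict_label, all layer sizes, latent_dim=64, and the 32x32 input resolution are hypothetical; a plain transposed-convolution decoder stands in for the DGDN, and an ordinary linear softmax classifier stands in for the Bayesian SVM. Only the overall structure (CNN recognition model, deconvolutional decoder, test-time averaging over sampled latent codes) follows the abstract.

import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """CNN recognition model: maps an image to the mean and log-variance
    of a Gaussian approximation to the posterior over the latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.logvar = nn.Linear(64 * 8 * 8, latent_dim)

    def forward(self, x):
        h = self.features(x)
        return self.mu(h), self.logvar(h)

class DeconvDecoder(nn.Module):
    """Deconvolutional decoder mapping a latent code back to image space;
    a simple stand-in for the paper's DGDN."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.deconv(h)

def predict_label(encoder, classifier, x, num_samples=10):
    """Test-time prediction: average classifier outputs over samples drawn
    from the encoder's approximate posterior q(z | x), so a single encoder
    pass supports the averaging described in the abstract."""
    mu, logvar = encoder(x)
    std = (0.5 * logvar).exp()
    probs = 0.0
    for _ in range(num_samples):
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        probs = probs + classifier(z).softmax(dim=-1)
    return probs / num_samples

if __name__ == "__main__":
    encoder, decoder = ConvEncoder(), DeconvDecoder()
    classifier = nn.Linear(64, 10)  # hypothetical label model; the paper uses a Bayesian SVM
    x = torch.randn(4, 3, 32, 32)   # batch of four 32x32 RGB images
    mu, logvar = encoder(x)
    reconstruction = decoder(mu)    # decode at the posterior mean
    print(predict_label(encoder, classifier, x).shape)  # torch.Size([4, 10])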

Cited Authors

  • Pu, Y; Gan, Z; Henao, R; Yuan, X; Li, C; Stevens, A; Carin, L

Published Date

  • January 1, 2016

Published In

  • Advances in Neural Information Processing Systems

Start / End Page

  • 2360 - 2368

International Standard Serial Number (ISSN)

  • 1049-5258

Citation Source

  • Scopus