Learning structural weight uncertainty for sequential decision-making

Status

  • Published

Publication Type

  • Conference Paper

Abstract

Learning probability distributions on the weights of neural networks (NNs) has recently proven beneficial in many applications. Bayesian methods, such as Stein variational gradient descent (SVGD), offer an elegant framework to reason about NN model uncertainty. However, by assuming independent Gaussian priors for the individual NN weights (as is often done), SVGD does not impose the prior knowledge that there is often structural information (dependence) among weights. We propose efficient posterior learning of structural weight uncertainty, within an SVGD framework, by employing matrix variate Gaussian priors on NN parameters. We further investigate the learned structural uncertainty in sequential decision-making problems, including contextual bandits and reinforcement learning. Experiments on several synthetic and real datasets indicate the superiority of our model compared with state-of-the-art methods.
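For readers unfamiliar with the two ingredients named in the abstract, the sketch below gives the standard matrix variate Gaussian density and the generic SVGD particle update. These are the textbook forms (Liu & Wang's SVGD update and the usual matrix normal density), not expressions taken from the paper, whose exact prior parameterization and update rule may differ.

```latex
% Standard matrix variate Gaussian density for a weight matrix W in R^{n x p},
% with mean M, row covariance U (n x n), and column covariance V (p x p);
% equivalent to vec(W) ~ N(vec(M), V \otimes U).
p(W) = \mathcal{MN}(W; M, U, V)
     = \frac{\exp\!\left(-\tfrac{1}{2}\,\mathrm{tr}\!\left[V^{-1}(W-M)^{\top} U^{-1}(W-M)\right]\right)}
            {(2\pi)^{np/2}\,|V|^{n/2}\,|U|^{p/2}}

% Generic SVGD update for particles \{\theta_j\}_{j=1}^{n} approximating the
% posterior p(\theta \mid \mathcal{D}), with kernel k(\cdot,\cdot) and step size \epsilon:
\theta_i \leftarrow \theta_i + \epsilon\,\hat{\phi}(\theta_i), \qquad
\hat{\phi}(\theta) = \frac{1}{n}\sum_{j=1}^{n}
  \Big[ k(\theta_j, \theta)\,\nabla_{\theta_j}\log p(\theta_j \mid \mathcal{D})
      + \nabla_{\theta_j} k(\theta_j, \theta) \Big]
```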

Cited Authors

  • Zhang, R; Li, C; Chen, C; Carin, L

Published Date

  • January 1, 2018

Published In

  • International Conference on Artificial Intelligence and Statistics (AISTATS) 2018

Start / End Page

  • 1137 - 1146

Citation Source

  • Scopus