On Misspecified Parameter Bounds with Application to Sparse Bayesian Learning
The sparse vector recovery problem can lead to a combinatorial search with prohibitive computational cost. Hence, reformulations amenable to convex optimization have been considered. Alternatively, Bayesian inference approaches, such as variational Bayesian methods (VBM), can curtail the computation. VBM, however, intentionally introduces a misspecified model to reduce computational requirements. This talk will review the theory of misspecified parameter bounds and its extensions to the Bayesian framework. It will also be shown that misspecified bounds provide tight predictions of the performance of sparse Bayesian learning approaches, and thus can be used to tune the hyperparameters of VBM for improved performance. The computational efficiency gained by VBM, however, comes at the cost of increased mean squared error (MSE) relative to the perfectly specified model. Examples will be shown that quantify this MSE increase and illustrate the resulting tradespace.
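To make the tradespace concrete, the following minimal sketch (illustrative only; the problem sizes, noise variance, and iteration count are assumptions, not values from the talk) runs EM-based sparse Bayesian learning hyperparameter updates on a linear Gaussian measurement model and compares the empirical MSE of its posterior mean against an oracle least-squares fit on the true support, which stands in for the perfectly specified model case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Measurement model y = A x + n, with x k-sparse (assumed setup).
n, m, k = 50, 100, 5            # measurements, unknowns, sparsity
sigma2 = 1e-2                   # noise variance, assumed known here
A = rng.standard_normal((n, m)) / np.sqrt(n)
support = rng.choice(m, size=k, replace=False)
x_true = np.zeros(m)
x_true[support] = rng.standard_normal(k)
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(n)

# Sparse Bayesian learning via EM updates of the per-coefficient
# prior variances gamma (classic Tipping-style relevance updates).
gamma = np.ones(m)
for _ in range(200):
    # E-step: Gaussian posterior over x given current hyperparameters.
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    # M-step: gamma_i <- E[x_i^2] = mu_i^2 + Sigma_ii (floored for stability).
    gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-10)

# "Perfectly specified" baseline: least squares on the true support.
x_oracle = np.zeros(m)
x_oracle[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]

print("SBL    MSE:", np.mean((mu - x_true) ** 2))
print("Oracle MSE:", np.mean((x_oracle - x_true) ** 2))
```

Comparing the two printed MSE values in runs of this kind illustrates the efficiency-versus-accuracy tradespace the talk quantifies with misspecified bounds.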