Single-Letter Formulas for Quantized Compressed Sensing with Gaussian Codebooks
Theoretical and experimental results have shown that compressed sensing with quantization can perform well when the signal is very sparse, the noise is low, and the bitrate is sufficiently large. However, a precise characterization of the fundamental tradeoffs between these quantities has remained elusive. In our previous work, we considered a quantization scheme that first computes the conditional expectation of the signal. In this paper, we focus on a different approach in which the measurements are encoded directly using Gaussian codebooks. We show that the mean-squared error (MSE) distortion of this approach can be analyzed by studying a degraded measurement model without any bitrate constraints. Building upon ideas from statistical physics and random matrix theory, we then provide single-letter formulas for the reconstruction error under optimal decoding. These formulas explicitly characterize the MSE as a function of: (1) the average quantization bitrate, (2) the prior distribution of the signal, and (3) the spectral distribution of the sensing matrix, and they yield upper bounds on the fundamental limits of compressed sensing with quantization. Interestingly, in some problem regimes this method achieves the best known performance, even though the encoding stage uses no information about the signal distribution beyond its mean and variance.
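As a rough illustration of the pipeline described above, the sketch below takes noisy Gaussian measurements of a sparse signal, encodes them with a random Gaussian codebook whose scale depends only on the second-order statistics of the measurements, and reconstructs with a generic ridge-regularized decoder. The dimensions, rate, noise level, and the ridge decoder are illustrative assumptions and do not correspond to the MSE-optimal decoder analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 16, 8   # signal dimension, number of measurements (toy sizes)
k = 2          # sparsity level
R = 2.0        # quantization rate in bits per measurement

# Sparse signal and Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + 0.01 * rng.standard_normal(m)

# Gaussian codebook with 2^(mR) codewords, scaled to the empirical
# measurement power -- the encoder uses only mean/variance information
num_codewords = int(2 ** (m * R))
codebook = np.sqrt(np.mean(y**2)) * rng.standard_normal((num_codewords, m))

# Encoder: transmit the index of the nearest codeword
idx = np.argmin(np.sum((codebook - y) ** 2, axis=1))
y_hat = codebook[idx]

# Decoder: ridge-regularized least squares from the quantized
# measurements (a generic stand-in for the optimal decoder)
x_hat = np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ y_hat)

mse = np.mean((x_hat - x) ** 2)
```

At this toy blocklength the random codebook is far from the asymptotic rate-distortion tradeoff, but the structure mirrors the abstract: the encoding step is oblivious to the signal prior, and all prior information enters only at the decoder.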