
The sensitivity of compressed sensing performance to relaxation of sparsity

Publication: Journal Article
Donoho, D; Reeves, G
Published in: IEEE International Symposium on Information Theory - Proceedings
October 22, 2012

Many papers studying compressed sensing consider the noisy underdetermined system of linear equations y = Ax₀ + z, with n × N measurement matrix A, n < N, and Gaussian white noise z ∼ N(0, σ²I). Both y and A are known, both x₀ and z are unknown, and we seek an approximation to x₀; we let δ = n/N ∈ (0,1) denote the undersampling fraction. In the popular strict sparsity model of compressed sensing, such papers further assume that x₀ has at most a specified fraction ε of nonzeros. In this paper, we relax the assumption of strict sparsity by assuming that the vector x₀ is close in mean p-th power to a sparse signal. We study how this relaxation affects the performance of ℓ₁-penalized ℓ₂ minimization, in which the reconstruction x̂₁,λ solves min over x of ‖y − Ax‖₂²/2 + λ‖x‖₁. We study the asymptotic mean-squared error (AMSE), the large-system limit of the MSE of x̂₁,λ. Using recently developed tools based on Approximate Message Passing (AMP), we develop expressions for the minimax AMSE M*ε,p(δ, ξ, σ): the maximum over all approximately sparse signals and the minimum over penalizations λ, where ξ measures the deviation from strict sparsity. There is of course a phase transition curve δ* = δ*(ε); only above this curve, δ > δ*(ε), can we have exact recovery even in the noiseless-data strict-sparsity setting. It turns out that the minimax AMSE can be characterized succinctly by a coefficient sens*p(ε, δ), which we refer to as the sparsity-relaxation sensitivity. We give explicit expressions for sens*p(ε, δ), compute them, and interpret them. Our approach yields precise formulas in place of loose order bounds based on restricted isometry property and instance-optimality results. Our formulas reveal that the sensitivity is finite everywhere exact recovery is possible under strict sparsity, and that the sensitivity to added random noise in the measurements y is smaller than the sensitivity to adding a comparable amount of noise to the estimand x₀. Our methods can also treat the mean q-th power loss.
The methods themselves are based on minimax decision theory and seem of independent interest. © 2012 IEEE.
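The ℓ₁-penalized ℓ₂ minimization problem in the abstract can be illustrated numerically. The sketch below uses plain iterative soft-thresholding (ISTA) rather than the AMP-based machinery the paper actually analyzes, on a small synthetic instance with assumed dimensions; x₀ is strictly sparse plus a small perturbation, modeling the relaxed-sparsity setting.

```python
import numpy as np

def soft_threshold(v, t):
    # Componentwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, y, lam, n_iter=500):
    # ISTA for  min_x  ||y - A x||_2^2 / 2 + lam * ||x||_1.
    # Step size 1/L, with L the Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Noisy underdetermined system y = A x0 + z with n < N (illustrative sizes).
rng = np.random.default_rng(0)
n, N = 50, 100
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[:5] = rng.standard_normal(5)        # strictly sparse component
x0 += 0.01 * rng.standard_normal(N)    # small deviation from strict sparsity
y = A @ x0 + 0.01 * rng.standard_normal(n)

x_hat = lasso_ista(A, y, lam=0.05)
mse = np.mean((x_hat - x0) ** 2)
```

The penalty level `lam` here is an arbitrary illustrative choice; the paper's minimax analysis concerns the best such λ against the worst approximately sparse signal.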


Published In

IEEE International Symposium on Information Theory - Proceedings

DOI

10.1109/ISIT.2012.6283846

Publication Date

October 22, 2012

Start / End Page

2211 / 2215
 

Citation

Donoho, D., & Reeves, G. (2012). The sensitivity of compressed sensing performance to relaxation of sparsity. IEEE International Symposium on Information Theory - Proceedings, 2211–2215. https://doi.org/10.1109/ISIT.2012.6283846
