Abstract
We consider the problem of recovering a high-dimensional vector μ observed in white noise, where the unknown vector μ is assumed to be sparse. The objective of the paper is to develop a Bayesian formalism which gives rise to a family of ℓ0-type penalties. The penalties are associated with various choices of the prior distributions π_n(·) on the number of nonzero entries of μ and, hence, are easy to interpret. The resulting Bayesian estimators lead to a general thresholding rule which accommodates many of the known thresholding and model selection procedures as particular cases corresponding to specific choices of π_n(·). Furthermore, they achieve optimality in a rather general setting under very mild conditions on the prior. We also specify the class of priors π_n(·) for which the resulting estimator is adaptively optimal (in the minimax sense) for a wide range of sparse sequences, and we consider several examples of such priors.
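As a minimal illustration of how an ℓ0-type penalty induces a thresholding rule (this is a generic hard-thresholding sketch, not the paper's specific Bayesian MAP procedure; the penalty parameter `lam` is an assumption standing in for the prior-dependent penalty):

```python
import numpy as np

def hard_threshold(y, lam):
    """Coordinatewise l0-penalized least squares:
       argmin_mu ||y - mu||^2 + lam^2 * ||mu||_0.
    For each coordinate, setting mu_i = 0 costs y_i^2 while keeping
    mu_i = y_i costs lam^2, so the minimizer is the hard-thresholding
    rule mu_i = y_i * 1{|y_i| > lam}."""
    y = np.asarray(y, dtype=float)
    return np.where(np.abs(y) > lam, y, 0.0)

# Only entries whose magnitude exceeds the threshold survive.
print(hard_threshold([0.3, -2.5, 1.1, 0.05], lam=1.0))
```

Priors π_n(·) that place more mass on small numbers of nonzero entries correspond, in the paper's formalism, to heavier penalties and hence larger effective thresholds.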
| Original language | English |
| --- | --- |
| Pages (from-to) | 2261-2286 |
| Number of pages | 26 |
| Journal | Annals of Statistics |
| Volume | 35 |
| Issue number | 5 |
| DOIs | |
| State | Published - Oct 2007 |
Keywords
- Adaptivity
- Complexity penalty
- Maximum a posteriori rule
- Minimax estimation
- Sequence estimation
- Sparsity
- Thresholding