On optimality of Bayesian testimation in the normal means problem

Felix Abramovich*, Vadim Grinshtein, Marianna Pensky

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


We consider a problem of recovering a high-dimensional vector μ observed in white noise, where the unknown vector μ is assumed to be sparse. The objective of the paper is to develop a Bayesian formalism which gives rise to a family of l0-type penalties. The penalties are associated with various choices of the prior distributions πn(·) on the number of nonzero entries of μ and, hence, are easy to interpret. The resulting Bayesian estimators lead to a general thresholding rule which accommodates many of the known thresholding and model selection procedures as particular cases corresponding to specific choices of πn(·). Furthermore, they achieve optimality in a rather general setting under very mild conditions on the prior. We also specify the class of priors πn(·) for which the resulting estimator is adaptively optimal (in the minimax sense) for a wide range of sparse sequences and consider several examples of such priors.
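To make the connection between l0-type penalties and thresholding concrete, the following is a minimal numerical sketch of the normal means model. It uses the classical universal threshold sqrt(2 log n) as an illustrative fixed penalty; the paper's estimators instead derive the penalty from a prior πn(·) on the number of nonzero entries, which this sketch does not implement. All variable names (`mu`, `y`, `lam`, `mu_hat`) and the chosen sparsity level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000  # dimension of the mean vector
k = 20    # number of nonzero entries (sparsity level, assumed)

# Sparse mean vector: k large entries, the rest exactly zero.
mu = np.zeros(n)
mu[:k] = 5.0

# Normal means model: y_i = mu_i + z_i with z_i ~ N(0, 1).
y = mu + rng.standard_normal(n)

# l0-penalized least squares with a fixed per-coordinate penalty lam**2:
#     min_m  sum_i (y_i - m_i)**2 + lam**2 * ||m||_0
# decouples across coordinates and is solved exactly by hard
# thresholding: keep y_i iff |y_i| > lam, otherwise set it to zero.
lam = np.sqrt(2 * np.log(n))  # universal threshold (illustrative choice)
mu_hat = np.where(np.abs(y) > lam, y, 0.0)

print("nonzero estimates kept:", np.count_nonzero(mu_hat))
print("mean squared error, thresholded:", np.mean((mu_hat - mu) ** 2))
print("mean squared error, raw y:      ", np.mean((y - mu) ** 2))
```

Because the penalty decouples coordinate-wise, the exact minimizer is a one-line thresholding rule; prior-dependent penalties of the kind studied in the paper replace the fixed lam**2 with a complexity penalty that varies with the number of selected coefficients.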

Original language: English
Pages (from-to): 2261-2286
Number of pages: 26
Journal: Annals of Statistics
Issue number: 5
State: Published - Oct 2007


  • Adaptivity
  • Complexity penalty
  • Maximum a posteriori rule
  • Minimax estimation
  • Sequence estimation
  • Sparsity
  • Thresholding


