TY - CHAP
T1 - Gradient-based algorithms with applications to signal-recovery problems
AU - Beck, Amir
AU - Teboulle, Marc
N1 - Publisher Copyright:
© Cambridge University Press 2010.
PY - 2009/1/1
Y1 - 2009/1/1
N2 - This chapter presents, in a self-contained manner, recent advances in the design and analysis of gradient-based schemes for specially structured smooth and nonsmooth minimization problems. We focus on the mathematical elements and ideas for building fast gradient-based methods and derive their complexity bounds. Throughout the chapter, the resulting schemes and results are illustrated and applied to a variety of problems arising in several key applications, such as sparse approximation of signals, total variation-based image-processing problems, and sensor-location problems. The gradient method is probably one of the oldest optimization algorithms, going back as early as 1847 with the initial work of Cauchy. Nowadays, gradient-based methods have attracted a revived and intensive interest among researchers, both in theoretical optimization and in scientific applications. Indeed, the very large-scale nature of problems arising in many scientific applications, combined with an increase in the power of computer technology, has motivated a “return” to the “old and simple” methods that can overcome the curse of dimensionality, a task which is usually out of reach for the current, more sophisticated algorithms. One of the main drawbacks of gradient-based methods is their speed of convergence, which is known to be slow. However, with proper modeling of the problem at hand, combined with some key ideas, it turns out that it is possible to build fast gradient schemes for various classes of problems arising in applications and, in particular, signal-recovery problems.
UR - http://www.scopus.com/inward/record.url?scp=84926076587&partnerID=8YFLogxK
U2 - 10.1017/CBO9780511804458.003
DO - 10.1017/CBO9780511804458.003
M3 - Chapter
AN - SCOPUS:84926076587
SN - 9780521762229
SP - 42
EP - 88
BT - Convex Optimization in Signal Processing and Communications
PB - Cambridge University Press
ER -