Convergence rate analysis of nonquadratic proximal methods for convex and linear programming

Alfredo N Iusem, Marc Teboulle

Research output: Contribution to journal › Article › peer-review

Abstract

The φ-divergence proximal method is an extension of the proximal minimization algorithm in which the usual quadratic proximal term is replaced by a class of convex statistical distances called φ-divergences. In this paper, we study the convergence rate of this nonquadratic proximal method for convex and, in particular, linear programming. We identify a class of φ-divergences for which superlinear convergence is attained, both for optimization problems whose objectives are strongly convex at the optimum and for linear programming problems, when the regularization parameters tend to zero. We also prove that, with regularization parameters bounded away from zero, convergence is at least linear for a wider class of φ-divergences when the method is applied to the same kinds of problems. We further analyze the associated class of augmented Lagrangian methods for convex programming with nonquadratic penalty terms, and prove convergence of the dual sequences generated by these methods for linear programming problems under a weak nondegeneracy assumption.
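For context, a minimal sketch of the iteration the abstract refers to, written in standard notation for φ-divergence proximal methods rather than quoted from the paper. Given a kernel $\varphi\colon (0,\infty)\to\mathbb{R}$, convex with $\varphi(1)=0$, the $\varphi$-divergence between points $x, y \in \mathbb{R}^n_{++}$ is

$$d_\varphi(x, y) = \sum_{i=1}^{n} y_i\, \varphi\!\left(\frac{x_i}{y_i}\right),$$

and, for regularization parameters $\lambda_k > 0$, the method replaces the quadratic proximal step with

$$x^{k+1} \in \operatorname*{argmin}_{x \ge 0} \left\{ f(x) + \lambda_k\, d_\varphi(x, x^k) \right\}.$$

For example, the kernel $\varphi(t) = t\log t - t + 1$ yields the Kullback-Leibler divergence and recovers the entropic proximal method, a standard member of this class.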
Original language: Undefined/Unknown
Pages (from-to): 657-677
Number of pages: 21
Journal: Mathematics of Operations Research
Volume: 20
Issue number: 3
State: Published - 1995
