Entropy-Like Proximal Methods in Convex Programming

Alfredo N. Iusem, B. F. Svaiter, Marc Teboulle

Research output: Contribution to journal › Article › peer-review


We study an extension of the proximal method for convex programming, where the quadratic regularization kernel is substituted by a class of convex statistical distances, called φ-divergences, which are typically entropy-like in form. After establishing several basic properties of these quasi-distances, we present a convergence analysis of the resulting entropy-like proximal algorithm. Applying this algorithm to the dual of a convex program, we recover a wide class of nonquadratic multiplier methods and prove their convergence.
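As a minimal sketch (not taken from the paper), the entropy-like proximal idea can be illustrated with the Kullback-Leibler divergence, a standard member of the φ-divergence class. For a linear objective f(x) = ⟨c, x⟩ restricted to the probability simplex, the proximal subproblem x^{k+1} = argmin_x f(x) + (1/λ) d(x, x^k) with d(x, y) = Σ x_i log(x_i / y_i) admits a closed-form multiplicative update; the cost vector, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

def kl_prox_step(x, c, lam):
    """One entropic proximal step for f(x) = c @ x on the simplex.

    Solves argmin_z  c @ z + (1/lam) * KL(z, x)  over the simplex,
    which reduces to the multiplicative update z_i ∝ x_i * exp(-lam * c_i).
    """
    z = x * np.exp(-lam * c)
    return z / z.sum()

c = np.array([0.3, 0.1, 0.7])      # illustrative linear costs
x = np.full(3, 1.0 / 3.0)          # start at the simplex barycenter
for _ in range(200):
    x = kl_prox_step(x, c, lam=1.0)

# The iterates concentrate on the coordinate with the smallest cost.
```

Replacing the quadratic kernel by an entropy keeps every iterate strictly positive, so the positivity constraint is enforced automatically rather than by projection; this is one of the features the paper exploits when recovering nonquadratic multiplier methods on the dual.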
Pages (from-to): 790-814
Number of pages: 25
Journal: Mathematics of Operations Research
Issue number: 4
State: Published - 1994
