Convergence of proximal-like algorithms

Marc Teboulle*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


We analyze proximal methods based on entropy-like distances for the minimization of convex functions subject to nonnegativity constraints. We prove global convergence results for the methods with approximate minimization steps and an ergodic convergence result for the case of finding a zero of a maximal monotone operator. We also consider linearly constrained convex problems and establish a quadratic convergence rate result for linear programs. Our analysis allows us to simplify and extend the available convergence results for these methods.
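The abstract concerns proximal methods in which the usual quadratic term is replaced by an entropy-like (e.g. Kullback-Leibler) distance, so that iterates remain strictly positive and nonnegativity constraints are handled implicitly. A minimal sketch of this idea, using the linearized entropic proximal step (an exponentiated-gradient-style update, chosen here for its closed form; the paper itself analyzes exact and approximate proximal minimization steps, not this specific variant):

```python
import numpy as np

def entropic_prox_step(x, grad, step):
    # Linearized entropy-like proximal step:
    #   argmin_{y > 0}  <grad, y> + (1/step) * KL(y, x)
    # has the closed form y_i = x_i * exp(-step * grad_i).
    # The update is multiplicative, so iterates stay strictly positive:
    # the nonnegativity constraint is enforced implicitly by the distance.
    return x * np.exp(-step * grad)

def minimize_nonneg(A, b, x0, step=0.1, iters=500):
    # Illustrative solver for  min 0.5 * ||A x - b||^2  s.t.  x >= 0,
    # iterating the linearized entropic proximal step above.
    # (A hypothetical example problem, not taken from the paper.)
    x = x0.astype(float)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)      # gradient of the least-squares objective
        x = entropic_prox_step(x, grad, step)
    return x
```

For a well-conditioned problem with a strictly positive solution, the iterates converge to it while remaining in the positive orthant at every step, which is the practical appeal of entropy-like distances over projection-based methods.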

Original language: English
Pages (from-to): 1069-1083
Number of pages: 15
Journal: SIAM Journal on Optimization
Issue number: 4
State: Published - Nov 1997


  • Convex optimization
  • Maximal monotone operator
  • Proximal methods


