On Linear Convergence of Non-Euclidean Gradient Methods without Strong Convexity and Lipschitz Gradient Continuity

Heinz H. Bauschke, Jérôme Bolte*, Jiawei Chen, Marc Teboulle, Xianfu Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

The gradient method is well known to converge globally and linearly when the objective function is strongly convex and admits a Lipschitz continuous gradient. In many applications, both assumptions are too stringent, precluding the use of gradient methods. In the early 1960s, after the breakthrough of Łojasiewicz on gradient inequalities, it was observed that uniform convexity assumptions could be relaxed and replaced by these inequalities. On the other hand, it has very recently been shown that Lipschitz gradient continuity can be lifted and replaced by a class of functions satisfying a non-Euclidean descent property expressed in terms of a Bregman distance. In this note, we combine these two ideas to introduce a class of non-Euclidean gradient-like inequalities, which allow us to prove linear convergence of a Bregman gradient method for nonconvex minimization, even when neither strong convexity nor Lipschitz gradient continuity holds.
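The paper's exact scheme is not reproduced on this page; as an illustrative sketch only, the Python snippet below shows the kind of Bregman (non-Euclidean) gradient step the abstract refers to, under the assumption that the kernel is the entropy h(x) = Σ x_i log x_i on the positive orthant, for which the step has a closed multiplicative form. The objective, step size, and helper names are chosen for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' exact method): a Bregman gradient step
# solves  x+ = argmin_x <grad f(xk), x> + (1/step) * D_h(x, xk),
# where D_h is the Bregman distance of a kernel h. With the assumed entropy
# kernel h(x) = sum_i x_i * log(x_i) on the positive orthant, the optimality
# condition grad h(x+) = grad h(xk) - step * grad f(xk) gives the closed form
#   x+ = xk * exp(-step * grad f(xk)).

def bregman_gradient_method(grad_f, x0, step=0.1, iters=100):
    """Run the entropy-kernel Bregman gradient iteration from a positive x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Mirror step in closed form; no Lipschitz-gradient assumption is used,
        # only a descent property of f relative to the kernel h.
        x = x * np.exp(-step * grad_f(x))
    return x

# Toy usage: f(x) = sum(x) - sum(log x), minimized at x = 1 componentwise.
if __name__ == "__main__":
    grad_f = lambda x: 1.0 - 1.0 / x
    print(bregman_gradient_method(grad_f, x0=np.array([2.0, 0.5, 3.0])))
```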

Original language: English
Pages (from-to): 1068-1087
Number of pages: 20
Journal: Journal of Optimization Theory and Applications
Volume: 182
Issue number: 3
DOIs
State: Published - 15 Sep 2019

Keywords

  • Bregman distance
  • Descent lemma without Lipschitz gradient
  • Gradient dominated inequality
  • Linear rate of convergence
  • Lipschitz-like convexity condition
  • Non-Euclidean gradient methods
  • Nonconvex minimization
  • Łojasiewicz gradient inequality
