Primal and dual predicted decrease approximation methods

Amir Beck*, Edouard Pauwels, Shoham Sabach

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We introduce the notion of predicted decrease approximation (PDA) for constrained convex optimization, a flexible framework that includes as special cases such known algorithms as the generalized conditional gradient, the proximal gradient, greedy coordinate descent for separable constraints, and working set methods for linear equality constraints with bounds. The new scheme admits a unified convergence analysis for all of these methods. We further consider a partially strongly convex nonsmooth model and show that dual application of PDA-based methods yields new sublinear convergence rate estimates in terms of both the primal and the dual objectives. As an application, we provide an explicit working set selection rule for SMO-type methods for training the support vector machine, together with an improved primal convergence analysis.
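
To make the abstract's first special case concrete, here is a minimal, hedged sketch (not the authors' pseudocode) of the classical conditional gradient method over the unit simplex. The inner product ⟨∇f(x), x − s⟩ produced by the linear-minimization oracle is the exact predicted decrease that PDA-type schemes, per the abstract and the "approximate linear oracles" keyword, are allowed to approximate. The objective, function names, step-size rule, and stopping tolerance below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def conditional_gradient_simplex(grad, x0, n_iters=200, tol=1e-10):
    """Classical conditional gradient (Frank-Wolfe) over the unit simplex.

    The linear-minimization oracle over the simplex returns a vertex e_i;
    the inner product <grad(x), x - s> is the exact predicted decrease
    that PDA-type schemes relax to approximate oracles.
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        i = int(np.argmin(g))              # LMO: vertex minimizing <g, s>
        s = np.zeros_like(x)
        s[i] = 1.0
        predicted_decrease = g @ (x - s)   # duality-gap-style quantity
        if predicted_decrease <= tol:      # small gap => near-optimal
            break
        gamma = 2.0 / (k + 2.0)            # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Illustrative least-squares objective restricted to the simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
grad = lambda x: A.T @ (A @ x - b)         # gradient of 0.5*||Ax - b||^2
x0 = np.full(10, 0.1)                      # feasible: entries sum to 1
x = conditional_gradient_simplex(grad, x0)
print(f"final objective: {0.5 * np.sum((A @ x - b) ** 2):.4f}")
```

In the abstract's terms, the PDA framework replaces this exact oracle with ones whose predicted decrease is only a sufficiently good approximation, which is what lets a single convergence analysis cover conditional gradient, proximal gradient, coordinate descent, and working set variants.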

Original language: English
Pages (from-to): 37-73
Number of pages: 37
Journal: Mathematical Programming
Volume: 167
Issue number: 1
DOIs:
State: Published - 1 Jan 2018
Externally published: Yes

Funding

Funders and grant numbers:

• Air Force Office of Scientific Research
• Air Force Materiel Command: FA9550-15-1-0500
• Israel Science Foundation: 1821/16

Keywords

• Approximate linear oracles
• Conditional gradient algorithm
• Primal–dual methods
• Working set methods
