TY - JOUR

T1 - RECAPP: Crafting a More Efficient Catalyst for Convex Optimization

T2 - 39th International Conference on Machine Learning, ICML 2022

AU - Carmon, Yair

AU - Jambulapati, Arun

AU - Jin, Yujia

AU - Sidford, Aaron

N1 - Publisher Copyright:
Copyright © 2022 by the author(s)

PY - 2022

Y1 - 2022

N2 - The accelerated proximal point algorithm (APPA), also known as “Catalyst”, is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization). This reduction is conceptually elegant and yields strong convergence rate guarantees. However, these rates feature an extraneous logarithmic term arising from the need to compute each proximal point to high accuracy. In this work, we propose a novel Relaxed Error Criterion for Accelerated Proximal Point (RECAPP) that eliminates the need for high accuracy subproblem solutions. We apply RECAPP to two canonical problems: finite-sum and max-structured minimization. For finite-sum problems, we match the best known complexity, previously obtained by carefully-designed problem-specific algorithms. For minimizing max_y f(x, y) where f is convex in x and strongly-concave in y, we improve on the best known (Catalyst-based) bound by a logarithmic factor.

UR - http://www.scopus.com/inward/record.url?scp=85137222312&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85137222312

SN - 2640-3498

VL - 162

SP - 2658

EP - 2685

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

Y2 - 17 July 2022 through 23 July 2022

ER -