Abstract
The accelerated proximal point algorithm (APPA), also known as "Catalyst", is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization). This reduction is conceptually elegant and yields strong convergence rate guarantees. However, these rates feature an extraneous logarithmic term arising from the need to compute each proximal point to high accuracy. In this work, we propose a novel Relaxed Error Criterion for Accelerated Proximal Point (RECAPP) that eliminates the need for high accuracy subproblem solutions. We apply RECAPP to two canonical problems: finite-sum and max-structured minimization. For finite-sum problems, we match the best known complexity, previously obtained by carefully-designed problem-specific algorithms. For minimizing max_y f(x, y) where f is convex in x and strongly-concave in y, we improve on the best known (Catalyst-based) bound by a logarithmic factor.
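To make the reduction concrete, below is a minimal sketch of the classical inexact proximal point iteration that Catalyst/RECAPP-style methods build on. This is not the paper's RECAPP algorithm (it has no acceleration and no relaxed error criterion); it only illustrates the basic reduction being refined: repeatedly, approximately minimize the regularized subproblem x_{k+1} ≈ argmin_x f(x) + (λ/2)‖x − x_k‖². All names (`inexact_prox_point`, `A`, `b`) and the quadratic test problem are illustrative assumptions, not from the paper.

```python
import numpy as np

def inexact_prox_point(grad_f, L, x0, lam=1.0, outer_iters=100, inner_iters=50):
    """Inexact proximal point method: each regularized subproblem is solved
    approximately by a few gradient steps (the approximate proximal oracle).
    L is a Lipschitz constant of grad_f, used only to set a safe step size."""
    x = np.asarray(x0, dtype=float)
    step = 1.0 / (L + lam)  # subproblem gradient is (L + lam)-Lipschitz
    for _ in range(outer_iters):
        center = x.copy()
        z = x.copy()
        for _ in range(inner_iters):
            # Gradient of the lam-strongly-convex subproblem
            #   f(z) + (lam/2) * ||z - center||^2  at the point z.
            z -= step * (grad_f(z) + lam * (z - center))
        x = z
    return x

# Usage on a convex quadratic f(x) = ||A x - b||^2 / 2 (illustrative data).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2               # smoothness constant of f
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
x_hat = inexact_prox_point(grad, L, np.zeros(4))
```

The point of the sketch is the structure the paper targets: the inner loop is the expensive "high-accuracy" subproblem solve whose required precision RECAPP relaxes.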
| Original language | English |
|---|---|
| Pages (from-to) | 2658-2685 |
| Number of pages | 28 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 162 |
| State | Published - 2022 |
| Event | 39th International Conference on Machine Learning, ICML 2022 - Baltimore, United States, 17 Jul 2022 – 23 Jul 2022 |
Funding

| Funders | Funder number |
|---|---|
| Blavatnik Family Foundation | |
| Israel Science Foundation | 2486/21 |
| Microsoft Research | |
| National Science Foundation | CCF-1955039, CCF-1844855 |
| Stanford Graduate Fellowship | |
| Stanford University | |