TY - JOUR
T1 - Accelerated Parameter-Free Stochastic Optimization
AU - Kreisler, Itai
AU - Ivgi, Maor
AU - Hinder, Oliver
AU - Carmon, Yair
N1 - Publisher Copyright:
© 2024 I. Kreisler, M. Ivgi, O. Hinder & Y. Carmon.
PY - 2024
Y1 - 2024
N2 - We propose a method that achieves near-optimal rates for smooth stochastic convex optimization and requires essentially no prior knowledge of problem parameters. This improves on prior work which requires knowing at least the initial distance to optimality d0. Our method, U-DOG, combines UniXGrad (Kavis et al. [30]) and DoG (Ivgi et al. [27]) with novel iterate stabilization techniques. It requires only loose bounds on d0 and the noise magnitude, provides high probability guarantees under sub-Gaussian noise, and is also near-optimal in the non-smooth case. Our experiments show consistent, strong performance on convex problems and mixed results on neural network training.
KW - Adaptive
KW - Parameter-free
KW - Smooth optimization
KW - Stochastic convex optimization
UR - http://www.scopus.com/inward/record.url?scp=85203716272&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85203716272
SN - 2640-3498
VL - 247
SP - 3257
EP - 3324
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 37th Annual Conference on Learning Theory, COLT 2024
Y2 - 30 June 2024 through 3 July 2024
ER -