Accelerated Parameter-Free Stochastic Optimization

Itai Kreisler, Maor Ivgi, Oliver Hinder, Yair Carmon

Research output: Contribution to journal › Conference article › peer-review

Abstract

We propose a method that achieves near-optimal rates for smooth stochastic convex optimization and requires essentially no prior knowledge of problem parameters. This improves on prior work which requires knowing at least the initial distance to optimality d0. Our method, U-DOG, combines UniXGrad (Kavis et al. [30]) and DoG (Ivgi et al. [27]) with novel iterate stabilization techniques. It requires only loose bounds on d0 and the noise magnitude, provides high probability guarantees under sub-Gaussian noise, and is also near-optimal in the non-smooth case. Our experiments show consistent, strong performance on convex problems and mixed results on neural network training.
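For background, the sketch below illustrates the DoG-style parameter-free step size that U-DoG builds on, assuming the standard DoG rule from Ivgi et al.: the step size is the largest distance from the initial point seen so far divided by the root of the accumulated squared gradient norms. All names here are illustrative; this is not the accelerated U-DoG algorithm itself.

```python
import numpy as np

def dog_sgd(grad_fn, x0, num_steps, r_eps=1e-6):
    """Illustrative DoG-style parameter-free SGD (sketch, not U-DoG).

    Uses eta_t = rbar_t / sqrt(sum of squared gradient norms), where
    rbar_t is the largest distance from x0 observed so far. A tiny
    r_eps stands in for knowledge of the initial distance d0.
    """
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    rbar = r_eps          # loose initial estimate; no tuning of d0 required
    grad_sq_sum = 0.0
    for _ in range(num_steps):
        g = grad_fn(x)                        # stochastic gradient at the current iterate
        grad_sq_sum += float(np.dot(g, g))
        rbar = max(rbar, float(np.linalg.norm(x - x0)))
        eta = rbar / np.sqrt(grad_sq_sum)     # parameter-free step size
        x = x - eta * g
    return x
```

U-DoG augments this kind of distance-adaptive step size with UniXGrad-style acceleration and iterate stabilization; see the paper for the full method and guarantees.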

Original language: English
Pages (from-to): 3257-3324
Number of pages: 68
Journal: Proceedings of Machine Learning Research
Volume: 247
State: Published - 2024
Event: 37th Annual Conference on Learning Theory, COLT 2024 - Edmonton, Canada
Duration: 30 Jun 2024 - 3 Jul 2024

Keywords

  • Adaptive
  • Parameter-free
  • Smooth optimization
  • Stochastic convex optimization
