DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule

Maor Ivgi*, Oliver Hinder, Yair Carmon

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

7 Scopus citations

Abstract

We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no “learning rate” parameter. Theoretically, we show that, for stochastic convex optimization, a slight variation of the DoG formula enjoys strong, high-probability parameter-free convergence guarantees and iterate movement bounds. Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG's performance is close to that of SGD with tuned learning rate. We also propose a per-layer variant of DoG that generally outperforms tuned SGD, approaching the performance of tuned Adam. A PyTorch implementation of our algorithms is available at https://github.com/formll/dog.
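To make the step-size rule concrete, below is a minimal single-tensor sketch of the DoG update in plain PyTorch: the step size is the running maximum distance from the initial point divided by the square root of the running sum of squared gradient norms. The names `dog_sgd`, `loss_fn`, `data_iter`, and `eps` are illustrative, not from the paper; the official optimizer at https://github.com/formll/dog additionally handles parameter groups, iterate averaging, and the per-layer (LDoG) variant.

```python
import torch


def dog_sgd(x0, loss_fn, data_iter, steps, eps=1e-6):
    """Sketch of SGD with the DoG (Distance over Gradients) step-size rule.

    Single-tensor illustration only; `loss_fn(x, batch)` should return a
    scalar loss, and `data_iter` yields mini-batches.
    """
    x = x0.clone().requires_grad_(True)
    # Running maximum distance from the initial point, seeded with a small
    # "initial movement" so the very first step size is nonzero.
    r_bar = eps * (1.0 + x0.norm())
    grad_sq_sum = torch.zeros(())  # running sum of squared gradient norms

    for _ in range(steps):
        batch = next(data_iter)
        loss = loss_fn(x, batch)
        (grad,) = torch.autograd.grad(loss, x)

        grad_sq_sum = grad_sq_sum + grad.norm() ** 2
        # DoG step size: (max distance traveled so far) / sqrt(sum of squared grad norms).
        eta = r_bar / grad_sq_sum.sqrt()

        with torch.no_grad():
            x -= eta * grad
            r_bar = torch.maximum(r_bar, (x - x0).norm())

    return x.detach()
```

Because both the numerator and denominator are accumulated from observed quantities, no learning-rate parameter is tuned; only the small initial-movement constant remains, and the paper reports that results are insensitive to its value.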

Original language: English
Pages (from-to): 14465-14499
Number of pages: 35
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

Funding

Funders and funder numbers:
NSF-BSF
Pitt Momentum Funds
Yandex Initiative for Machine Learning
National Science Foundation: 2239527
Air Force Office of Scientific Research: 9550-23-1-0242
Blavatnik Family Foundation
United States-Israel Binational Science Foundation: 2022663
Israel Science Foundation: 2486/21
Council for Higher Education
