Abstract
We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no “learning rate” parameter. Theoretically, we show that, for stochastic convex optimization, a slight variation of the DoG formula enjoys strong, high-probability parameter-free convergence guarantees and iterate movement bounds. Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG's performance is close to that of SGD with tuned learning rate. We also propose a per-layer variant of DoG that generally outperforms tuned SGD, approaching the performance of tuned Adam. A PyTorch implementation of our algorithms is available at https://github.com/formll/dog.
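The abstract names the two empirical quantities behind the step size (distance from the initial point and gradient norms) but not the formula itself. As a rough, non-authoritative illustration, the sketch below applies one natural reading of "Distance over Gradients" in plain PyTorch: the step size at step t is the maximum observed distance from the initial point divided by the square root of the running sum of squared gradient norms. This is only a sketch under that assumption, not the authors' implementation (which is at https://github.com/formll/dog); the function name `dog_sgd`, the `r_eps` floor on the initial distance, and the small denominator constant are choices made here for illustration.

```python
import torch

def dog_sgd(params, closure, steps=100, r_eps=1e-6):
    """Plain SGD whose step size follows a DoG-style rule (sketch, not the official optimizer):
    eta_t = max_{i<=t} ||x_i - x_0|| / sqrt(sum_{i<=t} ||g_i||^2).
    `closure` must return the loss as a tensor with the autograd graph attached."""
    x0 = [p.detach().clone() for p in params]  # reference (initial) point
    max_dist = r_eps      # running max distance from x0, floored at r_eps (assumed here)
    grad_sq_sum = 0.0     # running sum of squared gradient norms
    for _ in range(steps):
        for p in params:
            p.grad = None                      # reset gradients
        loss = closure()
        loss.backward()
        with torch.no_grad():
            # Distance of the current iterate from the initial point.
            dist_sq = sum(((p - q) ** 2).sum() for p, q in zip(params, x0))
            # Squared norm of the current stochastic gradient.
            gnorm_sq = sum((p.grad ** 2).sum() for p in params)
            max_dist = max(max_dist, float(dist_sq) ** 0.5)
            grad_sq_sum += float(gnorm_sq)
            eta = max_dist / (grad_sq_sum ** 0.5 + 1e-12)  # no learning-rate knob
            for p in params:
                p -= eta * p.grad
    return params

# Usage sketch: least-squares regression with a single weight vector.
A, b = torch.randn(100, 10), torch.randn(100)
w = torch.zeros(10, requires_grad=True)
dog_sgd([w], lambda: ((A @ w - b) ** 2).mean(), steps=500)
```

A per-layer variant, as mentioned in the abstract, would presumably track these quantities per parameter group rather than globally; the repository above provides the authors' full optimizers.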
Original language | English |
---|---|
Pages (from-to) | 14465-14499 |
Number of pages | 35 |
Journal | Proceedings of Machine Learning Research |
Volume | 202 |
State | Published - 2023 |
Event | 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States; Duration: 23 Jul 2023 → 29 Jul 2023 |
Funding
Funders | Funder number |
---|---|
NSF-BSF | |
Pitt Momentum Funds | |
Yandex Initiative for Machine Learning | |
National Science Foundation | 2239527 |
Air Force Office of Scientific Research | 9550-23-1-0242 |
Blavatnik Family Foundation | |
United States-Israel Binational Science Foundation | 2022663 |
Israel Science Foundation | 2486/21 |
Council for Higher Education | |