How Free is Parameter-Free Stochastic Optimization?

Amit Attia*, Tomer Koren

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study the problem of parameter-free stochastic optimization, inquiring whether, and under what conditions, fully parameter-free methods exist: these are methods that achieve convergence rates competitive with optimally tuned methods, without requiring significant knowledge of the true problem parameters. Existing parameter-free methods can only be considered "partially" parameter-free, as they require some non-trivial knowledge of the true problem parameters, such as a bound on the stochastic gradient norms, a bound on the distance to a minimizer, etc. In the non-convex setting, we demonstrate that a simple hyperparameter search technique results in a fully parameter-free method that outperforms more sophisticated state-of-the-art algorithms. We also provide a similar result in the convex setting with access to noisy function values under mild noise assumptions. Finally, assuming only access to stochastic gradients, we establish a lower bound that renders fully parameter-free stochastic convex optimization infeasible, and provide a method which is (partially) parameter-free up to the limit indicated by our lower bound.
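To make the "simple hyperparameter search" idea mentioned in the abstract concrete, the sketch below shows a generic log-spaced grid search over SGD step sizes, selecting the candidate with the smallest (noisy) function value. This is an illustrative assumption, not the paper's actual algorithm or guarantees: the search range, selection rule, and toy objective are all placeholders.

```python
# Minimal illustrative sketch (assumed, not the paper's exact procedure):
# run plain SGD with several log-spaced step sizes and keep the candidate
# whose (noisy) function value is smallest.
import numpy as np

def sgd(step_size, x0, grad_oracle, n_steps=1000, seed=0):
    """Run fixed-step SGD and return the final iterate."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_oracle(x, rng)          # stochastic gradient at x
        x = x - step_size * g
    return x

def grid_search_sgd(x0, grad_oracle, value_oracle, budget_per_run=1000):
    """Try log-spaced step sizes (assumed range) and keep the best iterate
    according to a noisy/exact function-value oracle (assumed selection rule)."""
    candidates = np.logspace(-6, 1, num=8)
    best_x, best_val = None, np.inf
    for eta in candidates:
        x = sgd(eta, x0, grad_oracle, n_steps=budget_per_run)
        val = value_oracle(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    # Toy problem: f(x) = 0.5 * ||x||^2 with Gaussian gradient noise.
    grad_oracle = lambda x, rng: x + rng.normal(scale=0.1, size=x.shape)
    value_oracle = lambda x: 0.5 * float(x @ x)
    x_hat, f_hat = grid_search_sgd(np.ones(5), grad_oracle, value_oracle)
    print(f"selected iterate value: {f_hat:.4f}")
```

Note that selecting by function value requires some access to (noisy) function evaluations, which is exactly the kind of assumption the abstract distinguishes from the gradient-only setting, where the paper's lower bound rules out fully parameter-free methods.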

Original language: English
Pages (from-to): 2009-2034
Number of pages: 26
Journal: Proceedings of Machine Learning Research
Volume: 235
State: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024

Funding

Funder                         Funder number
Blavatnik Family Foundation
Aegis Foundation
European Research Council
Horizon 2020                   101078075
Israel Science Foundation      2549/19
