The Price of Adaptivity in Stochastic Convex Optimization

  • Yair Carmon
  • Oliver Hinder*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a “price of adaptivity” (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch. En route, we also establish tight upper and lower bounds for (known-parameter) high-probability stochastic convex optimization with heavy-tailed and bounded noise, respectively.
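The abstract describes the price of adaptivity only informally. A schematic way to write it down (this is a hedged sketch with assumed notation, not the paper's exact definition) is as a worst-case ratio between what an adaptive method can achieve over an uncertainty set of problems and the minimax rate achievable when the problem parameters are known:

```latex
% Schematic sketch of a "price of adaptivity" (notation assumed, not from the paper).
% Let \mathcal{U} be an uncertainty set of problem instances P (e.g., ranging over
% unknown initial distance to the optimum and/or gradient norm bound), let
% R(A; P) denote the (expected or median) suboptimality of algorithm A on P,
% and let R^\star(P) denote the minimax suboptimality when the parameters of P
% are known in advance. Then, roughly,
\[
  \mathrm{PoA}(\mathcal{U})
  \;\approx\;
  \inf_{A}\;\sup_{P \in \mathcal{U}}\;
  \frac{R(A;\, P)}{R^{\star}(P)} .
\]
```

Under this reading, the abstract's results say that PoA is at least logarithmic (in the level of uncertainty) when only the distance is unknown, double-logarithmic for median suboptimality, and polynomial when both distance and gradient norm are uncertain.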

Original language: English
Journal: Mathematical Programming
DOIs
State: Accepted/In press - 2025
Externally published: Yes

Funding

Funders and funder numbers:

  • United States-Israel Binational Science Foundation
  • Adelis Foundation
  • Pitt Momentum Funds
  • National Science Foundation: 2239527
  • United States - Israel Binational Science Foundation: 2022663
  • Air Force Office of Scientific Research: #FA9550-23-1-0242
  • Israel Science Foundation: 2486/21
