TY - JOUR
T1 - The Price of Adaptivity in Stochastic Convex Optimization
AU - Carmon, Yair
AU - Hinder, Oliver
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to the Mathematical Optimization Society 2025.
PY - 2025
Y1 - 2025
AB - We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a “price of adaptivity” (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch. En route, we also establish tight upper and lower bounds for (known-parameter) high-probability stochastic convex optimization with heavy-tailed and bounded noise, respectively.
UR - https://www.scopus.com/pages/publications/105015219132
DO - 10.1007/s10107-025-02268-3
M3 - Article
AN - SCOPUS:105015219132
SN - 0025-5610
JO - Mathematical Programming
JF - Mathematical Programming
ER -