TY - JOUR
T1 - Making SGD Parameter-Free
AU - Carmon, Yair
AU - Hinder, Oliver
N1 - Publisher Copyright:
© 2022 Y. Carmon & O. Hinder.
PY - 2022
Y1 - 2022
AB - We develop an algorithm for parameter-free stochastic convex optimization (SCO) whose rate of convergence is only a double-logarithmic factor larger than the optimal rate for the corresponding known-parameter setting. In contrast, the best previously known rates for parameter-free SCO are based on online parameter-free regret bounds, which contain unavoidable excess logarithmic terms compared to their known-parameter counterparts. Our algorithm is conceptually simple, has high-probability guarantees, and is also partially adaptive to unknown gradient norms, smoothness, and strong convexity. At the heart of our results is a novel parameter-free certificate for SGD step size choice, and a time-uniform concentration result that assumes no a-priori bounds on SGD iterates.
UR - http://www.scopus.com/inward/record.url?scp=85164691121&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85164691121
SN - 2640-3498
VL - 178
SP - 2360
EP - 2389
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 35th Conference on Learning Theory, COLT 2022
Y2 - 2 July 2022 through 5 July 2022
ER -