Lazy OCO: Online Convex Optimization on a Switching Budget

Uri Sherman, Tomer Koren

Research output: Contribution to journal › Conference article › peer-review

5 Scopus citations

Abstract

We study a variant of online convex optimization where the player is permitted to switch decisions at most S times in expectation throughout T rounds. Similar problems have been addressed in prior work for the discrete decision set setting, and more recently in the continuous setting, but only with an adaptive adversary. In this work, we aim to fill the gap and present computationally efficient algorithms in the more prevalent oblivious setting, establishing a regret bound of O(T/S) for general convex losses and Õ(T/S²) for strongly convex losses. In addition, for stochastic i.i.d. losses, we present a simple algorithm that performs log T switches with only a multiplicative log T factor overhead in its regret, in both the general and strongly convex settings. Finally, we complement our algorithms with lower bounds that match our upper bounds in some of the cases we consider.
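
To make the switching-budget constraint concrete, the sketch below shows a blocked ("lazy") online gradient descent baseline: the T rounds are grouped into S blocks and the decision is updated only at block boundaries, so at most S switches occur. This is the standard mini-batching baseline for switching-constrained OCO, not necessarily the algorithm analyzed in this paper; the decision set (a Euclidean ball), the step size eta, and the helper names project_to_ball and loss_grads are illustrative assumptions.

```python
import numpy as np

def project_to_ball(x, radius=1.0):
    # Euclidean projection onto a ball of the given radius
    # (an example bounded decision set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def blocked_ogd(loss_grads, T, S, dim, eta):
    """Play T rounds while switching the decision at most S times.

    loss_grads: callable(t, x) -> gradient of the round-t loss at x
                (assumed to be provided by the environment).
    Returns the list of points played, one per round.
    """
    x = np.zeros(dim)                 # current (lazy) decision
    block_len = int(np.ceil(T / S))   # rounds per block
    grad_sum = np.zeros(dim)          # feedback accumulated within the block
    plays = []
    for t in range(T):
        plays.append(x.copy())        # keep playing the same point inside a block
        grad_sum += loss_grads(t, x)
        if (t + 1) % block_len == 0:  # block boundary: a switch is allowed
            x = project_to_ball(x - eta * grad_sum)
            grad_sum = np.zeros(dim)
    return plays
```

For instance, with linear losses f_t(x) = <g_t, x> one would pass loss_grads = lambda t, x: g[t]; against an adaptive adversary this blocked baseline is known to attain regret on the order of T/√S, which is the benchmark the oblivious-adversary bounds above improve upon.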

Original language: English
Pages (from-to): 3972-3988
Number of pages: 17
Journal: Proceedings of Machine Learning Research
Volume: 134
State: Published - 2021
Event: 34th Conference on Learning Theory, COLT 2021 - Boulder, United States
Duration: 15 Aug 2021 - 19 Aug 2021

Funding

Funders (funder number):
Yandex Initiative in Machine Learning
Blavatnik Family Foundation
Israel Science Foundation (2549/19)
Tel Aviv University
