TY - CONF
T1 - Best-of-All-Worlds Bounds for Online Learning with Feedback Graphs
AU - Erez, Liad
AU - Koren, Tomer
N1 - Publisher Copyright:
© 2021 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2021
Y1 - 2021
AB - We study the online learning with feedback graphs framework introduced by Mannor and Shamir [24], in which the feedback received by the online learner is specified by a graph G over the available actions. We develop an algorithm that simultaneously achieves regret bounds of the form: O(√(θ(G)T)) with adversarial losses; O(θ(G) polylog(T)) with stochastic losses; and O(θ(G) polylog(T) + √(θ(G)C)) with stochastic losses subject to C adversarial corruptions. Here, θ(G) is the clique covering number of the graph G. Our algorithm is an instantiation of Follow-the-Regularized-Leader with a novel regularization that can be seen as a product of a Tsallis entropy component (inspired by Zimmert and Seldin [27]) and a Shannon entropy component (analyzed in the corrupted stochastic case by Amir et al. [3]), thus subtly interpolating between the two forms of entropy. One of our key technical contributions is in establishing the convexity of this regularizer and controlling its inverse Hessian, despite its complex product structure.
UR - http://www.scopus.com/inward/record.url?scp=85131835876&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85131835876
T3 - Advances in Neural Information Processing Systems
SP - 28511
EP - 28521
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural Information Processing Systems Foundation
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Y2 - 6 December 2021 through 14 December 2021
ER -