Online learning with feedback graphs: Beyond bandits

Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel, Tomer Koren

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


We study a general class of online learning problems where the feedback is specified by a graph. This class includes online prediction with expert advice and the multi-armed bandit problem, but also several learning problems where the online player does not necessarily observe his own loss. We analyze how the structure of the feedback graph controls the inherent difficulty of the induced T-round learning problem. Specifically, we show that any feedback graph belongs to one of three classes: strongly observable graphs, weakly observable graphs, and unobservable graphs. We prove that the first class induces learning problems with Θ̃(α^(1/2) T^(1/2)) minimax regret, where α is the independence number of the underlying graph; the second class induces problems with Θ̃(δ^(1/3) T^(2/3)) minimax regret, where δ is the domination number of a certain portion of the graph; and the third class induces problems with linear minimax regret. Our results subsume much of the previous work on learning with feedback graphs and reveal new connections to partial monitoring games. We also show how the regret is affected if the graphs are allowed to vary with time.
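The three-way taxonomy in the abstract can be checked mechanically from the graph alone. The sketch below (not code from the paper; function name and graph encoding are our own choices) classifies a directed feedback graph, using the paper's definitions: a vertex is observable if it has at least one in-neighbor (possibly itself via a self-loop), and strongly observable if it has a self-loop or is observed by every other vertex. The graph is strongly observable if all vertices are, unobservable if some vertex is not observable, and weakly observable otherwise.

```python
# Sketch, assuming a feedback graph encoded as {vertex: set of out-neighbors},
# where an edge u -> v means that playing action u reveals the loss of action v.

def classify(graph):
    vertices = set(graph)
    # A vertex with no in-neighbors is never observed: unobservable graph.
    for v in vertices:
        in_nbrs = {u for u in vertices if v in graph[u]}
        if not in_nbrs:
            return "unobservable"
    # Every vertex is observable; check whether each is strongly observable,
    # i.e. has a self-loop or an incoming edge from every other vertex.
    for v in vertices:
        has_self_loop = v in graph[v]
        observed_by_all_others = all(v in graph[u] for u in vertices - {v})
        if not (has_self_loop or observed_by_all_others):
            return "weakly observable"
    return "strongly observable"

# Two classic special cases mentioned in the abstract:
experts = {i: {0, 1, 2} for i in range(3)}  # full feedback (expert advice)
bandit = {i: {i} for i in range(3)}         # self-loops only (multi-armed bandit)
```

Both the experts graph and the bandit graph are strongly observable; they differ in their independence number α (1 for the complete graph, n for the bandit graph), which is exactly what separates their √T-regret constants under the paper's bound.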

Original language: English
Title of host publication: Proceedings of The 28th Conference on Learning Theory
Editors: Peter Grünwald, Elad Hazan, Satyen Kale
Number of pages: 13
State: Published - 2015
Event: 28th Conference on Learning Theory, COLT 2015 - Paris, France
Duration: 2 Jul 2015 - 6 Jul 2015

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: 28th Conference on Learning Theory, COLT 2015

