Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games

Steve Hanneke, Roi Livni, Shay Moran

Research output: Contribution to journal › Conference article › peer-review


Abstract

Which classes can be learned properly in the online model — that is, by an algorithm that on each round predicts with a hypothesis from the concept class? While there are simple and natural cases where improper learning is useful and even necessary, it is natural to ask how complex the improper predictors must be in such cases. Can one always achieve nearly optimal mistake/regret bounds using "simple" predictors?

In this work, we give a complete characterization of when this is possible, thus settling an open problem which has been studied since the pioneering works of Angluin (1987) and Littlestone (1988). More precisely, given any concept class C and any hypothesis class H, we provide nearly tight bounds (up to a log factor) on the optimal mistake bounds for online learning C using predictors from H. Our bound yields an exponential improvement over the previously best known bound by Chase and Freitag (2020).

As applications, we give constructive proofs showing that (i) in the realizable setting, a near-optimal mistake bound (up to a constant factor) can be attained by a sparse majority-vote of proper predictors, and (ii) in the agnostic setting, a near-optimal regret bound (up to a log factor) can be attained by a randomized proper algorithm. The latter was proven non-constructively by Rakhlin, Sridharan, and Tewari (2015). It was also achieved by constructive but improper algorithms proposed by Ben-David, Pal, and Shalev-Shwartz (2009) and Rakhlin, Shamir, and Sridharan (2012).

A technical ingredient of our proof which may be of independent interest is a generalization of the celebrated Minimax Theorem (von Neumann, 1928) for binary zero-sum games with arbitrary action sets. A simple game which fails to satisfy Minimax is "Guess the Larger Number": each player picks a natural number, and the player who picked the larger number wins. Equivalently, the payoff matrix of this game is an infinite triangular matrix. We show that this is the only obstruction: if the payoff matrix does not contain triangular submatrices of unbounded sizes, then the Minimax Theorem is satisfied. This generalizes von Neumann's Minimax Theorem by removing requirements of finiteness (or compactness) of the action sets, and moreover it captures precisely the types of games of interest in online learning: namely, Littlestone games.
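The failure of minimax in "Guess the Larger Number" can be sketched concretely. The snippet below (an illustrative sketch, not code from the paper; the function names are invented) shows that against any finitely supported mixed strategy of the opponent, a player can simply pick a number beyond the opponent's support and win with probability 1, so neither player has a value-achieving strategy:

```python
def payoff(i, j):
    """Payoff to player 1 in 'Guess the Larger Number':
    1 if player 1's number i exceeds player 2's number j, else 0."""
    return 1 if i > j else 0

def best_response_payoff(opponent_strategy):
    """Given any finitely supported mixed strategy for player 2
    (a dict mapping natural numbers to probabilities), player 1
    responds with a number larger than the whole support and wins
    with probability 1."""
    response = max(opponent_strategy) + 1
    return sum(p * payoff(response, j) for j, p in opponent_strategy.items())

# Against any fixed finitely supported strategy, the best response wins
# almost surely. By symmetry the same holds for player 2, so
# sup inf E[payoff] = 0 while inf sup E[payoff] = 1: minimax fails.
print(best_response_payoff({0: 0.5, 10: 0.3, 42: 0.2}))  # 1.0
```

The payoff matrix here is exactly the infinite triangular matrix mentioned in the abstract, whose unbounded triangular submatrices are the obstruction the paper's theorem rules out.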

Original language: English
Pages (from-to): 2289-2314
Number of pages: 26
Journal: Proceedings of Machine Learning Research
Volume: 134
State: Published - 2021
Event: 34th Conference on Learning Theory, COLT 2021 - Boulder, United States
Duration: 15 Aug 2021 - 19 Aug 2021

Keywords

  • Equivalence Queries
  • Littlestone Dimension
  • Minimax Theorem
  • Mistake Bound
  • Online Learning
  • VC Dimension

