Equilibrium Bandits: Learning Optimal Equilibria of Unknown Dynamics

Siddharth Chandak, Ilai Bistritz, Nicholas Bambos

Research output: Contribution to journal › Conference article › peer-review


Consider a decision-maker that can pick one of K actions to control an unknown system over T turns. The actions are interpreted as different configurations or policies. Holding the same action fixed, the system asymptotically converges to a unique equilibrium that is a function of this action. The dynamics of the system are unknown to the decision-maker, who can only observe a noisy reward at the end of every turn. The decision-maker wants to maximize its accumulated reward over the T turns. Learning which equilibria are better yields higher rewards, but waiting for the system to converge to equilibrium costs valuable time. Existing bandit algorithms, whether stochastic or adversarial, achieve linear (trivial) regret for this problem. We present a novel algorithm, termed Upper Equilibrium Concentration Bound (UECB), that switches away from an action quickly when it is not worth waiting until the equilibrium is reached. This is enabled by employing "convergence bounds" to determine how far the system is from equilibrium. We prove that UECB achieves a regret of O(log(T) + τc log(τc) + τc log log(T)) for this "equilibrium bandit problem", where τc is the worst-case approximate convergence time to equilibrium. We then show that both epidemic control and game control are special cases of equilibrium bandits, where τc log(τc) typically dominates the regret. Finally, we test UECB numerically for both of these applications.
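The setting described in the abstract can be illustrated with a minimal sketch. The environment below is a hypothetical toy model, not code from the paper: each of K actions has its own equilibrium reward, and while an action is held fixed the system state contracts geometrically toward that action's equilibrium, with only a noisy reward observed each turn. The class name, the scalar state, and the geometric contraction rate are all illustrative assumptions.

```python
import random

class EquilibriumBandit:
    """Toy 'equilibrium bandit' environment (illustrative sketch only).

    While an action is held fixed, the state relaxes geometrically toward
    that action's equilibrium; switching actions restarts convergence from
    the current state, so impatient switching forfeits equilibrium value.
    """

    def __init__(self, equilibria, rate=0.5, noise=0.1, seed=0):
        self.equilibria = equilibria  # equilibrium reward of each action
        self.rate = rate              # geometric convergence rate (assumed)
        self.noise = noise            # std of the observation noise
        self.state = 0.0              # current system state (scalar, assumed)
        self.rng = random.Random(seed)

    def step(self, action):
        # Holding `action`, the state contracts toward its equilibrium.
        target = self.equilibria[action]
        self.state += self.rate * (target - self.state)
        # The decision-maker only ever sees a noisy reward.
        return self.state + self.rng.gauss(0.0, self.noise)

# Holding action 1 fixed, the observed rewards drift toward its
# equilibrium value of 1.0 (up to noise).
env = EquilibriumBandit(equilibria=[0.2, 1.0, 0.6])
rewards = [env.step(1) for _ in range(20)]
```

This captures the core trade-off the paper studies: sampling an action briefly gives a reward far from its equilibrium value, so the learner must decide whether waiting for convergence is worth the turns it costs.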

Original language: English
Pages (from-to): 1336-1344
Number of pages: 9
Journal: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
State: Published - 2023
Externally published: Yes
Event: 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023 - London, United Kingdom
Duration: 29 May 2023 – 2 Jun 2023


  • game theory
  • multiagent systems
  • online learning

