One for All and All for One: Distributed Learning of Fair Allocations with Multi-Player Bandits

Ilai Bistritz*, Tavor Z. Baharav, Amir Leshem, Nicholas Bambos

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Consider N cooperative but non-communicating players, each of whom plays one out of M arms for T turns. Players have different utilities for each arm, represented as an N × M matrix. These utilities are unknown to the players. In each turn, every player selects an arm and receives a noisy observation of their utility for it. However, if any other player selected the same arm in that turn, all colliding players receive zero utility due to the conflict. No communication between the players is possible. We propose two distributed algorithms that learn fair matchings between players and arms while minimizing the regret. We show that our first algorithm learns a max-min fairness matching with near-O(log T) regret (up to a log log T factor). If, instead, one has a known target Quality of Service (QoS) (which may vary between players), then we show that our second algorithm learns a matching in which all players obtain an expected reward of at least their QoS with constant regret, provided that such a matching exists. In particular, if the max-min value is known, a max-min fairness matching can be learned with O(1) regret.
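The collision model described above can be sketched in a few lines. The snippet below is an illustrative simulation, not the authors' algorithm: the utility matrix `U`, the sizes `N` and `M`, and the Gaussian noise level are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 3, 4                        # players, arms (illustrative sizes)
U = rng.uniform(0.2, 1.0, (N, M))  # unknown utility matrix U[n, m] (assumed)

def play_round(choices, noise_std=0.1):
    """One turn of the collision model: player n plays arm choices[n].

    A player receives a noisy observation of U[n, m] only if no other
    player chose the same arm; colliding players receive zero reward.
    """
    counts = np.bincount(choices, minlength=M)
    rewards = np.zeros(N)
    for n, m in enumerate(choices):
        if counts[m] == 1:  # arm m was chosen by exactly one player
            rewards[n] = U[n, m] + rng.normal(0.0, noise_std)
    return rewards

# Collision-free matching: every player observes a noisy utility.
print(play_round(np.array([0, 1, 2])))
# Players 0 and 1 collide on arm 3: both receive zero; player 2 is served.
print(play_round(np.array([3, 3, 1])))
```

A max-min fair matching in this model is a collision-free assignment of players to arms that maximizes the smallest entry U[n, m] among the matched pairs; the paper's first algorithm learns such a matching from the noisy, collision-censored observations alone.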

Original language: English
Article number: 9404291
Pages (from-to): 584-598
Number of pages: 15
Journal: IEEE Journal on Selected Areas in Information Theory
Issue number: 2
State: Published - Jun 2021
Externally published: Yes


Keywords:

  • Multi-player bandits
  • distributed learning
  • fairness
  • online learning
  • resource allocation


