My Fair Bandit: Distributed Learning of Max-Min Fairness with Multi-player Bandits

Part of the Proceedings of the International Conference on Machine Learning pre-proceedings (ICML 2020)




Ilai Bistritz, Tavor Baharav, Amir Leshem, Nicholas Bambos


<p>Consider N cooperative but non-communicating players, where each plays one of M arms for T turns. Players have different utilities for each arm, representable as an N×M matrix; however, these utilities are unknown to the players. In each turn, each player receives a noisy observation of the utility of its selected arm. However, if any other player selected the same arm that turn, all players on that arm receive zero utility due to the collision. No other communication or coordination between the players is possible. Our goal is to design a distributed algorithm that learns the matching between players and arms that achieves max-min fairness while minimizing the regret. We present an algorithm and prove that it is regret optimal up to a log(log T) factor. This is the first max-min fairness multi-player bandit algorithm with (near) order-optimal regret. </p>
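<p>A minimal sketch of the setting described in the abstract, in Python. The function and variable names (<code>play_round</code>, <code>max_min_matching</code>, the utility matrix <code>U</code>) are illustrative assumptions, not from the paper: <code>play_round</code> implements the collision reward model (players on a contested arm get zero, others get a noisy sample of their utility), and <code>max_min_matching</code> brute-forces the max-min fair matching that the paper's algorithm must learn online without such centralized knowledge.</p>

```python
import itertools
import random


def play_round(U, choices, noise_std=0.1, rng=None):
    """Simulate one turn of the game.

    U        : N x M utility matrix (list of lists), unknown to players.
    choices  : choices[n] is the arm picked by player n this turn.
    Returns the list of rewards: zero on collision, otherwise a noisy
    observation of the player's true utility for its chosen arm.
    """
    rng = rng or random.Random(0)
    counts = {}
    for a in choices:
        counts[a] = counts.get(a, 0) + 1
    rewards = []
    for n, a in enumerate(choices):
        if counts[a] > 1:
            rewards.append(0.0)  # collision: every player on this arm gets zero
        else:
            rewards.append(U[n][a] + rng.gauss(0.0, noise_std))
    return rewards


def max_min_matching(U):
    """Brute-force the matching maximizing the minimum player utility.

    Feasible only for small N (it enumerates all arm assignments);
    the paper's distributed algorithm learns this matching online.
    """
    N, M = len(U), len(U[0])
    best_val, best_arms = -float("inf"), None
    for arms in itertools.permutations(range(M), N):
        val = min(U[n][arms[n]] for n in range(N))
        if val > best_val:
            best_val, best_arms = val, arms
    return best_val, best_arms
```

<p>For example, with <code>U = [[1.0, 0.2], [0.3, 0.9]]</code>, the max-min matching assigns arm 0 to player 0 and arm 1 to player 1, giving a minimum utility of 0.9; assigning both players to arm 0 would yield zero reward for both due to the collision.</p>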