Abstract
We investigate a nonstochastic bandit setting in which the loss of an action is not immediately charged to the player, but rather spread over at most d consecutive steps in an adversarial way. This implies that the instantaneous loss observed by the player at the end of each round is a sum of as many as d loss components of previously played actions. Hence, unlike the standard bandit setting with delayed feedback, here the player cannot observe the individual delayed losses, but only their sum. Our main contribution is a general reduction transforming a standard bandit algorithm into one that can operate in this harder setting. We also show how the regret of the transformed algorithm can be bounded in terms of the regret of the original algorithm. Our reduction cannot be improved in general: we prove a lower bound on the regret of any bandit algorithm in this setting that matches (up to log factors) the upper bound obtained via our reduction. Finally, we show how our reduction can be extended to more complex bandit settings, such as combinatorial linear bandits and online bandit convex optimization.
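The abstract describes the composite anonymous feedback model only in words; the following is a minimal Python sketch of that loss model, not the authors' reduction or code. The number of actions `K`, the spread `d`, the horizon `T`, the Dirichlet split of each loss into delayed components, and the uniform-random placeholder player are all illustrative assumptions. The point it shows is that the player only ever observes the per-round aggregated sum, never the individual delayed loss components.

```python
# Sketch of composite anonymous feedback: the loss of the action played at
# round t is split (adversarially, here at random) into at most d nonnegative
# components charged over rounds t, t+1, ..., t+d-1. The player observes only
# the sum of all components landing on the current round.
import numpy as np

rng = np.random.default_rng(0)

K = 5    # number of actions (illustrative)
d = 3    # maximum spread of each loss (illustrative)
T = 20   # horizon (illustrative)

# loss[t, a]: total loss of action a at round t, chosen obliviously in advance.
loss = rng.uniform(size=(T, K))

# split[t, a, s]: the part of loss[t, a] charged at round t + s (components sum to loss[t, a]).
weights = rng.dirichlet(np.ones(d), size=(T, K))   # shape (T, K, d)
split = loss[:, :, None] * weights                 # shape (T, K, d)

pending = np.zeros(T + d)   # pending[t] accumulates components charged to round t
observed = []

for t in range(T):
    a = rng.integers(K)                 # placeholder player: uniform random action
    pending[t:t + d] += split[t, a]     # spread this round's loss over the next d rounds
    observed.append(pending[t])         # feedback: only the aggregated sum is revealed

print("observed composite losses:", np.round(observed, 3))
print("total observed loss:", round(sum(observed), 3))
```

Note that with d = 1 this reduces to the standard bandit feedback model, while for d > 1 the observed value at round t mixes components of up to d previously played actions, which is what makes the individual delayed losses unrecoverable.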
Original language | English |
---|---|
Pages (from-to) | 750-773 |
Number of pages | 24 |
Journal | Proceedings of Machine Learning Research |
Volume | 75 |
State | Published - 2018 |
Event | 31st Annual Conference on Learning Theory, COLT 2018 - Stockholm, Sweden, 6 Jul 2018 – 9 Jul 2018 |
Funding
Funders | Funder number |
---|---|
Israel Science Foundation | |
Keywords
- Nonstochastic bandits
- bandit convex optimization
- composite losses
- delayed feedback