Online Markov Decision Processes with Aggregate Bandit Feedback

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We study a novel variant of online finite-horizon Markov Decision Processes with adversarially changing loss functions and initially unknown dynamics. In each episode, the learner suffers the loss accumulated along the trajectory realized by the policy chosen for the episode, and observes aggregate bandit feedback: the trajectory is revealed along with the cumulative loss suffered, rather than the individual losses encountered along the trajectory. Our main result is a computationally efficient algorithm with O(√K) regret for this setting, where K is the number of episodes. We establish this result via an efficient reduction to a novel bandit learning setting we call Distorted Linear Bandits (DLB), a variant of bandit linear optimization in which the actions chosen by the learner are adversarially distorted before they are committed. We then develop a computationally efficient online algorithm for DLB for which we prove an O(√T) regret bound, where T is the number of time steps. Our algorithm is based on online mirror descent with self-concordant barrier regularization that employs a novel increasing learning rate schedule.
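The paper's DLB algorithm itself is not reproduced here, but the general template it builds on can be illustrated concretely. The following Python snippet is a minimal sketch of online mirror descent with a log-barrier regularizer (the standard self-concordant barrier for the simplex) together with a simplified increasing-learning-rate rule. The function names and constants (bregman_project_simplex, eta0, kappa, threshold) are illustrative assumptions, not taken from the paper, and the simplex is used as a stand-in decision set.

```python
import numpy as np

def bregman_project_simplex(w, iters=80):
    """Bregman projection onto the simplex under R(x) = -sum_i log(x_i).

    Solves sum_i 1/(w_i + mu) = 1 for the multiplier mu by bisection;
    the projected point is z_i = 1/(w_i + mu).
    """
    lo = -w.min() + 1e-12          # sum_i 1/(w_i + lo) -> +inf
    hi = len(w) - w.min()          # sum_i 1/(w_i + hi) <= 1
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.sum(1.0 / (w + mu)) > 1.0:
            lo = mu
        else:
            hi = mu
    z = 1.0 / (w + 0.5 * (lo + hi))
    return z / z.sum()             # clean up residual bisection error

def omd_log_barrier(loss_vectors, eta0=0.1, kappa=1.1, threshold=1e-3):
    """Log-barrier OMD over the simplex on a sequence of loss vectors.

    eta0, kappa and threshold are illustrative constants, not the tuned
    values from the paper.
    """
    d = loss_vectors.shape[1]
    x = np.full(d, 1.0 / d)        # start at the simplex center
    eta = np.full(d, eta0)         # per-coordinate learning rates
    iterates = []
    for g in loss_vectors:
        iterates.append(x.copy())
        # Mirror step: grad R(x) = -1/x, so 1/x'_i = 1/x_i + eta_i g_i,
        # followed by a Bregman projection back onto the simplex.
        x = bregman_project_simplex(1.0 / x + eta * g)
        # Simplified increasing-learning-rate rule: raise the rate of any
        # coordinate whose weight has become tiny, so the iterate can move
        # away from the boundary quickly if that coordinate turns good.
        eta[x < threshold] *= kappa
    return np.array(iterates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    losses = rng.uniform(0.0, 1.0, size=(1000, 5))
    losses[:, 2] -= 0.5            # coordinate 2 is best on average
    final = omd_log_barrier(losses)[-1]
    print(final)                   # mass should concentrate on coordinate 2
```

The rate-increase rule here is a caricature of the schedule analyzed in the paper: since the log-barrier slows movement near the boundary, selectively raising a coordinate's learning rate when its weight collapses is what lets the analysis control the regret against a distorted action sequence.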
Original language: English
Title of host publication: Proceedings of Thirty Fourth Conference on Learning Theory
Editors: Mikhail Belkin, Samory Kpotufe
Publisher: PMLR
Pages: 1301-1329
Number of pages: 29
State: Published - 2021
Event: 34th Annual Conference on Learning Theory, COLT 2021 - Boulder, United States
Duration: 15 Aug 2021 → 19 Aug 2021
Conference number: 34

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 134
ISSN (Electronic): 2640-3498

Conference

Conference: 34th Annual Conference on Learning Theory, COLT 2021
Abbreviated title: COLT 2021
Country/Territory: United States
City: Boulder
Period: 15/08/21 → 19/08/21
