Online stochastic shortest path with bandit feedback and unknown transition function

Aviv Rosenberg, Yishay Mansour

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider online learning in episodic loop-free Markov decision processes (MDPs), where the loss function can change arbitrarily between episodes. The transition function is fixed but unknown to the learner, and the learner only observes bandit feedback (not the entire loss function). For this problem we develop no-regret algorithms that perform asymptotically as well as the best stationary policy in hindsight. Assuming that all states are reachable with probability β > 0 under any policy, we give a regret bound of Õ(L|X|√(|A|T)/β), where T is the number of episodes, X is the state space, A is the action space, and L is the length of each episode. When this assumption is removed we give a regret bound of Õ(L^{3/2}|X||A|^{1/4}T^{3/4}), which holds for an arbitrary transition function. To our knowledge these are the first algorithms in this setting that handle both bandit feedback and an unknown transition function.
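As an illustration (not part of the paper), the two regret bounds can be compared numerically, ignoring logarithmic factors. The helper names and parameter values below are hypothetical, chosen only to show how the bounds scale with T and β:

```python
import math

def regret_reachable(L, X, A, T, beta):
    # Õ(L * |X| * sqrt(|A| * T) / beta): bound under the assumption that
    # every state is reachable with probability beta > 0 under any policy.
    # Log factors are dropped, so this is illustrative only.
    return L * X * math.sqrt(A * T) / beta

def regret_general(L, X, A, T):
    # Õ(L^{3/2} * |X| * |A|^{1/4} * T^{3/4}): bound for an arbitrary
    # transition function, with no reachability assumption.
    return L ** 1.5 * X * A ** 0.25 * T ** 0.75

# Hypothetical small MDP: episode length 5, 20 states, 4 actions.
# The general bound has a worse dependence on T (T^{3/4} vs sqrt(T)),
# so for beta bounded away from zero it eventually dominates.
L, X, A, beta = 5, 20, 4, 0.1
for T in (10**3, 10**6):
    print(T, regret_reachable(L, X, A, T, beta), regret_general(L, X, A, T))
```

The printout makes the trade-off concrete: the β-dependent bound grows like √T while the assumption-free bound grows like T^{3/4}, matching the scaling stated in the abstract.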

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 32
State: Published - 2019
Event: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019, Vancouver, Canada
Duration: 8 Dec 2019 – 14 Dec 2019
