Reinforcement learning in POMDPs without resets

Eyal Even-Dar, Sham M. Kakade, Yishay Mansour

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider the most realistic reinforcement learning setting in which an agent starts in an unknown environment (the POMDP) and must follow one continuous and uninterrupted chain of experience with no access to "resets" or "offline" simulation. We provide algorithms for general connected POMDPs that obtain near-optimal average reward. One algorithm we present has a convergence rate which depends exponentially on a certain horizon time of an optimal policy, but has no dependence on the number of (unobservable) states. The main building block of our algorithms is an implementation of an approximate reset strategy, which we show always exists in every POMDP. An interesting aspect of our algorithms is how they use this strategy when balancing exploration and exploitation.
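The abstract only sketches the approach, so the following is a minimal, purely illustrative Python sketch of the high-level loop it describes: alternate exploration and exploitation phases, and between phases run an approximate reset ("homing") strategy to drive the unknown belief state back toward a fixed reference distribution. The toy environment, the policy class, and all names (`ToyPOMDP`, `homing_strategy`, `run_policy`) are assumptions for illustration, not the paper's algorithm or code; the paper constructs the reset strategy and the exploration/exploitation schedule with specific guarantees that this sketch does not reproduce.

```python
import random

# Illustrative sketch only: names and the toy environment below are
# assumptions, not taken from the paper.

class ToyPOMDP:
    """A tiny two-state POMDP with noisy observations (illustrative only)."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # Action 1 tends to flip the hidden state, action 0 tends to keep it.
        if random.random() < (0.9 if action == 1 else 0.1):
            self.state = 1 - self.state
        obs = self.state if random.random() < 0.8 else 1 - self.state
        reward = 1.0 if self.state == 1 else 0.0
        return obs, reward

def homing_strategy(env, k):
    """Approximate reset: repeat a fixed mixing policy k times.

    Repeating the homing strategy contracts the belief toward a fixed
    reference distribution, so k controls the reset accuracy.
    """
    for _ in range(k):
        env.step(random.choice([0, 1]))

def run_policy(env, policy, horizon):
    """Execute a (history-dependent) policy for `horizon` steps; return total reward."""
    total, history = 0.0, []
    for _ in range(horizon):
        obs, reward = env.step(policy(tuple(history)))
        history.append(obs)
        total += reward
    return total

# Main loop: interleave exploration (estimating policy values from an
# approximately reset belief) with exploitation of the best estimate so far.
env = ToyPOMDP()
candidates = [lambda h: 0, lambda h: 1]          # toy policy class
estimates = [0.0] * len(candidates)
counts = [0] * len(candidates)
for phase in range(200):
    if phase % 2 == 0:                           # exploration phase
        i = random.randrange(len(candidates))
        r = run_policy(env, candidates[i], horizon=5)
        counts[i] += 1
        estimates[i] += (r - estimates[i]) / counts[i]
    else:                                        # exploitation phase
        best = max(range(len(candidates)), key=lambda i: estimates[i])
        run_policy(env, candidates[best], horizon=5)
    homing_strategy(env, k=4)                    # approximate reset between phases
print("estimated phase rewards per policy:", estimates)
```

The point of the interleaving is that each exploration measurement starts from an approximately reset belief, so phase rewards are comparable across phases even though the chain of experience is never truly interrupted.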

Original language: English
Pages (from-to): 690-695
Number of pages: 6
Journal: IJCAI International Joint Conference on Artificial Intelligence
State: Published - 2005
Event: 19th International Joint Conference on Artificial Intelligence, IJCAI 2005 - Edinburgh, United Kingdom
Duration: 30 Jul 2005 - 5 Aug 2005
