Optimistic policy optimization with bandit feedback

Yonathan Efroni*, Lior Shani, Aviv Rosenberg, Shie Mannor

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    Policy optimization methods are one of the most widely used classes of Reinforcement Learning (RL) algorithms. Yet, so far, such methods have been mostly analyzed from an optimization perspective, without addressing the problem of exploration, or by making strong assumptions on the interaction with the environment. In this paper we consider model-based RL in the tabular finite-horizon MDP setting with unknown transitions and bandit feedback. For this setting, we propose an optimistic policy optimization algorithm for which we establish $\tilde{O}(\sqrt{S^2 A H^4 K})$ regret for stochastic rewards. Furthermore, we prove $\tilde{O}(\sqrt{S^2 A H^4}\, K^{2/3})$ regret for adversarial rewards. Interestingly, this result matches previous bounds derived for the bandit feedback case, yet with known transitions. To the best of our knowledge, the two results are the first sub-linear regret bounds obtained for policy optimization algorithms with unknown transitions and bandit feedback.
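
    The abstract describes the algorithmic template only at a high level. The sketch below is a hedged illustration, not the authors' algorithm: a generic optimistic policy-optimization loop for a tabular finite-horizon MDP with bandit feedback, combining empirical transition estimates, a Hoeffding-style exploration bonus, and a softmax (exponentiated-gradient) policy update. All dimensions, the bonus form, and the learning rate `eta` are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy MDP sizes; S states, A actions, horizon H, K episodes.
# Everything below (bonus form, learning rate, constants) is an assumption
# chosen for illustration, not the paper's exact algorithm.
S, A, H, K = 5, 3, 4, 200
rng = np.random.default_rng(0)

# Unknown true dynamics and mean rewards, used only to simulate episodes.
P_true = rng.dirichlet(np.ones(S), size=(S, A))   # P_true[s, a] is a distribution over next states
R_true = rng.uniform(size=(S, A))                  # mean rewards in [0, 1]

# Empirical statistics maintained by the learner.
N_sa = np.ones((S, A))            # visit counts (start at 1 to avoid division by zero)
N_sas = np.ones((S, A, S)) / S    # transition counts, kept consistent with N_sa
R_sum = np.zeros((S, A))          # accumulated observed rewards (bandit feedback only)

# Softmax policy parameters per step h, updated with a mirror-descent-style step.
theta = np.zeros((H, S, A))
eta = 0.1  # assumed learning rate

def policy(h, s):
    z = np.exp(theta[h, s] - theta[h, s].max())
    return z / z.sum()

for k in range(K):
    # 1) Build the empirical model and optimistic bonuses from counts.
    P_hat = N_sas / N_sa[:, :, None]
    R_hat = R_sum / N_sa
    bonus = np.sqrt(1.0 / N_sa)   # assumed Hoeffding-style bonus (constants omitted)

    # 2) Optimistic policy evaluation: backward induction of Q under the current
    #    policy, adding the bonus at every (s, a) and clipping at the horizon H.
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))
    for h in reversed(range(H)):
        Q[h] = np.minimum(R_hat + bonus + P_hat @ V[h + 1], H)
        for s in range(S):
            V[h, s] = policy(h, s) @ Q[h, s]

    # 3) Soft policy improvement: exponentiated-gradient step on the optimistic Q.
    theta += eta * Q

    # 4) Roll out one episode with the updated policy; observe only the reward of
    #    the chosen action (bandit feedback) and update the counts.
    s = rng.integers(S)
    for h in range(H):
        a = rng.choice(A, p=policy(h, s))
        r = float(rng.random() < R_true[s, a])     # Bernoulli reward with mean R_true[s, a]
        s_next = rng.choice(S, p=P_true[s, a])
        N_sa[s, a] += 1
        N_sas[s, a, s_next] += 1
        R_sum[s, a] += r
        s = s_next

print("Most likely action per state at step 0:",
      [int(np.argmax(policy(0, s))) for s in range(S)])
```

    In a regret experiment one would additionally compute the optimal value of the true MDP by backward induction and accumulate, over the K episodes, the gap between it and the value of the executed policy; that bookkeeping is omitted here for brevity.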

    Original language: English
    Title of host publication: 37th International Conference on Machine Learning, ICML 2020
    Editors: Hal Daumé, Aarti Singh
    Publisher: International Machine Learning Society (IMLS)
    Pages: 8562-8571
    Number of pages: 10
    ISBN (Electronic): 9781713821120
    State: Published - 2020
    Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
    Duration: 13 Jul 2020 - 18 Jul 2020

    Publication series

    Name: 37th International Conference on Machine Learning, ICML 2020
    Volume: PartF168147-12

    Conference

    Conference: 37th International Conference on Machine Learning, ICML 2020
    City: Virtual, Online
    Period: 13/07/20 - 18/07/20
