We study reinforcement learning with linear function approximation and adversarially changing cost functions, a setup that has mostly been considered under simplifying assumptions such as full-information feedback or exploratory conditions. We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback, featuring a combination of mirror descent and least-squares policy evaluation in an auxiliary MDP used to compute exploration bonuses. Our algorithm obtains an Õ(K^(6/7)) regret bound, improving significantly over the previous state of the art of Õ(K^(14/15)) in this setting. In addition, we present a version of the same algorithm under the assumption that a simulator of the environment is available to the learner (but otherwise no exploratory assumptions are made), and prove that it obtains a state-of-the-art regret of Õ(K^(2/3)).
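The mirror-descent component mentioned above can be illustrated with a minimal sketch. Assuming the standard entropy-regularized (multiplicative-weights) form of the update, pi_{t+1}(a|s) ∝ pi_t(a|s) · exp(−η · Q̂_t(s,a)), one step over per-state action distributions might look as follows; the function name, array shapes, and toy Q-estimate are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mirror_descent_update(pi, q_hat, eta):
    """One hypothetical mirror-descent step over per-state action distributions.

    pi:    (S, A) array, rows are current policies pi_t(.|s)
    q_hat: (S, A) array, estimated action-value (cost) function
    eta:   step size
    """
    # Multiplicative-weights form: new weights proportional to
    # pi_t(a|s) * exp(-eta * q_hat(s, a)), computed in log-space.
    logits = np.log(pi) - eta * q_hat
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

# Toy usage: 3 states, 2 actions, uniform initial policy.
pi0 = np.full((3, 2), 0.5)
q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
pi1 = mirror_descent_update(pi0, q, eta=0.1)
```

After the update, each row of `pi1` remains a valid probability distribution, with mass shifted toward lower-cost actions.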
|Proceedings of Machine Learning Research
|Published - 2023
|40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 → 29 Jul 2023