Abstract
We study reinforcement learning with linear function approximation and adversarially changing cost functions, a setting that has mostly been considered under simplifying assumptions such as full-information feedback or exploratory conditions. We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback, featuring a combination of mirror descent and least-squares policy evaluation in an auxiliary MDP used to compute exploration bonuses. Our algorithm obtains an $\widetilde{O}(K^{6/7})$ regret bound, improving significantly over the previous state of the art of $\widetilde{O}(K^{14/15})$ in this setting. In addition, we present a version of the same algorithm under the assumption that a simulator of the environment is available to the learner (but otherwise no exploratory assumptions are made), and prove that it obtains a state-of-the-art regret bound of $\widetilde{O}(K^{2/3})$.
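As a rough illustration of the ingredients named in the abstract (not the paper's actual algorithm), the sketch below combines a ridge-regularized least-squares estimate of a linear Q-function, an elliptical exploration bonus, and a multiplicative-weights mirror-descent policy update. The feature dimension, step size `eta`, bonus scale `beta`, and ridge parameter `lam` are all hypothetical choices made for the example.

```python
import numpy as np

def lsq_weights(phis, costs, lam=1.0):
    """Ridge-regularized least squares: w = (lam*I + Phi^T Phi)^{-1} Phi^T c."""
    d = phis.shape[1]
    cov = lam * np.eye(d) + phis.T @ phis
    return np.linalg.solve(cov, phis.T @ costs), cov

def bonus(phi, cov_inv, beta):
    """Elliptical exploration bonus: beta * sqrt(phi^T cov^{-1} phi)."""
    return beta * np.sqrt(phi @ cov_inv @ phi)

def mirror_descent_step(pi, q_hat, b, eta):
    """Entropy-regularized mirror descent (multiplicative weights):
    pi'(a) proportional to pi(a) * exp(-eta * (q_hat(a) - b(a)))."""
    logits = np.log(pi) - eta * (q_hat - b)
    logits -= logits.max()            # shift for numerical stability
    pi_new = np.exp(logits)
    return pi_new / pi_new.sum()

# Toy usage at a single state with 3 actions and d = 4 features.
rng = np.random.default_rng(0)
phis = rng.normal(size=(50, 4))       # features of visited (s, a) pairs
costs = rng.uniform(size=50)          # observed (bandit) episode costs
w, cov = lsq_weights(phis, costs)
cov_inv = np.linalg.inv(cov)
phi_sa = rng.normal(size=(3, 4))      # features of the 3 candidate actions
q_hat = phi_sa @ w                    # linear Q-value estimates
b = np.array([bonus(p, cov_inv, 0.1) for p in phi_sa])
pi = mirror_descent_step(np.full(3, 1 / 3), q_hat, b, eta=0.5)
```

The bonus subtracted from the cost estimate drives the policy toward directions of feature space that the least-squares covariance has not yet covered, which is the standard way such bonuses induce exploration under linear function approximation.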
Original language | English
---|---
Pages (from-to) | 31117–31150
Number of pages | 34
Journal | Proceedings of Machine Learning Research
Volume | 202
State | Published - 2023
Event | 40th International Conference on Machine Learning, ICML 2023, Honolulu, United States, 23 Jul 2023 – 29 Jul 2023
Funding
Funders | Funder number
---|---
Yandex Initiative in Machine Learning |
Horizon 2020 Framework Programme |
Blavatnik Family Foundation |
European Research Council |
Israel Science Foundation | 2549/19, 993/17
Tel Aviv University |
Horizon 2020 | 882396