Abstract
We present the E-UC3RL algorithm for regret minimization in Stochastic Contextual Markov Decision Processes (CMDPs). The algorithm operates under the minimal assumptions of a realizable function class and access to offline least-squares and log-loss regression oracles. Our algorithm is efficient (assuming efficient offline regression oracles) and enjoys a regret guarantee of Õ(H^3 √(T |S| |A| d_E(P) log(|F| |P| / δ))), where T is the number of episodes, S the state space, A the action space, H the horizon, P and F finite function classes used to approximate the context-dependent dynamics and rewards, respectively, and d_E(P) the Eluder dimension of P w.r.t. the Hellinger distance. To the best of our knowledge, our algorithm is the first efficient and rate-optimal regret minimization algorithm for CMDPs that operates under the general offline function approximation setting. In addition, we extend the Eluder dimension to general bounded metrics, which may be of independent interest.
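For readability, the regret bound quoted above can be typeset as below. This is only a restatement of the abstract's bound using the notation it defines, together with the standard (squared) Hellinger distance for illustration; the exact normalization and constants used in the paper may differ.

```latex
% Restatement of the regret bound stated in the abstract (notation as defined there).
\[
  \mathrm{Regret}(T)
  \;=\;
  \widetilde{O}\!\left(
    H^{3}
    \sqrt{\, T \, |S| \, |A| \, d_E(\mathcal{P}) \,
          \log\!\big( |\mathcal{F}| \, |\mathcal{P}| / \delta \big) }
  \right).
\]
% The Eluder dimension d_E(P) is measured w.r.t. the Hellinger distance.
% For reference, the standard squared Hellinger distance between two
% distributions p, q over the state space S is
\[
  d_H^{2}(p, q)
  \;=\;
  \tfrac{1}{2} \sum_{s \in S} \big( \sqrt{p(s)} - \sqrt{q(s)} \big)^{2}.
\]
```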
Original language | English |
---|---|
Pages (from-to) | 27326-27350 |
Number of pages | 25 |
Journal | Proceedings of Machine Learning Research |
Volume | 235 |
State | Published - 2024 |
Event | 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria |
Event duration | 21 Jul 2024 → 27 Jul 2024 |
Funding
Funders | Funder number |
---|---|
Yandex Initiative for Machine Learning | |
Blavatnik Family Foundation | |
Tel Aviv University | |
European Research Council | |
Horizon 2020 | 882396, 101078075 |
Israel Science Foundation | 2549/19, 993/17, 2250/22 |