Efficient Rate Optimal Regret for Adversarial Contextual MDPs Using Online Function Approximation

Orin Levy*, Alon Cohen*, Asaf Cassel*, Yishay Mansour*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

We present the OMG-CMDP! algorithm for regret minimization in adversarial Contextual MDPs. The algorithm operates under the minimal assumptions of a realizable function class and access to online least-squares and log-loss regression oracles. Our algorithm is efficient (assuming efficient online regression oracles), simple, and robust to approximation errors. It enjoys an $\widetilde{O}\big(H^{2.5}\sqrt{T|S||A|\,(\mathcal{R}_{TH}(\mathcal{O}) + H\log(\delta^{-1}))}\big)$ regret guarantee, with $T$ being the number of episodes, $S$ the state space, $A$ the action space, $H$ the horizon, and $\mathcal{R}_{TH}(\mathcal{O}) = \mathcal{R}_{TH}(\mathcal{O}^{\mathcal{F}}_{\mathrm{sq}}) + \mathcal{R}_{TH}(\mathcal{O}^{\mathcal{P}}_{\mathrm{log}})$ the sum of the regrets of the square-loss and log-loss regression oracles, used to approximate the context-dependent rewards and dynamics, respectively. To the best of our knowledge, our algorithm is the first efficient, rate-optimal regret minimization algorithm for adversarial CMDPs that operates under the minimal standard assumption of online function approximation.
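A compact display-form restatement of the regret bound quoted above may help; notation follows the abstract, logarithmic factors are hidden by $\widetilde{O}$, and the left-hand-side name for the cumulative regret over $T$ episodes is introduced here only for readability:

\[
\mathrm{Regret}(T) \;=\; \widetilde{O}\!\left(H^{2.5}\sqrt{T\,|S|\,|A|\left(\mathcal{R}_{TH}(\mathcal{O}) + H\log(\delta^{-1})\right)}\right),
\qquad
\mathcal{R}_{TH}(\mathcal{O}) \;=\; \mathcal{R}_{TH}\big(\mathcal{O}^{\mathcal{F}}_{\mathrm{sq}}\big) + \mathcal{R}_{TH}\big(\mathcal{O}^{\mathcal{P}}_{\mathrm{log}}\big).
\]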

Original language: English
Pages (from-to): 19287-19314
Number of pages: 28
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

Funding

Funders (funder numbers in parentheses where available):
Yandex Initiative for Machine Learning
Horizon 2020 Framework Programme
Blavatnik Family Foundation
European Research Council
Israel Science Foundation (2250/22)
Tel Aviv University
Horizon 2020 (2549/19, 882396, 993/17)
