Rate-Optimal Policy Optimization for Linear Markov Decision Processes

Uri Sherman*, Alon Cohen, Tomer Koren, Yishay Mansour

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study regret minimization in online episodic linear Markov Decision Processes, and propose a policy optimization algorithm that is computationally efficient and obtains rate-optimal Õ(√K) regret, where K denotes the number of episodes. Our work is the first to establish the optimal rate (in terms of K) of convergence in the stochastic setting with bandit feedback using a policy-optimization-based approach, and the first to establish the optimal rate in the adversarial setup with full-information feedback, for which no algorithm with an optimal rate guarantee was previously known.
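For context, a minimal sketch of the regret notion referenced in the abstract, written in standard notation that is not taken verbatim from the paper: with K episodes, initial states x_1^k, policies \pi_k chosen by the learner, and V_1^* the optimal value function,

    \mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \Big( V_1^{*}(x_1^k) \;-\; V_1^{\pi_k}(x_1^k) \Big),

and a rate-optimal guarantee in this sense means \mathrm{Regret}(K) = \widetilde{O}(\sqrt{K}), i.e., optimal dependence on K up to polylogarithmic and problem-dependent factors.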

Original language: English
Pages (from-to): 44815-44837
Number of pages: 23
Journal: Proceedings of Machine Learning Research
Volume: 235
State: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024

Funding

Funders and funder numbers:
Yandex Initiative for Machine Learning
Blavatnik Family Foundation
Aegis Foundation
Tel Aviv University
European Research Council
Horizon 2020: 882396, 101078075
Israel Science Foundation: 2549/19, 2250/22
