Unknown mixing times in apprenticeship and reinforcement learning

Research output: Contribution to journal › Conference article › peer-review


Abstract

We derive and analyze learning algorithms for apprenticeship learning, policy evaluation, and policy gradient for average reward criteria. Existing algorithms explicitly require an upper bound on the mixing time. In contrast, we build on ideas from Markov chain theory and derive sampling algorithms that do not require such an upper bound. For these algorithms, we provide theoretical bounds on their sample complexity and running time.
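To make the mixing-time dependence concrete, here is a minimal sketch (not the paper's algorithm) of average-reward policy evaluation on a tabular Markov chain, where the rollout length is chosen from an assumed upper bound on the mixing time. The function name, the horizon rule, and the inputs P, r, and t_mix_upper are hypothetical illustrations of the kind of prior knowledge that the algorithms in this paper avoid.

```python
import numpy as np

def estimate_average_reward(P, r, t_mix_upper, eps=0.1, seed=0):
    """Monte Carlo estimate of the average reward of a fixed policy.

    P           : (n, n) transition matrix induced by the policy (hypothetical input).
    r           : (n,) reward for each state.
    t_mix_upper : assumed upper bound on the chain's mixing time; the rollout
                  length is scaled by it, which is exactly the prior knowledge
                  that the paper's sampling algorithms do not require.
    """
    rng = np.random.default_rng(seed)
    n = len(r)
    # Run the chain long enough, relative to the assumed mixing bound,
    # that the empirical state distribution is close to stationary.
    horizon = int(np.ceil(t_mix_upper * np.log(1.0 / eps))) + t_mix_upper
    s = rng.integers(n)
    total = 0.0
    for _ in range(horizon):
        total += r[s]
        s = rng.choice(n, p=P[s])
    return total / horizon
```

The estimate above is only trustworthy when t_mix_upper really does bound the true mixing time; the contribution described in the abstract is a set of sampling schemes whose guarantees do not rely on such a bound being known in advance.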

Original language: English
Pages (from-to): 430-439
Number of pages: 10
Journal: Proceedings of Machine Learning Research
Volume: 124
State: Published - 2020
Event: 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020 - Virtual, Online
Duration: 3 Aug 2020 - 6 Aug 2020

Funding

Funder: Israel Science Foundation
