Reinforcement Learning Can Be More Efficient with Multiple Rewards

Christoph Dann*, Yishay Mansour, Mehryar Mohri

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Peer-review


Abstract

Reward design is one of the most critical and challenging aspects of formulating a task as a reinforcement learning (RL) problem. In practice, it often takes several attempts at specifying a reward and learning with it before finding one that leads to sample-efficient learning of the desired behavior. Instead, in this work, we study whether directly incorporating multiple alternative reward formulations of the same task in a single agent can lead to faster learning. We analyze multi-reward extensions of action-elimination algorithms and prove more favorable instance-dependent regret bounds compared to their single-reward counterparts, both in multi-armed bandits and in tabular Markov decision processes. Our bounds scale for each state-action pair with the inverse of the largest gap among all reward functions. This suggests that learning with multiple rewards can indeed be more sample-efficient, as long as the rewards agree on an optimal policy. We further prove that when rewards do not agree, multi-reward action elimination in multi-armed bandits still learns a policy that is good across all reward functions.
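The abstract describes the algorithmic idea only at a high level. As a rough illustration of multi-reward action elimination in the bandit setting, the following is a minimal Python sketch, assuming a hypothetical pull(a) interface that returns one bounded sample per reward function; the Hoeffding-style confidence bonus is a generic choice and not necessarily the one analyzed in the paper. The key point is that an arm is discarded as soon as it is confidently suboptimal under any single reward, so the largest gap across rewards governs how long that arm must be sampled.

```python
import numpy as np

def multi_reward_action_elimination(pull, n_arms, n_rewards, horizon, delta=0.05):
    """Sketch of action elimination with multiple reward signals.

    `pull(a)` is a hypothetical interface assumed to return a
    length-`n_rewards` vector of rewards in [0, 1] for arm `a`.
    An arm is eliminated once it is confidently suboptimal under
    ANY reward function, so its largest gap across rewards drives
    how quickly it is removed.
    """
    active = list(range(n_arms))
    sums = np.zeros((n_rewards, n_arms))
    counts = np.zeros(n_arms)

    t = 0
    while t < horizon and len(active) > 1:
        # Pull every active arm once per phase (uniform exploration).
        for a in active:
            sums[:, a] += pull(a)
            counts[a] += 1
            t += 1

        means = sums[:, active] / counts[active]   # shape (n_rewards, |active|)
        # Generic Hoeffding-style confidence radius (an assumption, not
        # the paper's exact bonus).
        radius = np.sqrt(
            np.log(4 * n_arms * n_rewards * counts[active] ** 2 / delta)
            / (2 * counts[active])
        )
        ucb = means + radius
        lcb = means - radius

        # Keep an arm only if, under every reward, no rival arm is
        # confidently better; with valid confidence intervals an arm
        # that all rewards rank optimal survives with high probability.
        keep = []
        for i, a in enumerate(active):
            dominated = any(lcb[j].max() > ucb[j, i] for j in range(n_rewards))
            if not dominated:
                keep.append(a)
        active = keep

    return active  # surviving arm(s); unique if all rewards agree on an optimum
```

Because elimination fires on the first reward that separates an arm from the leader, the per-arm sample cost in this sketch matches the abstract's claim of scaling with the inverse of the largest gap among all reward functions, provided the rewards agree on an optimal arm.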

Original language: English
Pages (from-to): 6948-6967
Number of pages: 20
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

Funding

Funders (funder number where available):
Yandex Initiative for Machine Learning
Horizon 2020 Framework Programme
European Commission
Israel Science Foundation (993/17)
Tel Aviv University
Horizon 2020 (882396)
