TY - JOUR

T1 - Planning in Hierarchical Reinforcement Learning

T2 - 31st International Conference on Algorithmic Learning Theory, ALT 2020

AU - Zahavy, Tom

AU - Hasidim, Avinatan

AU - Kaplan, Haim

AU - Mansour, Yishay

N1 - Publisher Copyright:
© 2020 T. Zahavy, A. Hasidim, H. Kaplan & Y. Mansour.

PY - 2020

Y1 - 2020

N2 - We consider a setting of hierarchical reinforcement learning, in which the reward is a sum of components. For each component, we are given a policy that maximizes it, and our goal is to assemble a policy from the individual policies that maximizes the sum of the components. We provide theoretical guarantees for assembling such policies in deterministic MDPs with collectible rewards. Our approach builds on formulating this problem as a traveling salesman problem with a discounted reward. We focus on local solutions, i.e., policies that only use information from the current state; thus, they are easy to implement and do not require substantial computational resources. We propose three local stochastic policies and prove that they guarantee better performance than any deterministic local policy in the worst case; experimental results suggest that they also perform better on average.

UR - http://www.scopus.com/inward/record.url?scp=85161412150&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85161412150

SN - 2640-3498

VL - 117

SP - 906

EP - 934

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

Y2 - 8 February 2020 through 11 February 2020

ER -