TY - GEN
T1 - Near-optimal regret bounds for stochastic shortest path
AU - Cohen, Alon
AU - Kaplan, Haim
AU - Mansour, Yishay
AU - Rosenberg, Aviv
N1 - Publisher Copyright:
Copyright © 2020 by the Authors. All rights reserved.
PY - 2020
Y1 - 2020
AB - Stochastic shortest path (SSP) is a well-known problem in planning and control, in which an agent has to reach a goal state at minimum total expected cost. In the learning formulation of the problem, the agent is unaware of the environment dynamics (i.e., the transition function) and has to play repeatedly for a given number of episodes while learning the problem's optimal solution. Unlike other well-studied models in reinforcement learning (RL), the length of an episode is not predetermined (or bounded) and is influenced by the agent's actions. Recently, Tarbouriech et al. (2020) studied this problem in the context of regret minimization and provided an algorithm whose regret bound is inversely proportional to the square root of the minimum instantaneous cost. In this work we remove this dependence on the minimum cost: we give an algorithm that guarantees a regret bound of Õ(B∗|S|√(|A|K)), where B∗ is an upper bound on the expected cost of the optimal policy, S is the set of states, A is the set of actions, and K is the number of episodes. We additionally show that any learning algorithm must have at least Ω(B∗√(|S||A|K)) regret in the worst case.
UR - http://www.scopus.com/inward/record.url?scp=85105334740&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85105334740
T3 - 37th International Conference on Machine Learning, ICML 2020
SP - 8180
EP - 8189
BT - 37th International Conference on Machine Learning, ICML 2020
A2 - Daumé III, Hal
A2 - Singh, Aarti
PB - International Machine Learning Society (IMLS)
T2 - 37th International Conference on Machine Learning, ICML 2020
Y2 - 13 July 2020 through 18 July 2020
ER -