TY - JOUR
T1 - A General Framework for Bandit Problems Beyond Cumulative Objectives
AU - Cassel, Asaf
AU - Mannor, Shie
AU - Zeevi, Assaf
N1 - Publisher Copyright:
© 2023 INFORMS.
PY - 2023
Y1 - 2023
AB - The stochastic multiarmed bandit (MAB) problem is a common model for sequential decision problems. In the standard setup, a decision maker has to choose at every instant between several competing arms, each of which provides a scalar random variable referred to as a “reward.” Nearly all research on this topic considers the total cumulative reward as the criterion of interest. This work focuses on other natural objectives that cannot be cast as a sum over rewards but are, rather, more involved functions of the reward stream. Unlike the case of cumulative criteria, in the problems we study here the oracle policy, which knows the problem parameters a priori and is used to “center” the regret, is not trivial. We provide a systematic approach to such problems and derive general conditions under which the oracle policy is sufficiently tractable to facilitate the design of optimism-based (upper confidence bound) learning policies. These conditions elucidate an interesting interplay between the arm reward distributions and the performance metric. Our main findings are illustrated for several commonly used objectives, including conditional value-at-risk, mean-variance trade-offs, and the Sharpe ratio.
KW - multiarmed bandit
KW - optimism principle
KW - planning
KW - reinforcement learning
KW - risk
KW - upper confidence bound
UR - http://www.scopus.com/inward/record.url?scp=85177483593&partnerID=8YFLogxK
DO - 10.1287/moor.2022.1335
M3 - Article
AN - SCOPUS:85177483593
SN - 0364-765X
VL - 48
SP - 2196
EP - 2232
JO - Math. Oper. Res.
JF - Mathematics of Operations Research
IS - 4
ER -