TY - GEN
T1 - Bandits with Dynamic Arm-acquisition Costs
AU - Kalvit, Anand
AU - Zeevi, Assaf
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - We consider a bandit problem where, at any time, the decision maker can add new arms to her consideration set. A new arm is queried at a cost from an arm-reservoir containing finitely many arm-types, each characterized by a distinct mean reward. The query cost is reflected in a diminishing probability of the returned arm being optimal, unbeknownst to the decision maker; this feature encapsulates defining characteristics of a broad class of operations-inspired online learning problems, e.g., those arising in markets with churn, or those involving allocations subject to costly resource acquisition. The decision maker's goal is to maximize her cumulative expected payoffs over a sequence of n pulls, while remaining oblivious to both the statistical properties and the types of the queried arms. We study two natural modes of endogeneity in the reservoir distribution and characterize a necessary condition for the achievability of sub-linear regret in the problem. We also discuss a UCB-inspired adaptive algorithm that is long-run-average optimal whenever said condition is satisfied, thereby establishing its tightness.
AB - We consider a bandit problem where, at any time, the decision maker can add new arms to her consideration set. A new arm is queried at a cost from an arm-reservoir containing finitely many arm-types, each characterized by a distinct mean reward. The query cost is reflected in a diminishing probability of the returned arm being optimal, unbeknownst to the decision maker; this feature encapsulates defining characteristics of a broad class of operations-inspired online learning problems, e.g., those arising in markets with churn, or those involving allocations subject to costly resource acquisition. The decision maker's goal is to maximize her cumulative expected payoffs over a sequence of n pulls, while remaining oblivious to both the statistical properties and the types of the queried arms. We study two natural modes of endogeneity in the reservoir distribution and characterize a necessary condition for the achievability of sub-linear regret in the problem. We also discuss a UCB-inspired adaptive algorithm that is long-run-average optimal whenever said condition is satisfied, thereby establishing its tightness.
KW - Many-armed bandits
KW - arm-reservoir
KW - endogeneity
KW - non-stationarity
KW - regret minimization
KW - reservoir distribution
UR - http://www.scopus.com/inward/record.url?scp=85142637776&partnerID=8YFLogxK
U2 - 10.1109/Allerton49937.2022.9929355
DO - 10.1109/Allerton49937.2022.9929355
M3 - Conference contribution
AN - SCOPUS:85142637776
T3 - 2022 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022
BT - 2022 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022
Y2 - 27 September 2022 through 30 September 2022
ER -