TY - GEN
T1 - Chasing ghosts: Competing with stateful policies
T2 - 55th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2014
AU - Feige, Uriel
AU - Koren, Tomer
AU - Tennenholtz, Moshe
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/12/7
Y1 - 2014/12/7
N2 - We consider sequential decision making in a setting where regret is measured with respect to a set of stateful reference policies, and feedback is limited to observing the rewards of the actions performed (the so-called 'bandit' setting). If either the reference policies are stateless rather than stateful, or the feedback includes the rewards of all actions (the so-called 'expert' setting), previous work shows that the optimal regret grows like Θ(√T) in terms of the number of decision rounds T. The difficulty in our setting is that the decision maker unavoidably loses track of the internal states of the reference policies, and thus cannot reliably attribute rewards observed in a certain round to any of the reference policies. In fact, in this setting it is impossible for the algorithm to estimate which policy gives the highest (or even approximately highest) total reward. Nevertheless, we design an algorithm that achieves expected regret that is sublinear in T, of the form O(T/log^{1/4} T). Our algorithm is based on a certain local repetition lemma that may be of independent interest. We also show that no algorithm can guarantee expected regret better than O(T/log^{3/2} T).
AB - We consider sequential decision making in a setting where regret is measured with respect to a set of stateful reference policies, and feedback is limited to observing the rewards of the actions performed (the so-called 'bandit' setting). If either the reference policies are stateless rather than stateful, or the feedback includes the rewards of all actions (the so-called 'expert' setting), previous work shows that the optimal regret grows like Θ(√T) in terms of the number of decision rounds T. The difficulty in our setting is that the decision maker unavoidably loses track of the internal states of the reference policies, and thus cannot reliably attribute rewards observed in a certain round to any of the reference policies. In fact, in this setting it is impossible for the algorithm to estimate which policy gives the highest (or even approximately highest) total reward. Nevertheless, we design an algorithm that achieves expected regret that is sublinear in T, of the form O(T/log^{1/4} T). Our algorithm is based on a certain local repetition lemma that may be of independent interest. We also show that no algorithm can guarantee expected regret better than O(T/log^{3/2} T).
UR - http://www.scopus.com/inward/record.url?scp=84920053744&partnerID=8YFLogxK
U2 - 10.1109/FOCS.2014.19
DO - 10.1109/FOCS.2014.19
M3 - Conference contribution
AN - SCOPUS:84920053744
T3 - Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
SP - 100
EP - 109
BT - Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
PB - IEEE Computer Society
Y2 - 18 October 2014 through 21 October 2014
ER -