TY - GEN

T1 - Nonparametric bandits with covariates

AU - Rigollet, Philippe

AU - Zeevi, Assaf

PY - 2010

Y1 - 2010

N2 - We consider a bandit problem which involves sequential sampling from two populations (arms). Each arm produces a noisy reward realization which depends on an observable random covariate. The goal is to maximize cumulative expected reward. We derive general lower bounds on the performance of any admissible policy, and develop an algorithm whose performance achieves the order of said lower bound up to logarithmic terms. This is done by decomposing the global problem into suitably "localized" bandit problems. Proofs blend ideas from nonparametric statistics and traditional methods used in the bandit literature.

UR - http://www.scopus.com/inward/record.url?scp=80053440857&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:80053440857

SN - 9780982252925

T3 - COLT 2010 - The 23rd Conference on Learning Theory

SP - 54

EP - 66

BT - COLT 2010 - The 23rd Conference on Learning Theory

T2 - 23rd Conference on Learning Theory, COLT 2010

Y2 - 27 June 2010 through 29 June 2010

ER -