TY - GEN
T1 - Lower bounds on individual sequence regret
AU - Gofer, Eyal
AU - Mansour, Yishay
N1 - Funding Information:
This research was supported in part by the Google Inter-university Center for Electronic Markets and Auctions, by a grant from the Israel Science Foundation, by a grant from the United States-Israel Binational Science Foundation (BSF), by a grant from the Israeli Ministry of Science (MoS), and by the Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11). This work is part of Ph.D. thesis research carried out by the first author at Tel Aviv University.
PY - 2012
Y1 - 2012
N2 - In this work, we lower bound the individual sequence anytime regret of a large family of online algorithms. This bound depends on the quadratic variation of the sequence, Q_T, and the learning rate. Nevertheless, we show that any learning rate that guarantees a regret upper bound of O(√Q_T) necessarily implies an Ω(√Q_T) anytime regret on any sequence with quadratic variation Q_T. The algorithms we consider are linear forecasters whose weight vector at time t + 1 is the gradient of a concave potential function of cumulative losses at time t. We show that these algorithms include all linear Regularized Follow the Leader algorithms. We prove our result for the case of potentials with negative definite Hessians, and potentials for the best expert setting satisfying some natural regularity conditions. In the best expert setting, we give our result in terms of the translation-invariant relative quadratic variation. We apply our lower bounds to Randomized Weighted Majority and to linear cost Online Gradient Descent. We show that bounds on anytime regret imply a lower bound on the price of "at the money" call options in an arbitrage-free market. Given a lower bound Q on the quadratic variation of a stock price, we give an Ω(√Q) lower bound on the option price, for Q < 0.5. This lower bound has the same asymptotic behavior as the Black-Scholes pricing and improves a previous Ω(Q) result given in [4].
AB - In this work, we lower bound the individual sequence anytime regret of a large family of online algorithms. This bound depends on the quadratic variation of the sequence, Q_T, and the learning rate. Nevertheless, we show that any learning rate that guarantees a regret upper bound of O(√Q_T) necessarily implies an Ω(√Q_T) anytime regret on any sequence with quadratic variation Q_T. The algorithms we consider are linear forecasters whose weight vector at time t + 1 is the gradient of a concave potential function of cumulative losses at time t. We show that these algorithms include all linear Regularized Follow the Leader algorithms. We prove our result for the case of potentials with negative definite Hessians, and potentials for the best expert setting satisfying some natural regularity conditions. In the best expert setting, we give our result in terms of the translation-invariant relative quadratic variation. We apply our lower bounds to Randomized Weighted Majority and to linear cost Online Gradient Descent. We show that bounds on anytime regret imply a lower bound on the price of "at the money" call options in an arbitrage-free market. Given a lower bound Q on the quadratic variation of a stock price, we give an Ω(√Q) lower bound on the option price, for Q < 0.5. This lower bound has the same asymptotic behavior as the Black-Scholes pricing and improves a previous Ω(Q) result given in [4].
UR - http://www.scopus.com/inward/record.url?scp=84867842340&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-34106-9_23
DO - 10.1007/978-3-642-34106-9_23
M3 - Conference contribution
AN - SCOPUS:84867842340
SN - 9783642341052
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 275
EP - 289
BT - Algorithmic Learning Theory - 23rd International Conference, ALT 2012, Proceedings
T2 - 23rd International Conference on Algorithmic Learning Theory, ALT 2012
Y2 - 29 October 2012 through 31 October 2012
ER -