Abstract
In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution. Our variant allows for tighter convergence bounds for extreme values of the CDF. We apply our bound in the context of revenue learning, which is a well-studied problem in economics and algorithmic game theory. We derive sample-complexity bounds on the uniform convergence rate of the empirical revenues to the true revenues, assuming a bound on the kth moment of the valuations, for any (possibly fractional) k > 1. For uniform convergence in the limit, we give a complete characterization and a zero-one law: if the first moment of the valuations is finite, then uniform convergence almost surely occurs; conversely, if the first moment is infinite, then uniform convergence almost never occurs.
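The two objects the abstract refers to admit a short numerical illustration. Below is a minimal sketch (not from the paper; the distribution, grid, and all parameters are assumptions chosen for illustration) that samples Pareto-distributed valuations with a finite first moment and then measures the sup-norm distance between the empirical CDF F_n and the true CDF F (the classic Glivenko-Cantelli quantity), and between the empirical revenue curve r_n(p) = p·(1 − F_n(p)) and the true revenue curve r(p) = p·(1 − F(p)):

```python
# Minimal illustrative sketch (not from the paper): sample heavy-tailed
# valuations and compare the empirical CDF and empirical revenue curve
# against their population counterparts. All names and parameters here
# are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

def empirical_cdf(sample, ts):
    """F_n(t): fraction of sample points <= t, for each t in ts."""
    sample = np.sort(sample)
    return np.searchsorted(sample, ts, side="right") / len(sample)

# Pareto(alpha) valuations on [1, inf): F(t) = 1 - t**(-alpha).
# alpha = 2 gives a finite first moment (mean = alpha / (alpha - 1) = 2),
# the regime where the zero-one law guarantees uniform convergence
# of revenues almost surely.
alpha = 2.0
n = 100_000
vals = (1.0 - rng.random(n)) ** (-1.0 / alpha)  # inverse-CDF sampling

# Evaluate on a price grid; a fixed grid only approximates the true
# supremum over all prices, which the paper's bounds control directly.
ts = np.linspace(1.0, 50.0, 2_000)
F_true = 1.0 - ts ** (-alpha)
F_emp = empirical_cdf(vals, ts)

# Glivenko-Cantelli quantity: sup_t |F_n(t) - F(t)|.
print("sup |F_n - F| ~", np.abs(F_emp - F_true).max())

# Revenue of posting price p: r(p) = p * Pr[valuation >= p];
# for a continuous F this is p * (1 - F(p)).
r_true = ts * (1.0 - F_true)
r_emp = ts * (1.0 - F_emp)
print("sup |r_n - r| ~", np.abs(r_emp - r_true).max())
```

Note that the empirical CDF converges uniformly regardless of moments (the classic Glivenko-Cantelli statement); the finite first moment (here alpha = 2) is what the zero-one law requires for the revenue curve to converge uniformly as well. Rerunning the sketch with alpha ≤ 1 puts the valuations in the infinite-mean regime, where uniform convergence of revenues almost never occurs.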
Original language | English |
---|---|
Pages (from-to) | 1657-1666 |
Number of pages | 10 |
Journal | Advances in Neural Information Processing Systems |
Volume | 2017-December |
State | Published - 2017 |
Event | 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States; Duration: 4 Dec 2017 → 9 Dec 2017 |
Funding
Funders | Funder number |
---|---|
National Science Foundation and the Simons Foundation | 1162/15 |
Microsoft Research | |
European Metrology Programme for Innovation and Research | 740282 |
European Research Council | |
German-Israeli Foundation for Scientific Research and Development | |
United States-Israel Binational Science Foundation | 2014389 |
Israel Academy of Sciences and Humanities | 1435/14 |
Israel Science Foundation | |
Israeli Centers for Research Excellence | 4/11 |