Abstract
Stochastic exp-concave optimization is an important primitive in machine learning that captures several fundamental problems, including linear regression, logistic regression, and more. The exp-concavity property allows for fast convergence rates, as compared to general stochastic optimization. However, current algorithms that attain such rates scale poorly with the dimension n, running in time O(n⁴) even on very simple instances of the problem. The question we pose is whether it is possible to obtain fast rates for exp-concave functions using more computationally efficient algorithms.
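The abstract relies on the exp-concavity property without stating it; for reference, the following is the standard definition from the online-learning literature (a sketch added here, not part of the original record; it assumes the amsmath package):

```latex
% Standard definition (from the literature, not from the abstract itself):
% a function f is alpha-exp-concave on a convex set K if exp(-alpha f)
% is concave on K.
\[
  f \;\text{is}\; \alpha\text{-exp-concave on } \mathcal{K}
  \quad\Longleftrightarrow\quad
  x \mapsto e^{-\alpha f(x)} \;\text{is concave on } \mathcal{K}.
\]
% Example: the squared loss f(w) = (\langle w, a \rangle - b)^2 is
% exp-concave on any bounded domain, which is why linear regression
% is an instance of this problem class.
```

Exp-concavity sits between plain convexity and strong convexity, which is what makes fast (logarithmic-regret-type) rates possible where general stochastic convex optimization admits only slower ones.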
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | Proceedings of the 26th Annual Conference on Learning Theory |
| Editors | Shai Shalev-Shwartz, Ingo Steinwart |
| Publisher | PMLR |
| Pages | 1073-1075 |
| Number of pages | 3 |
| State | Published - 2013 |
| Externally published | Yes |
| Event | 26th Conference on Learning Theory, COLT 2013, Princeton, NJ, United States; 12 Jun 2013 – 14 Jun 2013 |
Publication series

| Field | Value |
| --- | --- |
| Name | Proceedings of Machine Learning Research |
| Publisher | PMLR |
| Volume | 30 |
| ISSN (Electronic) | 2640-3498 |
Conference

| Field | Value |
| --- | --- |
| Conference | 26th Conference on Learning Theory, COLT 2013 |
| Country/Territory | United States |
| City | Princeton, NJ |
| Period | 12 Jun 2013 – 14 Jun 2013 |