Open problem: Fast stochastic exp-concave optimization

Tomer Koren*

*Corresponding author for this work

Research output: Conference contribution (chapter in book/report/conference proceeding), peer-reviewed


Stochastic exp-concave optimization is an important primitive in machine learning that captures several fundamental problems, including linear regression, logistic regression, and more. The exp-concavity property allows for fast convergence rates compared to general stochastic optimization. However, current algorithms that attain such rates scale poorly with the dimension n and run in time O(n^4), even on very simple instances of the problem. The question we pose is whether it is possible to obtain fast rates for exp-concave functions using more computationally efficient algorithms.
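To make the exp-concavity property concrete, here is a minimal numeric sketch (illustrative, not part of the paper): a function f is alpha-exp-concave if w ↦ exp(-alpha·f(w)) is concave. For the logistic loss f(w) = log(1 + exp(-y⟨w, x⟩)), a standard calculation shows it is exp(-B)-exp-concave on any region where |⟨w, x⟩| ≤ B. The specific data, direction, and finite-difference check below are illustrative choices.

```python
import numpy as np

# Exp-concavity: f is alpha-exp-concave if w -> exp(-alpha * f(w)) is concave.
# Sketch: verify this numerically for the logistic loss along a random line
# in parameter space, using a valid alpha for that segment.

rng = np.random.default_rng(0)
n = 5
x = rng.normal(size=n)
x /= np.linalg.norm(x)                  # data point with ||x|| = 1
y = 1.0
w0, d = rng.normal(size=n), rng.normal(size=n)

def margin(t):
    # <w0 + t*d, x> scaled by the label y
    return y * np.dot(w0 + t * d, x)

ts = np.linspace(-2.0, 2.0, 81)
B = max(abs(margin(t)) for t in ts)     # bound on |<w, x>| on this segment
alpha = np.exp(-(B + 0.1))              # valid exp-concavity constant here
                                        # (0.1 slack keeps curvature strictly negative)

def g(t):
    # exp(-alpha * logistic_loss) restricted to the line w0 + t*d
    return np.exp(-alpha * np.log1p(np.exp(-margin(t))))

# Central finite-difference estimate of g''(t); concavity means g'' <= 0.
h = 1e-4
curv = [(g(t + h) - 2.0 * g(t) + g(t - h)) / h**2 for t in ts]
print(max(curv) <= 1e-6)                # exp(-alpha * f) is concave along the line
```

The exp-concave structure is exactly what algorithms such as Online Newton Step exploit to obtain fast (logarithmic-regret) rates; the paper's question concerns the computational cost, not the statistical rate, of exploiting it.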

Original language: English
Title of host publication: Proceedings of the 26th Annual Conference on Learning Theory
Editors: Shai Shalev-Shwartz, Ingo Steinwart
Number of pages: 3
State: Published - 2013
Externally published: Yes
Event: 26th Conference on Learning Theory, COLT 2013 - Princeton, NJ, United States
Duration: 12 Jun 2013 - 14 Jun 2013

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: 26th Conference on Learning Theory, COLT 2013
Country/Territory: United States
City: Princeton, NJ

