Fast rates for exp-concave empirical risk minimization

Tomer Koren, Kfir Y. Levy

Research output: Contribution to journal › Conference article › peer-review


Abstract

We consider Empirical Risk Minimization (ERM) in the context of stochastic optimization with exp-concave and smooth losses, a general optimization framework that captures several important learning problems, including linear and logistic regression, learning SVMs with the squared hinge loss, portfolio selection, and more. In this setting, we establish the first evidence that ERM is able to attain fast generalization rates, and show that the expected loss of the ERM solution in d dimensions converges to the optimal expected loss at a rate of d/n. This rate matches existing lower bounds up to constants and improves by a log n factor upon the state-of-the-art rate, which is only known to be attained by an online-to-batch conversion of computationally expensive online algorithms.
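
A minimal sketch of the setup, in notation chosen here for illustration (the paper's own notation and constants may differ): a loss $f(\cdot\,; z)$ is $\alpha$-exp-concave on a convex domain $\mathcal{W} \subseteq \mathbb{R}^d$ if $w \mapsto \exp(-\alpha f(w; z))$ is concave. Given i.i.d. samples $z_1, \dots, z_n$, the ERM solution and the population risk are

\[
\hat{w} \in \operatorname*{arg\,min}_{w \in \mathcal{W}} \, \frac{1}{n} \sum_{i=1}^{n} f(w; z_i),
\qquad
F(w) := \mathbb{E}_{z}\!\left[ f(w; z) \right],
\]

and the guarantee discussed in the abstract takes the form

\[
\mathbb{E}\!\left[ F(\hat{w}) \right] - \min_{w \in \mathcal{W}} F(w) \;=\; O\!\left( \frac{d}{n} \right),
\]

with the constant depending on the exp-concavity and smoothness parameters of the loss.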

Original language: English
Pages (from-to): 1477-1485
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2015-January
State: Published - 2015
Externally published: Yes
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: 7 Dec 2015 - 12 Dec 2015
