Efficient and practical stochastic subgradient descent for nuclear norm regularization

Haim Avron*, Satyen Kale, Shiva Prasad Kasiviswanathan, Vikas Sindhwani

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

41 Scopus citations

Abstract

We describe novel subgradient methods for a broad class of matrix optimization problems involving nuclear norm regularization. Unlike existing approaches, our method executes very cheap iterations by combining low-rank stochastic subgradients with efficient incremental SVD updates, made possible by highly optimized and parallelizable dense linear algebra operations on small matrices. Our practical algorithms always maintain a low-rank factorization of iterates that can be conveniently held in memory and efficiently multiplied to generate predictions in matrix completion settings. Empirical comparisons confirm that our approach is highly competitive with several recently proposed state-of-the-art solvers for such problems.
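The abstract describes the core mechanism: each iteration combines a low-rank (here, rank-1) stochastic subgradient with an incremental SVD update of a maintained thin factorization X = U diag(S) V^T. As a rough illustration only, not the authors' algorithm or released code, the Python sketch below shows what one such step could look like for matrix completion with a sampled squared loss plus lam * ||X||_*. The names `svd_rank1_update` and `sgd_step`, the step size `eta`, and the truncation tolerance are all illustrative assumptions.

```python
# Hedged sketch: one stochastic subgradient step for
#   min_X  sum_{(i,j) in Omega} (X_ij - M_ij)^2 + lam * ||X||_*,
# keeping the iterate as a thin SVD and using a Brand-style rank-1
# incremental SVD update. Illustrative, not the paper's reference code.
import numpy as np

def svd_rank1_update(U, S, V, a, b, tol=1e-10):
    """Thin SVD of U @ diag(S) @ V.T + np.outer(a, b)."""
    m = U.T @ a
    p = a - U @ m
    p_norm = np.linalg.norm(p)
    P = p / p_norm if p_norm > tol else np.zeros_like(p)
    n = V.T @ b
    q = b - V @ n
    q_norm = np.linalg.norm(q)
    Q = q / q_norm if q_norm > tol else np.zeros_like(q)
    k = S.size
    # Small (k+1) x (k+1) core matrix: dense but cheap to decompose.
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(S)
    K += np.outer(np.append(m, p_norm), np.append(n, q_norm))
    Uk, Sk, Vkt = np.linalg.svd(K)
    U_new = np.column_stack([U, P]) @ Uk
    V_new = np.column_stack([V, Q]) @ Vkt.T
    keep = Sk > tol  # drop numerically zero directions to keep the rank low
    return U_new[:, keep], Sk[keep], V_new[:, keep]

def sgd_step(U, S, V, i, j, M_ij, eta, lam):
    """Rank-1 stochastic loss subgradient, then nuclear norm shrinkage."""
    # The stochastic subgradient of the squared loss on the sampled entry
    # (i, j) is the rank-1 matrix r * e_i @ e_j.T.
    x_ij = (U[i, :] * S) @ V[j, :]
    r = 2.0 * (x_ij - M_ij)
    e_i = np.zeros(U.shape[0]); e_i[i] = 1.0
    e_j = np.zeros(V.shape[0]); e_j[j] = 1.0
    U, S, V = svd_rank1_update(U, S, V, -eta * r * e_i, e_j)
    # A subgradient of lam * ||X||_* at X = U diag(S) V.T is lam * U @ V.T,
    # so in this basis the regularization step just shrinks singular values
    # (clipping at zero, as in the standard shrinkage variant).
    S = np.maximum(S - eta * lam, 0.0)
    keep = S > 0
    return U[:, keep], S[keep], V[:, keep]
```

Because the only dense SVD is taken of a small (k+1) x (k+1) core matrix, a step of this form costs O((m+n)k^2) dense arithmetic for an m x n iterate of rank k, which is consistent with the abstract's claim of very cheap iterations built from dense linear algebra on small matrices.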

Original language: English
Title of host publication: Proceedings of the 29th International Conference on Machine Learning, ICML 2012
Pages: 1231-1238
Number of pages: 8
State: Published - 2012
Externally published: Yes
Event: 29th International Conference on Machine Learning, ICML 2012 - Edinburgh, United Kingdom
Duration: 26 Jun 2012 - 1 Jul 2012

Publication series

Name: Proceedings of the 29th International Conference on Machine Learning, ICML 2012
Volume: 2

Conference

Conference: 29th International Conference on Machine Learning, ICML 2012
Country/Territory: United Kingdom
City: Edinburgh
Period: 26/06/12 - 1/07/12
