Implicit regularization in deep learning may not be explainable by norms

Noam Razin, Nadav Cohen

Research output: Contribution to journal › Conference article › peer-review

67 Scopus citations

Abstract

Mathematically characterizing the implicit regularization induced by gradient-based optimization is a longstanding pursuit in the theory of deep learning. A widespread hope is that a characterization based on minimization of norms may apply, and a standard test-bed for studying this prospect is matrix factorization (matrix completion via linear neural networks). It is an open question whether norms can explain the implicit regularization in matrix factorization. The current paper resolves this open question in the negative, by proving that there exist natural matrix factorization problems on which the implicit regularization drives all norms (and quasi-norms) towards infinity. Our results suggest that, rather than perceiving the implicit regularization via norms, a potentially more useful interpretation is minimization of rank. We demonstrate empirically that this interpretation extends to a certain class of non-linear neural networks, and hypothesize that it may be key to explaining generalization in deep learning.
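
The result is easiest to see on the 2x2 matrix completion task the paper analyzes: entries (0,1), (1,0), (1,1) are observed with values 1, 1, 0, while entry (0,0) is unobserved, so any sequence of solutions whose effective rank tends to 1 must send the unobserved entry, and with it every norm and quasi-norm, to infinity. The sketch below is an illustrative reconstruction, not the authors' code: it runs plain gradient descent on a depth-3 linear factorization of this task and tracks the unobserved entry, the nuclear norm, and the singular-value ratio. The depth, step size, step count, and initialization scale are all assumptions of this sketch.

```python
import numpy as np

# Illustrative sketch (not the authors' code) of the paper's 2x2 completion
# task: entries (0,1), (1,0), (1,1) are observed as 1, 1, 0; entry (0,0) is
# unobserved. Lowering effective rank while fitting the observations forces
# |W[0,0]|, and with it every norm, to grow.

np.random.seed(0)
depth, dim, scale, lr = 3, 2, 0.1, 1e-2

# Near-identity init so the end-to-end product has positive determinant,
# the regime in which the paper's analysis predicts unbounded growth.
layers = [scale * np.eye(dim) + 0.01 * np.random.randn(dim, dim)
          for _ in range(depth)]

observed = [((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]

def end_to_end(mats):
    """Product mats[-1] @ ... @ mats[0] (identity for an empty list)."""
    out = np.eye(dim)
    for m in mats:
        out = m @ out
    return out

for step in range(200001):
    W = end_to_end(layers)
    # Gradient of the squared loss on observed entries w.r.t. W.
    G = np.zeros((dim, dim))
    for (i, j), v in observed:
        G[i, j] = 2.0 * (W[i, j] - v)
    # Chain rule through W = layers[-1] @ ... @ layers[0].
    grads = [end_to_end(layers[k + 1:]).T @ G @ end_to_end(layers[:k]).T
             for k in range(depth)]
    for m, g in zip(layers, grads):
        m -= lr * g
    if step % 40000 == 0:
        s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
        print(f"step {step:6d}  W[0,0]={W[0, 0]:10.3f}  "
              f"nuclear={s.sum():10.3f}  sigma2/sigma1={s[1] / s[0]:.5f}")
```

Under the paper's analysis, the expected behavior as the loss shrinks is that W[0,0] and the nuclear norm keep growing while sigma2/sigma1 decays: the trajectory lowers effective rank rather than minimizing any norm.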

Original language: English
Pages (from-to): 21174-21187
Number of pages: 14
Journal: Advances in Neural Information Processing Systems
Volume: 33
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 - 12 Dec 2020

Funding

Funders: Yandex Initiative in Machine Learning; Blavatnik Family Foundation
