Generalization Error in Deep Learning

Daniel Jakubovitz, Raja Giryes, Miguel R.D. Rodrigues

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, it is still generally unclear what the source of their generalization ability is. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this chapter, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.

Original language: English
Title of host publication: Applied and Numerical Harmonic Analysis
Publisher: Springer International Publishing
Pages: 153-193
Number of pages: 41
State: Published - 2019

Publication series

Name: Applied and Numerical Harmonic Analysis
ISSN (Print): 2296-5009
ISSN (Electronic): 2296-5017
