Kernel-Based Smoothness Analysis of Residual Networks

Tom Tirer, Joan Bruna, Raja Giryes

Research output: Contribution to journal › Conference article › Peer-reviewed


Abstract

A major factor in the success of deep neural networks is the use of sophisticated architectures rather than the classical multilayer perceptron (MLP). Residual networks (ResNets) stand out among these powerful modern architectures. Previous works focused on the optimization advantages of deep ResNets over deep MLPs. In this paper, we show another distinction between the two models, namely, a tendency of ResNets to promote smoother interpolations than MLPs. We analyze this phenomenon via the neural tangent kernel (NTK) approach. First, we compute the NTK for the considered ResNet model and prove its stability during gradient descent training. Then, we show by various evaluation methodologies that for ReLU activations the NTK of the ResNet, and its kernel regression results, are smoother than those of the MLP. The greater smoothness observed in our analysis may explain the better generalization ability of ResNets and the common practice of moderately attenuating the residual blocks.
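
The sketch below is a minimal, hypothetical illustration of the kind of comparison the abstract describes: kernel regression with the closed-form ReLU NTK of a deep MLP versus a simplified residual-style NTK recursion with an attenuation factor `alpha`. The residual recursion and the names `mlp_ntk`, `resnet_ntk`, and `alpha` are assumptions made for illustration only; they are not the exact kernels or experiments derived in the paper.

```python
# Illustrative comparison of NTK kernel regression for an MLP-style and a
# residual-style ReLU kernel. The residual recursion is a simplified
# stand-in, not the paper's exact derivation.
import numpy as np


def _relu_duals(cov, var1, var2):
    """Closed-form Gaussian expectations for ReLU (arc-cosine kernels),
    using the usual c_sigma = 2 normalization."""
    norm = np.sqrt(var1 * var2)
    cos_t = np.clip(cov / np.maximum(norm, 1e-12), -1.0, 1.0)
    theta = np.arccos(cos_t)
    k = norm * (np.sin(theta) + (np.pi - theta) * cos_t) / np.pi  # 2*E[relu(u)relu(v)]
    k_dot = (np.pi - theta) / np.pi                               # 2*E[relu'(u)relu'(v)]
    return k, k_dot


def mlp_ntk(X1, X2, depth=5):
    """Standard NTK recursion for a depth-`depth` ReLU MLP."""
    sigma = X1 @ X2.T
    var1 = np.sum(X1 * X1, axis=1, keepdims=True)
    var2 = np.sum(X2 * X2, axis=1, keepdims=True).T
    theta = sigma.copy()
    for _ in range(depth):
        k, k_dot = _relu_duals(sigma, var1, var2)
        theta = theta * k_dot + k  # Theta^h = Theta^{h-1} * Sigma_dot^h + Sigma^h
        sigma = k                  # c_sigma = 2 keeps the diagonal fixed, so var1, var2 stay put
    return theta


def resnet_ntk(X1, X2, depth=5, alpha=0.3):
    """Hypothetical residual-style NTK: each block adds an attenuated ReLU
    branch on top of an identity skip connection (a simplification)."""
    sigma = X1 @ X2.T
    var1 = np.sum(X1 * X1, axis=1, keepdims=True)
    var2 = np.sum(X2 * X2, axis=1, keepdims=True).T
    theta = sigma.copy()
    a2 = alpha ** 2
    for _ in range(depth):
        k, k_dot = _relu_duals(sigma, var1, var2)
        theta = theta * (1.0 + a2 * k_dot) + a2 * k  # identity path + attenuated branch
        sigma = sigma + a2 * k
        var1 = var1 * (1.0 + a2)  # the ReLU dual preserves the diagonal
        var2 = var2 * (1.0 + a2)
    return theta


def kernel_regression(kernel_fn, X_tr, y_tr, X_te, ridge=1e-8):
    """NTK regression: f(x) = k(x, X_tr) (K + ridge*I)^{-1} y_tr."""
    K = kernel_fn(X_tr, X_tr)
    coef = np.linalg.solve(K + ridge * np.eye(len(X_tr)), y_tr)
    return kernel_fn(X_te, X_tr) @ coef


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t_tr = np.sort(rng.uniform(-np.pi, np.pi, 12))
    t_te = np.linspace(-np.pi, np.pi, 400)
    to_circle = lambda t: np.stack([np.cos(t), np.sin(t)], axis=1)  # unit-norm inputs
    y_tr = np.sin(2 * t_tr)
    f_mlp = kernel_regression(mlp_ntk, to_circle(t_tr), y_tr, to_circle(t_te))
    f_res = kernel_regression(resnet_ntk, to_circle(t_tr), y_tr, to_circle(t_te))
    # Crude smoothness proxy: mean squared second difference of the interpolant.
    curvature = lambda f: np.mean(np.diff(f, n=2) ** 2)
    print("MLP-NTK curvature proxy:   ", curvature(f_mlp))
    print("ResNet-NTK curvature proxy:", curvature(f_res))
```

Under these assumptions, comparing the two curvature proxies (or simply plotting the two interpolants over the test grid) gives a quick, informal way to visualize the smoothness difference that the paper studies rigorously.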

Original language: English
Pages (from-to): 921-954
Number of pages: 34
Journal: Proceedings of Machine Learning Research
Volume: 145
State: Published - 2021
Event: 2nd Mathematical and Scientific Machine Learning Conference, MSML 2021 - Virtual, Online
Duration: 16 Aug 2021 - 19 Aug 2021

Funding

Funders and funder numbers:
• National Science Foundation: RI-1816753, CIF 1845360
• Alfred P. Sloan Foundation
• Samsung
• European Commission: 757497

Keywords

• Neural tangent kernel
• kernel methods
• multilayer perceptron
• residual networks
