A Spectral Perspective of DNN Robustness to Label Noise

Oshrat Bar, Amnon Drory, Raja Giryes

Research output: Contribution to journal › Conference article › peer-review


Abstract

Deep networks usually require a massive amount of labeled data for their training. Yet, such data may include some mistakes in the labels. Interestingly, networks have been shown to be robust to such errors. This work uses spectral analysis of their learned mapping to provide an explanation for their robustness. In particular, we relate the smoothness regularization that usually exists in conventional training to the attenuation of high frequencies, which mainly characterize noise. By using a connection between the smoothness and the spectral norm of the network weights, we suggest that one may further improve robustness via spectral normalization. Empirical experiments validate our claims and show the advantage of this normalization for classification with label noise.
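To make the abstract's suggestion concrete, the sketch below applies spectral normalization to the weight layers of a small classifier trained with a standard cross-entropy loss on possibly noisy labels. It uses PyTorch's torch.nn.utils.spectral_norm; the architecture, layer sizes, optimizer settings, and dummy data are placeholder assumptions for illustration and are not taken from the paper's experimental setup.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Hypothetical small classifier: spectral normalization constrains the
# spectral norm (largest singular value) of each weight matrix, which
# bounds the smoothness of the learned mapping and, per the abstract's
# argument, attenuates high-frequency components associated with noise.
class SpectralNormMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(in_dim, hidden)),
            nn.ReLU(),
            spectral_norm(nn.Linear(hidden, hidden)),
            nn.ReLU(),
            spectral_norm(nn.Linear(hidden, num_classes)),
        )

    def forward(self, x):
        return self.net(x)

# Ordinary training step; the labels y may contain errors, and the only
# change from conventional training is the spectral_norm wrapping above.
model = SpectralNormMLP()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(64, 784)          # dummy inputs
y = torch.randint(0, 10, (64,))   # possibly noisy labels

loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```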

Original language: English
Pages (from-to): 3732-3752
Number of pages: 21
Journal: Proceedings of Machine Learning Research
Volume: 151
State: Published - 2022
Event: 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022 - Virtual, Online, Spain
Duration: 28 Mar 2022 – 30 Mar 2022

Funding

Funder: ERC-StG
Funder number: 757497
