NOISE INJECTION NODE REGULARIZATION FOR ROBUST LEARNING

Noam Levi, Tomer Volansky, Itay M. Bloch, Marat Freytsis

Research output: Contribution to conference › Paper › peer-review


Abstract

We introduce Noise Injection Node Regularization (NINR), a method of injecting structured noise into Deep Neural Networks (DNNs) during the training stage, resulting in an emergent regularizing effect. We present theoretical and empirical evidence for substantial improvement in robustness against various test data perturbations for feed-forward DNNs trained under NINR. The novelty of our approach lies in the interplay between adaptive noise injection and initialization conditions, chosen such that noise is the dominant driver of the dynamics at the start of training. Since the method simply requires the addition of external nodes, without altering the existing network structure or optimization algorithms, it can be easily incorporated into many standard architectures. We find improved stability against a number of data perturbations, including domain shifts, with the most dramatic improvement obtained for unstructured noise, where our technique in some cases outperforms existing methods such as Dropout or L2 regularization. Moreover, desirable generalization properties on clean data are generally maintained.
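The abstract describes adding external noise nodes to a network without changing its structure or optimizer. The following is a minimal NumPy sketch of that idea, not the authors' implementation: k extra input nodes receive fresh Gaussian noise on each training forward pass and are zeroed at inference; the network width, noise scale, and initialization here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2, noise_std=0.0):
    """Forward pass through a one-hidden-layer ReLU network whose input
    layer has been widened by k noise-injection nodes.

    The original architecture is untouched: the k extra nodes simply
    receive fresh Gaussian noise each call (set noise_std=0.0 at test
    time so the noise nodes are silent).
    """
    k = W1.shape[0] - x.shape[1]                      # number of noise nodes
    eta = noise_std * rng.standard_normal((x.shape[0], k))
    x_aug = np.concatenate([x, eta], axis=1)          # append noise nodes
    h = np.maximum(x_aug @ W1 + b1, 0.0)              # ReLU hidden layer
    return h @ W2 + b2

# Illustrative dimensions (assumptions, not from the paper).
d, k, hidden = 4, 2, 8
W1 = rng.standard_normal((d + k, hidden)) / np.sqrt(d + k)
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) / np.sqrt(hidden)
b2 = np.zeros(1)

x = rng.standard_normal((3, d))
y_train = forward(x, W1, b1, W2, b2, noise_std=1.0)   # noisy training pass
y_test = forward(x, W1, b1, W2, b2, noise_std=0.0)    # clean inference pass
```

In the paper's setting the noise amplitude and the initialization of the weights attached to the noise nodes are chosen so that noise dominates the early training dynamics; the fixed `noise_std` above is only a stand-in for that adaptive schedule.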

Original language: English
State: Published - 2023
Event: 11th International Conference on Learning Representations, ICLR 2023 - Kigali, Rwanda
Duration: 1 May 2023 → 5 May 2023

Conference

Conference: 11th International Conference on Learning Representations, ICLR 2023
Country/Territory: Rwanda
City: Kigali
Period: 1/05/23 → 5/05/23

Funding

Funder: Funder number
Galileo Galilei Institute
EU Horizon 2020 Programme
Tel Aviv University
European Research Council
Milner Foundation
ERC-CoG-2015: 682676 LDMThExp
United States-Israel Binational Science Foundation: 2020220
National Science Foundation: PHY-1607611
DOE: DE-SC0010008
Israel Science Foundation: 1862/21
NSF: PHY1316222
