PeerNets: Exploiting peer wisdom against adversarial attacks

Jan Svoboda, Jonathan Masci, Federico Monti, Michael M. Bronstein, Leonidas Guibas

Research output: Contribution to conference › Paper › peer-review

Abstract

Deep learning systems have become ubiquitous in many aspects of our lives. Unfortunately, it has been shown that such systems are vulnerable to adversarial attacks, making them prone to potential unlawful uses. Designing deep neural networks that are robust to adversarial attacks is a fundamental step in making such systems safer and deployable in a broader variety of applications (e.g. autonomous driving), but more importantly it is a necessary step toward designing novel and more advanced architectures built on new computational paradigms rather than marginally building on the existing ones. In this paper we introduce PeerNets, a novel family of convolutional networks alternating classical Euclidean convolutions with graph convolutions to harness information from a graph of peer samples. This results in a form of non-local forward propagation in the model, where latent features are conditioned on the global structure induced by the graph, yielding a model that is up to 3× more robust to a variety of white- and black-box adversarial attacks than conventional architectures, with almost no drop in accuracy.
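The core idea described in the abstract — conditioning each latent feature on its nearest neighbors drawn from a bank of peer samples — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual layer (the published method operates on pixels of peer feature maps with learned graph attention); the function name `peer_aggregate`, the plain softmax-over-distance weighting, and the NumPy formulation are all assumptions made for illustration:

```python
import numpy as np

def peer_aggregate(x, peers, k=3):
    """Hypothetical sketch of a peer-regularization step: each feature
    vector in `x` is replaced by an attention-weighted combination of its
    k nearest neighbors drawn from the `peers` feature bank.

    x:     (n, c) features of the current batch
    peers: (m, c) features collected from peer samples
    """
    # pairwise squared Euclidean distances, shape (n, m)
    d2 = ((x[:, None, :] - peers[None, :, :]) ** 2).sum(-1)
    # indices of the k closest peer features for each row of x
    nn = np.argsort(d2, axis=1)[:, :k]               # (n, k)
    nd = np.take_along_axis(d2, nn, axis=1)          # (n, k)
    # softmax attention over the k neighbors (closer => larger weight);
    # the real layer learns these weights via graph attention instead
    w = np.exp(-nd)
    w /= w.sum(axis=1, keepdims=True)
    # attention-weighted average of the neighbor features, shape (n, c)
    return (peers[nn] * w[..., None]).sum(axis=1)
```

Because every output feature is a convex combination of features observed on peer samples, a small adversarial perturbation of one input is smoothed against the graph of peers — the intuition behind the robustness gains reported above.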

Original language: English
State: Published - 2019
Externally published: Yes
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: 6 May 2019 – 9 May 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country/Territory: United States
City: New Orleans
Period: 6/05/19 – 9/05/19

Funding

Funders (funder number, where given):
- Google Research
- TU Munich
- National Science Foundation (DMS-1546206)
- Google
- Amazon Web Services
- Royal Society
- European Research Council (724228)
