Regularizing towards permutation invariance in recurrent models

Edo Cohen-Karlik, Avichai Ben David, Amir Globerson

Research output: Contribution to journal › Conference article › peer-review

6 Scopus citations

Abstract

In many machine learning problems, the output should not depend on the order of the input. Such "permutation invariant" functions have recently been studied extensively. Here we argue that temporal architectures such as RNNs are highly relevant for such problems, despite the inherent dependence of RNNs on order. We show that RNNs can be regularized towards permutation invariance, and that this can result in compact models compared to non-recurrent architectures. We implement this idea via a novel form of stochastic regularization. Existing solutions mostly restrict the learning problem to hypothesis classes that are permutation invariant by design [Zaheer et al., 2017, Lee et al., 2019, Murphy et al., 2018]. Our approach of enforcing permutation invariance via regularization gives rise to models that are semi-permutation invariant (e.g., invariant to some permutations and not to others). We show that our method outperforms other permutation invariant approaches on synthetic and real-world datasets.
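The core idea of a stochastic permutation-invariance regularizer can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the RNN, the penalty function, and all names (`rnn_output`, `perm_penalty`) are assumptions. The penalty samples random permutations of the input sequence and measures how much the network's output changes; adding it to the task loss pushes the model towards permutation invariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal vanilla RNN: h_t = tanh(W h_{t-1} + U x_t); the output is the final state.
W = rng.normal(scale=0.3, size=(8, 8))
U = rng.normal(scale=0.3, size=(8, 4))

def rnn_output(x_seq):
    """Run the RNN over a sequence of shape (T, 4); return the final hidden state."""
    h = np.zeros(8)
    for x_t in x_seq:
        h = np.tanh(W @ h + U @ x_t)
    return h

def perm_penalty(x_seq, n_samples=16):
    """Stochastic permutation-invariance penalty (sketch): the average squared
    distance between the output on the original ordering and the outputs on
    randomly sampled re-orderings of the same sequence."""
    base = rnn_output(x_seq)
    total = 0.0
    for _ in range(n_samples):
        perm = rng.permutation(len(x_seq))
        total += np.sum((base - rnn_output(x_seq[perm])) ** 2)
    return total / n_samples

x = rng.normal(size=(6, 4))
print(perm_penalty(x))  # during training this term would be added to the task loss
```

The penalty is zero exactly when the sampled permutations leave the output unchanged, so minimizing it alongside the task loss yields the "semi permutation invariant" behavior described in the abstract: invariance is encouraged by the objective rather than enforced by the architecture.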

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 to 12 Dec 2020

Funding

Funders: European Research Council; Horizon 2020
Funder number: ERC HOLI 819080
