On the modularity of hypernetworks

Tomer Galanti, Lior Wolf

Research output: Contribution to journal › Conference article › peer-review

29 Scopus citations


In the context of learning to map an input I to a function h_I : X → ℝ, two alternative methods are compared: (i) an embedding-based method, which learns a fixed function in which I is encoded as a conditioning signal e(I) and the learned function takes the form h_I(x) = q(x, e(I)), and (ii) hypernetworks, in which the weights θ_I of the function h_I(x) = g(x; θ_I) are produced by a hypernetwork f as θ_I = f(I). In this paper, we define the property of modularity as the ability to effectively learn a different function for each input instance I. For this purpose, we adopt an expressivity perspective on this property and extend the theory of [6] to provide a lower bound on the complexity (number of trainable parameters) of neural networks as function approximators, by eliminating the requirement that the approximation method be robust. Our results are then used to compare the complexities of q and g, showing that under certain conditions, and when the functions e and f are allowed to be as large as we wish, g can be smaller than q by orders of magnitude. This sheds light on the modularity of hypernetworks in comparison with the embedding-based method. In addition, we show that for a structured target function, the overall number of trainable parameters in a hypernetwork is smaller by orders of magnitude than the number of trainable parameters of both a standard neural network and an embedding method.
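The two parameterizations contrasted in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's construction: the network shapes, the single-layer choices for e, q, f, and g, and all variable names are hypothetical, chosen only to make the structural difference visible. In the embedding-based method, one fixed network q consumes x together with the conditioning signal e(I); in the hypernetwork, f emits the weights θ_I of a small inner network g.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# --- Embedding-based method: h_I(x) = q(x, e(I)) ---
# e encodes I into a conditioning vector; q is one fixed, shared network
# that sees the concatenation [x, e(I)].  (Shapes are illustrative.)
W_e = rng.normal(size=(4, 3))       # hypothetical weights of e
W_q = rng.normal(size=(1, 2 + 4))   # q takes x (dim 2) plus e(I) (dim 4)

def h_embedding(x, I):
    eI = relu(W_e @ I)              # conditioning signal e(I)
    return W_q @ np.concatenate([x, eI])

# --- Hypernetwork: h_I(x) = g(x; theta_I), with theta_I = f(I) ---
# f outputs the trainable parameters of g directly, so g itself can be tiny.
W_f = rng.normal(size=(1 * 2, 3))   # hypothetical weights of f

def h_hyper(x, I):
    theta_I = (W_f @ I).reshape(1, 2)  # theta_I = f(I), reshaped as g's weights
    return theta_I @ x                 # g(x; theta_I): here a linear map in x

x = rng.normal(size=2)
I = rng.normal(size=3)
print(h_embedding(x, I).shape, h_hyper(x, I).shape)
```

Note how the comparison in the abstract maps onto this sketch: the paper's complexity bounds concern the sizes of q and g (here W_q versus the 1×2 weight matrix theta_I), while e and f (here W_e and W_f) are allowed to grow as large as needed.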

Original language: English
Journal: Advances in Neural Information Processing Systems
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 to 12 Dec 2020


Funders (funder number):
Horizon 2020 Framework Programme (ERC CoG 725974)
European Research Council

