Latent Trees for Compositional Generalization

Jonathan Herzig, Jonathan Berant, Ben Bogin

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Despite the success of neural networks in many natural language processing tasks, recent work has shown that they often fail at compositional generalization, i.e., the ability to generalize to new structures built from components observed during training. In this chapter, we posit that this behavior, in standard architectures such as LSTMs and Transformers, stems from the fact that fragments on the output side are not explicitly tied to fragments on the input side. To address this, we introduce models that explicitly construct latent trees over the input, which are used to compositionally compute representations necessary for predicting the output. We show that the compositional generalization abilities of our models exceed those of pre-trained Transformer models on several datasets for both semantic parsing and grounded question answering.
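To illustrate the core idea, the following is a minimal sketch (not the authors' actual model) of how representations can be computed compositionally over a latent binary tree: each input fragment gets a vector, and larger fragments are built bottom-up from their children. The `compose` function and the toy embeddings are hypothetical stand-ins; a trained model would parameterize the composition and induce the tree itself.

```python
# Illustrative sketch: bottom-up composition of span representations
# over a binary tree. All names and values here are hypothetical.

def compose(left, right):
    # Hypothetical composition function combining two child span
    # representations; a real model would learn this mapping.
    return [0.5 * (l + r) for l, r in zip(left, right)]

def span_repr(tree, embed):
    # `tree` is either a token string (leaf) or a (left, right) pair.
    if isinstance(tree, str):
        return embed[tree]
    left, right = tree
    return compose(span_repr(left, embed), span_repr(right, embed))

# Toy embeddings for the tokens of a question.
embed = {
    "largest": [1.0, 0.0],
    "state": [0.0, 1.0],
    "bordering": [1.0, 1.0],
    "Texas": [0.0, 0.0],
}

# One possible latent tree: (largest (state (bordering Texas)))
tree = ("largest", ("state", ("bordering", "Texas")))
print(span_repr(tree, embed))  # → [0.625, 0.375]
```

Because each internal node's representation depends only on its children, a fragment such as "bordering Texas" receives the same representation wherever it appears, which is the property that supports generalizing to new combinations of known fragments.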

Original language: English
Title of host publication: Frontiers in Artificial Intelligence and Applications
Editors: Pascal Hitzler, Aaron Eberhart, Md Kamruzzaman Sarker
Publisher: IOS Press BV
Pages: 631-664
Number of pages: 34
DOIs
State: Published - 21 Jul 2023

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 369
ISSN (Print): 0922-6389
ISSN (Electronic): 1879-8314
