Zero-shot relation extraction via reading comprehension

Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.
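A rough sketch of the reduction described in the abstract is given below: a relation slot is paired with a natural-language question template, the template is instantiated with an entity, and an extractive reading-comprehension model is asked to find the answer span in the sentence. It assumes the Hugging Face transformers library and uses its default question-answering pipeline as a stand-in for the paper's reader; the relation names, templates, and the extract_slot helper are illustrative, and a simple confidence threshold only approximates the paper's ability to abstain when no answer is present.

from transformers import pipeline

# Off-the-shelf extractive QA model; a stand-in for the paper's
# reading-comprehension reader, not the model actually used.
qa = pipeline("question-answering")

# Illustrative relation slots mapped to question templates;
# "XXX" marks the entity placeholder (hypothetical names).
TEMPLATES = {
    "educated_at": "Where did XXX study?",
    "occupation": "What is XXX's job?",
}

def extract_slot(relation, entity, sentence, threshold=0.5):
    """Fill a relation slot by asking a reading-comprehension question.
    Returns the answer span, or None (a crude proxy for the paper's
    'no answer' decision) when the model's confidence is low."""
    question = TEMPLATES[relation].replace("XXX", entity)
    result = qa(question=question, context=sentence)
    return result["answer"] if result["score"] >= threshold else None

# Zero-shot extraction in this framing: an unseen relation type only
# needs a new question template at test time, with no retraining.
TEMPLATES["place_of_birth"] = "Where was XXX born?"

print(extract_slot(
    "educated_at",
    "Alan Turing",
    "Alan Turing studied mathematics at King's College, Cambridge.",
))

The last lines illustrate the zero-shot setting from the abstract: specifying a new relation type at test time amounts to writing a question for it, which is what lets the approach generalize without labeled examples of that relation.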

Original language: English
Title of host publication: CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings
Publisher: Association for Computational Linguistics (ACL)
Pages: 333-342
Number of pages: 10
ISBN (Electronic): 9781945626548
DOIs
State: Published - 2017
Externally published: Yes
Event: 21st Conference on Computational Natural Language Learning, CoNLL 2017 - Vancouver, Canada
Duration: 3 Aug 2017 – 4 Aug 2017

Publication series

Name: CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings

Conference

Conference: 21st Conference on Computational Natural Language Learning, CoNLL 2017
Country/Territory: Canada
City: Vancouver
Period: 3/08/17 – 4/08/17
