Structured prediction models via the Matrix-Tree Theorem

Terry Koo*, Amir Globerson, Xavier Carreras, Michael Collins

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

107 Scopus citations

Abstract

This paper provides an algorithmic framework for learning statistical models involving directed spanning trees, or equivalently non-projective dependency structures. We show how partition functions and marginals for directed spanning trees can be computed by an adaptation of Kirchhoff's Matrix-Tree Theorem. To demonstrate an application of the method, we perform experiments which use the algorithm in training both log-linear and max-margin dependency parsers. The new training methods give improvements in accuracy over perceptron-trained models.
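As a rough illustration of the idea described in the abstract, the following sketch computes the partition function and edge marginals over directed spanning trees rooted at node 0 using the directed (Tutte) variant of the Matrix-Tree Theorem. This is not the authors' implementation; the function names are illustrative, and the weight matrix is assumed to hold non-negative multiplicative edge weights (e.g. exponentiated scores), with `weights[h, m]` the weight of the edge from head `h` to modifier `m`.

```python
import numpy as np

def _root_deleted_laplacian(weights):
    """Build the Laplacian L with L[m, m] = sum_h w(h, m) and
    L[h, m] = -w(h, m) off the diagonal, then delete the root's
    row and column (root fixed at node 0)."""
    n = weights.shape[0]
    W = weights.copy()
    np.fill_diagonal(W, 0.0)                       # no self-loops
    L = -W
    L[np.arange(n), np.arange(n)] = W.sum(axis=0)  # column sums on diagonal
    return L[1:, 1:]

def spanning_tree_partition(weights):
    """Sum of products of edge weights over all directed spanning
    trees rooted at node 0: the determinant of the root-deleted
    Laplacian minor (Matrix-Tree Theorem)."""
    return np.linalg.det(_root_deleted_laplacian(weights))

def edge_marginals(weights):
    """Marginal probability of each edge h -> m under the distribution
    proportional to the product of edge weights, read off from the
    inverse of the root-deleted Laplacian minor."""
    n = weights.shape[0]
    W = weights.copy()
    np.fill_diagonal(W, 0.0)
    Linv = np.linalg.inv(_root_deleted_laplacian(W))
    mu = np.zeros((n, n))
    for m in range(1, n):
        # root edge: only the diagonal entry of the inverse is involved
        mu[0, m] = W[0, m] * Linv[m - 1, m - 1]
        for h in range(1, n):
            if h != m:
                mu[h, m] = W[h, m] * (Linv[m - 1, m - 1] - Linv[m - 1, h - 1])
    return mu
```

With three nodes and all edge weights equal to 1 there are exactly three spanning trees rooted at node 0, so the determinant evaluates to 3 and, for instance, the edge 0 → 1 (present in two of the three trees) gets marginal 2/3. Both quantities are computed in O(n^3) time, which is what makes gradient-based training of non-projective dependency parsers tractable.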

Original language: English
Pages: 141-150
Number of pages: 10
State: Published - 2007
Externally published: Yes
Event: 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL 2007 - Prague, Czech Republic
Duration: 28 Jun 2007 - 28 Jun 2007

Conference

Conference: 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL 2007
Country/Territory: Czech Republic
City: Prague
Period: 28/06/07 - 28/06/07

