Abstract
There is a tight, bidirectional connection between the formalism that defines how linguistic knowledge is stored and how this knowledge can be learned. In one direction, the formalism can be mapped onto an evaluation metric that allows the child to compare competing hypotheses given the input data. In the other direction, an evaluation metric can help the linguist to compare competing hypotheses about the formalism in which linguistic knowledge is written. In this preliminary note we explore this bidirectional connection in the domain of quantificational determiners (e.g., ‘every’ and ‘some’). We show how fixing an explicit format for representing the semantics of such elements – specifically, a variant of semantic automata – yields an evaluation metric, based on the principle of Minimum Description Length (MDL), that can serve as the basis for an unsupervised learner for such denotations. We then show how the MDL metric may provide a handle on the comparison of semantic automata with a competing representational format.
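To make the two ingredients concrete, here is a minimal sketch (in Python) of van Benthem-style semantic automata for ‘every’ and ‘some’ together with a toy MDL score. The specific encoding costs (bits per transition, per state, and per scene) and all helper names are illustrative assumptions, not the paper's actual metric or learner.

```python
# A minimal, self-contained sketch of the two ingredients named in the
# abstract, under illustrative assumptions: the bit costs below and the
# helper names are hypothetical, not the paper's actual metric.
from math import log2

# A semantic automaton reads a string over {0, 1}, one symbol per element
# of the restrictor set A: '1' if that element is also in B, '0' otherwise.
EVERY = {"states": {"q0", "qR"}, "start": "q0", "accept": {"q0"},
         "delta": {("q0", "1"): "q0", ("q0", "0"): "qR",
                   ("qR", "1"): "qR", ("qR", "0"): "qR"}}

SOME = {"states": {"q0", "qA"}, "start": "q0", "accept": {"qA"},
        "delta": {("q0", "0"): "q0", ("q0", "1"): "qA",
                  ("qA", "0"): "qA", ("qA", "1"): "qA"}}

def accepts(aut, scene):
    state = aut["start"]
    for symbol in scene:
        state = aut["delta"][(state, symbol)]
    return state in aut["accept"]

def count_accepted(aut, n):
    # Number of length-n strings over {0, 1} the automaton accepts,
    # computed by dynamic programming over states.
    counts = {aut["start"]: 1}
    for _ in range(n):
        nxt = {}
        for state, c in counts.items():
            for sym in "01":
                t = aut["delta"][(state, sym)]
                nxt[t] = nxt.get(t, 0) + c
        counts = nxt
    return sum(c for s, c in counts.items() if s in aut["accept"])

def grammar_cost(aut):
    # Hypothetical encoding: three bits per transition plus one bit per
    # state to mark whether it is accepting.
    return 3 * len(aut["delta"]) + len(aut["states"])

def data_cost(aut, scenes):
    # A scene consistent with the hypothesis is pointed to among the
    # accepted strings of its length (log2 of that count, in bits);
    # an inconsistent scene is spelled out literally, plus a flag bit.
    total = 0.0
    for s in scenes:
        if accepts(aut, s):
            total += log2(max(count_accepted(aut, len(s)), 1))
        else:
            total += len(s) + 1
    return total

def mdl_score(aut, scenes):
    # MDL: length of the grammar plus length of the data given the grammar.
    return grammar_cost(aut) + data_cost(aut, scenes)

# A toy corpus of scenes in which the target determiner was used truthfully:
scenes = ["111", "1111", "11", "111111"]
print("every:", mdl_score(EVERY, scenes))   # smaller score -> preferred
print("some: ", mdl_score(SOME, scenes))
```

On this toy corpus, where every scene is consistent with ‘every’, the stricter automaton compresses the data better, so its MDL score is lower even though the two automata are the same size; this trade-off between grammar size and data fit is the kind of comparison the metric is meant to support.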
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of Sinn und Bedeutung 24 |
| Subtitle of host publication | [September 4-7, 2019; Osnabrück University, Germany] |
| Editors | Michael Franke, Nikola Kompa, Mingya Liu, Jutta L. Mueller, Juliane Schwab |
| Pages | 392-410 |
| Number of pages | 19 |
| State | Published - 2020 |
Keywords
- Quantificational determiners
- Learning
- Minimum description length
- Semantic automata