A note on the representation and learning of quantificational determiners

Roni Katzir, Nur Lan, Noa Peled

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


There is a tight, bidirectional connection between the formalism that defines how linguistic knowledge is stored and how this knowledge can be learned. In one direction, the formalism can be mapped onto an evaluation metric that allows the child to compare competing hypotheses given the input data. In the other direction, an evaluation metric can help the linguist to compare competing hypotheses about the formalism in which linguistic knowledge is written. In this preliminary note we explore this bidirectional connection in the domain of quantificational determiners (e.g., ‘every’ and ‘some’). We show how fixing an explicit format for representing the semantics of such elements – specifically, a variant of semantic automata – yields an evaluation metric, based on the principle of Minimum Description Length (MDL), that can serve as the basis for an unsupervised learner for such denotations. We then show how the MDL metric may provide a handle on the comparison of semantic automata with a competing representational format.
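The two ingredients the abstract names – semantic automata and an MDL evaluation metric – can be illustrated with a small sketch. In the semantic-automata tradition (van Benthem), a quantificational determiner is a finite automaton over {0, 1}: each element of the restrictor set contributes '1' if it is also in the scope set and '0' otherwise, and the quantifier holds iff the automaton accepts the resulting string. The encodings below, including the toy `grammar_dl` cost, are illustrative assumptions, not the paper's exact variant:

```python
import math

class DFA:
    """Deterministic finite automaton over the alphabet {'0', '1'}."""
    def __init__(self, states, start, accepting, transitions):
        self.states = states            # set of state labels
        self.start = start              # initial state
        self.accepting = accepting      # set of accepting states
        self.transitions = transitions  # dict: (state, symbol) -> state

    def accepts(self, string):
        state = self.start
        for symbol in string:
            state = self.transitions[(state, symbol)]
        return state in self.accepting

# 'every': accept iff the string contains no '0'
# (every restrictor element is also in the scope).
EVERY = DFA(states={'q0', 'q1'}, start='q0', accepting={'q0'},
            transitions={('q0', '1'): 'q0', ('q0', '0'): 'q1',
                         ('q1', '1'): 'q1', ('q1', '0'): 'q1'})

# 'some': accept iff the string contains at least one '1'.
SOME = DFA(states={'q0', 'q1'}, start='q0', accepting={'q1'},
           transitions={('q0', '0'): 'q0', ('q0', '1'): 'q1',
                        ('q1', '0'): 'q1', ('q1', '1'): 'q1'})

def quantifier_holds(dfa, restrictor, scope):
    """Map each restrictor element to '1' if it is in the scope, else '0',
    and ask whether the automaton accepts the resulting string."""
    string = ''.join('1' if x in scope else '0' for x in restrictor)
    return dfa.accepts(string)

def grammar_dl(dfa):
    """Toy description length of the automaton itself: roughly log2|Q| bits
    per transition target. A full MDL metric would also encode the data
    given the grammar; this is only the grammar half, for illustration."""
    return len(dfa.transitions) * math.log2(max(len(dfa.states), 2))

# A redundant 3-state automaton denoting the same quantifier as EVERY:
# it bounces between two accepting states on '1' and dies on any '0'.
EVERY3 = DFA(states={'q0', 'q1', 'q2'}, start='q0', accepting={'q0', 'q2'},
             transitions={('q0', '1'): 'q2', ('q0', '0'): 'q1',
                          ('q2', '1'): 'q0', ('q2', '0'): 'q1',
                          ('q1', '1'): 'q1', ('q1', '0'): 'q1'})

dogs = {'rex', 'fido', 'lassie'}
barkers = {'rex', 'fido'}
print(quantifier_holds(EVERY, dogs, barkers))   # False: 'lassie' is not a barker
print(quantifier_holds(SOME, dogs, barkers))    # True
print(grammar_dl(EVERY) < grammar_dl(EVERY3))   # True: the compact automaton is cheaper
```

Note that the order in which restrictor elements are read is irrelevant for these quantifiers (permutation invariance), which is why mapping an unordered set to a string is harmless here. The comparison of `EVERY` with the extensionally equivalent but larger `EVERY3` shows the MDL intuition in miniature: among hypotheses that fit the data equally well, the metric favors the shorter encoding.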
Original language: English
Title of host publication: Proceedings of Sinn und Bedeutung 24
Subtitle of host publication: [September 4-7, 2019; Osnabrück University, Germany]
Editors: Michael Franke, Nikola Kompa, Mingya Liu, Jutta L. Mueller, Juliane Schwab
Number of pages: 19
State: Published - 2020


Keywords:
  • Quantificational determiners
  • Learning
  • Minimum description length
  • Semantic automata

