Learning (to disagree?) in large worlds

Itzhak Gilboa, Larry Samuelson*, David Schmeidler

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



Beginning with Robert Aumann's 1976 “Agreeing to Disagree” result, a collection of papers has established conditions under which it is impossible for rational agents to disagree, bet against each other, or speculate in markets. The subsequent literature has provided many explanations for disagreement and trade, typically exploiting differences in prior beliefs or information processing. We view such differences as arising most naturally in a “large worlds” setting, where there is no commonly accepted understanding of the underlying uncertainty. This paper develops a large-worlds model of reasoning and examines how agents learn in such a setting, with particular interest in whether accumulated experience will lead them to common beliefs (and hence to agree, and to cease trading). No learning rule invariably ensures learning, leaving ample room for persistent disagreement and trade. However, there are intuitive learning rules that lead people with different models of the underlying uncertainty to a common view of the world if the data-generating process is sufficiently structured.

Original language: English
Article number: 105166
Journal: Journal of Economic Theory
State: Published - Jan 2022


Funders (funder number):
National Science Foundation (SES-1459158)
Israel Science Foundation (1443/20, 1077/17)
Tel Aviv University
Higher Education Commission, Pakistan


Keywords:
    • Disagreement
    • Large worlds
    • Learning
    • No-trade
    • Non-Bayesian
    • Trade

