Contracting with a Learning Agent

Guru Guruganesh*, Yoav Kolumbus, Jon Schneider*, Inbal Talgam-Cohen, Emmanouil Vasileios Vlatakis-Gkaragkounis, Joshua R. Wang*, S. Matthew Weinberg

*Corresponding author for this work

Research output: Contribution to journal > Conference article > peer-review

1 Scopus citations

Abstract

Real-life contractual relations typically involve repeated interactions between the principal and agent, where, despite theoretical appeal, players rarely use complex dynamic strategies and instead manage uncertainty through learning algorithms. In this paper, we initiate the study of repeated contracts with learning agents, focusing on those achieving no-regret outcomes. For the canonical setting where the agent's actions result in success or failure, we present a simple, optimal solution for the principal: Initially provide a linear contract with scalar α > 0, then switch to a zero-scalar contract. This shift causes the agent to “free-fall” through their action space, yielding non-zero rewards for the principal at zero cost. Interestingly, despite the apparent exploitation, there are instances where our dynamic contract can make both players better off compared to the best static contract. We then broaden the scope of our results to general linearly-scaled contracts, and, finally, to the best of our knowledge, we provide the first analysis of optimization against learning agents with uncertainty about the time horizon.
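The free-fall dynamic described in the abstract can be illustrated with a toy simulation. This is a minimal sketch under stated assumptions, not the paper's analysis: the actions, costs, and success probabilities below are hypothetical, and the agent is modeled as a generic no-regret learner via the standard Hedge (multiplicative-weights) update. The principal offers a linear contract with scalar α, then switches to a zero-scalar contract; the learner's weights on costly actions decay only gradually, so the principal keeps collecting successes at zero cost for a while.

```python
import math
import random

# Hypothetical agent actions: (success probability p_i, private cost c_i).
# The principal earns reward 1 on success and, under a linear contract with
# scalar a, pays the agent a on success.
ACTIONS = [(0.0, 0.0), (0.5, 0.1), (0.9, 0.4)]  # no effort, low, high

def simulate(T=2000, switch=1000, alpha=0.8, eta=0.1, seed=0):
    rng = random.Random(seed)
    w = [1.0] * len(ACTIONS)   # Hedge weights of the learning agent
    rev = [0.0, 0.0]           # principal's revenue in phase 1 / phase 2
    pay = [0.0, 0.0]           # principal's payments in phase 1 / phase 2
    for t in range(T):
        phase = 0 if t < switch else 1
        a = alpha if phase == 0 else 0.0  # zero-scalar contract after the switch
        total = sum(w)
        i = rng.choices(range(len(ACTIONS)), [wi / total for wi in w])[0]
        p, _ = ACTIONS[i]
        if rng.random() < p:              # the sampled action succeeds
            rev[phase] += 1.0
            pay[phase] += a
        # full-information Hedge update on expected utility a*p_j - c_j
        for j, (pj, cj) in enumerate(ACTIONS):
            w[j] *= math.exp(eta * (a * pj - cj))
    return rev, pay

rev, pay = simulate()
```

In this instance the agent concentrates on high effort during phase 1, and after the switch it "falls" back through its action space slowly enough that the principal collects substantial revenue in phase 2 while paying nothing.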

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 37
State: Published - 2024
Event: 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver, Canada
Duration: 9 Dec 2024 to 15 Dec 2024

Funding

Funders | Funder number
Google |
European Research Council |
Horizon 2020 | 101077862
Israel Science Foundation | 3331/24
NSF-BSF | 2021680
National Science Foundation | CCF-1942497
