Relative entropy in sequential decision problems

Ehud Lehrer, Rann Smorodinsky

Research output: Contribution to journal › Article › peer-review


Consider an agent who faces a sequential decision problem. At each stage the agent takes an action and observes a stochastic outcome (e.g., daily prices, weather conditions, opponents' actions in a repeated game, etc.). The agent's stage-utility depends on his action, on the observed outcome, and on previous outcomes. We assume the agent is Bayesian and is endowed with a subjective belief over the distribution of outcomes. The agent's initial belief is typically inaccurate; therefore, his subjectively optimal strategy is initially suboptimal. As time passes, information about the true dynamics accumulates and, depending on the compatibility of the belief with the truth, the agent may eventually learn to optimize. We introduce the notion of relative entropy, a natural adaptation of the entropy of a stochastic process to the subjective set-up. We present conditions, expressed in terms of relative entropy, that determine whether the agent will eventually learn to optimize. It is shown that low relative entropy yields asymptotically optimal behavior. In addition, we present a notion of pointwise merging and link it with relative entropy.
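The learning mechanism the abstract describes can be illustrated with a minimal sketch (not the paper's construction): a Bayesian agent predicts an i.i.d. binary outcome, and the Kullback-Leibler divergence between the true one-step distribution and the agent's predictive belief shrinks as observations accumulate. The distributions, the Laplace-style counting prior, and the variable names here are illustrative assumptions, not taken from the paper.

```python
import math
import random

def kl_divergence(p, q):
    """D(p || q) for finite distributions, natural log; terms with p_i = 0
    contribute zero by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

truth = [0.7, 0.3]   # true i.i.d. distribution over two outcomes (assumed)
counts = [1, 1]      # agent's pseudo-counts: a uniform Laplace-style prior

random.seed(0)
divergences = []
for t in range(2000):
    # Agent's one-step predictive belief given the data seen so far.
    belief = [c / sum(counts) for c in counts]
    divergences.append(kl_divergence(truth, belief))
    # Observe an outcome drawn from the true distribution and update.
    outcome = 0 if random.random() < truth[0] else 1
    counts[outcome] += 1

# divergences[0] is the divergence against the uniform prior belief;
# divergences[-1] is small, reflecting that the predictive belief has
# nearly merged with the truth.
```

In the paper's terms, this per-period divergence of one-step predictions vanishing is what allows the agent's subjectively optimal actions to become asymptotically optimal; the paper's relative-entropy conditions make this precise for general (non-i.i.d.) processes.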

Original language: English
Pages (from-to): 425-439
Number of pages: 15
Journal: Journal of Mathematical Economics
Issue number: 4
State: Published - May 2000


Keywords

  • Optimization
  • Relative entropy
  • Sequential decision problems

