Using partially observed Markov processes to select optimal termination time of TV shows

Moshe Givon, Abraham Grosfeld-Nir*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a method for the optimal control of a running television show. The problem is formulated as a partially observed Markov decision process (POMDP). A show can be in a "good" state, meaning it should be continued, or in a "bad" state, meaning it should be changed. The ratings of a show are modeled as a stochastic process that depends on the show's state. An optimal rule for the continue/change decision, which maximizes the expected present value of profits from selling advertising time, is expressed in terms of the prior probability that the show is in the good state. The optimal rule depends on the size of the investment required to change a show, the difference in revenues between a "good" and a "bad" show, and the number of time periods remaining until the end of the planning horizon. The application of the method is illustrated with simulated ratings as well as real data.
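The continue/change rule described in the abstract can be sketched as a small finite-horizon POMDP solved by backward induction over the belief (the probability that the show is in the good state). This is only an illustrative reconstruction, not the paper's actual model: all parameters below (per-period revenues, switching cost, discount factor, transition and rating probabilities, horizon) are hypothetical, and assumptions such as "the bad state is absorbing" and "a replacement show starts in the good state" are modeling choices made here for the sketch.

```python
import numpy as np

# Hypothetical parameters for a two-state show (good/bad) with high/low ratings.
R_GOOD, R_BAD = 10.0, 2.0   # per-period ad revenue in the good/bad state
K = 20.0                    # investment required to change the show
BETA = 0.95                 # per-period discount factor
A = 0.90                    # P(good stays good); bad is absorbing (assumption)
Q_GOOD, Q_BAD = 0.8, 0.3    # P(high rating | good), P(high rating | bad)
T = 12                      # periods left in the planning horizon

GRID = np.linspace(0.0, 1.0, 201)  # discretized belief P(show is good)

def step(p):
    """One-step belief evolution: Markov transition, then the Bayesian
    posterior and probability for each rating outcome (high/low)."""
    prior = A * p                       # bad state is absorbing here
    p_high = prior * Q_GOOD + (1 - prior) * Q_BAD
    post_high = prior * Q_GOOD / p_high
    post_low = prior * (1 - Q_GOOD) / (1 - p_high)
    return p_high, post_high, post_low

def solve():
    """Backward induction over the belief grid; returns V[t] and policy[t],
    where t is the number of periods remaining (0 = continue, 1 = change)."""
    V = np.zeros((T + 1, GRID.size))
    policy = np.zeros((T + 1, GRID.size), dtype=int)
    for t in range(1, T + 1):
        # value of continuing the current show from each belief p
        p_high, post_high, post_low = step(GRID)
        cont = (GRID * R_GOOD + (1 - GRID) * R_BAD
                + BETA * (p_high * np.interp(post_high, GRID, V[t - 1])
                          + (1 - p_high) * np.interp(post_low, GRID, V[t - 1])))
        # value of changing: pay K, new show assumed to start in the good state
        ph1, hi1, lo1 = step(np.array([1.0]))
        change = (-K + R_GOOD
                  + BETA * (ph1 * np.interp(hi1, GRID, V[t - 1])
                            + (1 - ph1) * np.interp(lo1, GRID, V[t - 1])))[0]
        V[t] = np.maximum(cont, change)
        policy[t] = (change > cont).astype(int)
    return V, policy

V, policy = solve()
```

Under these parameters the policy takes the threshold form the abstract describes: the show is continued when the belief in the good state is high and changed when it falls below a cutoff, and the cutoff shifts with the switching cost K, the revenue gap, and the remaining horizon.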

Original language: English
Pages (from-to): 477-485
Number of pages: 9
Journal: Omega
Volume: 36
Issue number: 3
DOIs
State: Published - Jun 2008

Keywords

  • Dynamic programming
  • Markov chain
  • POMDP
  • Planning and control
  • Simulation
  • TV shows

