Abstract
We consider a dynamic learning problem in which a decision maker sequentially selects a control and observes a response variable that depends on the chosen control and an unknown sensitivity parameter. After every observation, the decision maker updates his or her estimate of the unknown parameter and uses a certainty-equivalence decision rule to determine subsequent controls based on this estimate. We show that under this certainty-equivalence learning policy the parameter estimates converge with positive probability to an uninformative fixed point that can differ from the true value of the unknown parameter, a phenomenon referred to as incomplete learning. In stark contrast, we show that the certainty-equivalence policy can avoid incomplete learning if the parameter value of interest "drifts away" from the uninformative fixed point at a critical rate. Finally, we prove that one can adaptively limit the learning memory to improve the accuracy of the certainty-equivalence policy in both static (estimation) and slowly varying (tracking) environments, without relying on forced exploration.
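To make the feedback loop concrete, the following is a minimal Python sketch, not the paper's exact model or policy. It assumes a linear response y_t = alpha + beta_t * u_t + noise with a target level y_star, ridge-regularized least-squares estimation, and an optional fixed-length memory window standing in for the paper's adaptive memory limit; all parameter values and names are illustrative.

```python
import numpy as np

def simulate(T=2000, window=None, drift=0.0, seed=0):
    """Certainty-equivalence control of y_t = alpha + beta_t * u_t + noise.

    The decision maker targets a response level y_star, re-estimates
    (alpha, beta) by ridge-regularized least squares after every
    observation, and applies the certainty-equivalence control
    u_t = (y_star - alpha_hat) / beta_hat.  `window`, if given, keeps
    only the most recent observations (a crude stand-in for adaptively
    limited memory); `drift` lets the true sensitivity beta_t move.
    """
    rng = np.random.default_rng(seed)
    alpha, beta0, y_star, lam = 1.0, -2.0, 0.0, 1e-3
    a_hat, b_hat = 0.5, -1.0              # initial (wrong) estimates
    us, ys, track_err = [], [], []
    for t in range(T):
        beta_t = beta0 - drift * t        # slowly varying environment
        u = (y_star - a_hat) / b_hat      # certainty-equivalence rule
        y = alpha + beta_t * u + rng.normal(scale=0.5)
        us.append(u); ys.append(y)
        if window is not None:            # limited learning memory
            us, ys = us[-window:], ys[-window:]
        X = np.column_stack([np.ones(len(us)), us])
        theta = np.linalg.solve(X.T @ X + lam * np.eye(2),
                                X.T @ np.asarray(ys))
        a_hat, b_hat = theta
        b_hat = min(b_hat, -0.1)          # keep estimate away from 0
        track_err.append(abs(b_hat - beta_t))
    return track_err

# Drifting environment: a finite memory window tracks beta_t more
# closely than unbounded memory, which averages over stale data.
full = simulate(drift=1e-3)
windowed = simulate(drift=1e-3, window=50)
print(f"mean tracking error, full memory: {np.mean(full):.3f}")
print(f"mean tracking error, window=50:   {np.mean(windowed):.3f}")
```

The fixed window here illustrates the bias-variance trade-off behind limited memory: discarding old observations raises estimation variance but removes data generated under stale parameter values, which is what helps in the tracking regime.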
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1136-1167 |
| Number of pages | 32 |
| Journal | Operations Research |
| Volume | 66 |
| Issue number | 4 |
| DOIs | |
| State | Published - 1 Jul 2018 |
| Externally published | Yes |
Keywords
- Certainty equivalence
- Dynamic control
- Incomplete learning
- Sequential estimation