Abstract
We consider the problem of computing optimal policies of finite-state finite-action Markov decision processes (MDPs). A reduction to a continuum of constrained MDPs (CMDPs) is presented such that the optimal policies for these CMDPs constitute a path in a graph defined over the deterministic policies. This path contains, in particular, an optimal policy of the original MDP. We present an algorithm based on this new approach that finds this path, and thus an optimal policy. In the general case, this path might be exponentially long in the number of states and actions. We prove that the length of this path is polynomial if the MDP satisfies a coupling property. Thus we obtain a strongly polynomial algorithm for MDPs that satisfy the coupling property. We prove that discrete-time versions of controlled M/M/1 queues induce MDPs that satisfy the coupling property. The only previously known polynomial algorithm for controlled M/M/1 queues in the expected average cost model is based on linear programming (and is not known to be strongly polynomial). Our algorithm works both for the discounted and expected average cost models, and its running time does not depend on the discount factor.
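For context, the sketch below shows a standard policy-iteration solver for a finite-state, finite-action discounted MDP; it is not the paper's CMDP path-following algorithm, and the array shapes, cost convention, and NumPy usage are illustrative assumptions only.

```python
import numpy as np

def policy_iteration(P, c, gamma):
    """Standard policy iteration for a finite discounted-cost MDP (illustrative sketch).

    P: array of shape (S, A, S), P[s, a, t] = probability of moving from s to t under action a.
    c: array of shape (S, A), immediate costs.
    gamma: discount factor in (0, 1).
    Returns a deterministic optimal policy (shape (S,)) and its value vector.
    """
    S, A, _ = P.shape
    policy = np.zeros(S, dtype=int)            # start from an arbitrary deterministic policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi for the current policy
        P_pi = P[np.arange(S), policy]         # (S, S) transition matrix under the policy
        c_pi = c[np.arange(S), policy]         # (S,) cost vector under the policy
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
        # Policy improvement: greedy step on the Q-values
        q = c + gamma * np.einsum('sat,t->sa', P, v)
        new_policy = q.argmin(axis=1)          # minimize expected discounted cost
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

if __name__ == "__main__":
    # Small random MDP, purely for demonstration
    rng = np.random.default_rng(0)
    S, A = 4, 2
    P = rng.random((S, A, S))
    P /= P.sum(axis=2, keepdims=True)          # normalize rows into probability distributions
    c = rng.random((S, A))
    policy, v = policy_iteration(P, c, gamma=0.9)
    print("optimal deterministic policy:", policy)
```

Policy iteration is used here only to make the setting concrete (deterministic policies, discounted cost); the paper's contribution is a different procedure that traces a path of CMDP-optimal deterministic policies.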
| Original language | English |
|---|---|
| Pages (from-to) | 992-1007 |
| Number of pages | 16 |
| Journal | Mathematics of Operations Research |
| Volume | 34 |
| Issue number | 4 |
| DOIs | |
| State | Published - Nov 2009 |
Keywords
- Constrained Markov decision process
- Controlled queues
- Linear programming
- M/M/1 queue
- Markov decision process
- Optimization