TY - JOUR

T1 - Sequential DOE via dynamic programming

AU - Ben-Gal, Irad

AU - Caramanis, Michael

PY - 2002/12

Y1 - 2002/12

N2 - The paper considers a sequential Design of Experiments (DOE) scheme. Our objective is to maximize both information and economic measures over a feasible set of experiments. Optimal DOE strategies are developed by introducing information criteria based on measures adopted from information theory. The evolution of acquired information along various stages of experimentation is analyzed for linear models with a Gaussian noise term. We show that for particular cases, although the amount of information is unbounded, the desired rate of acquiring information decreases with the number of experiments. This observation implies that at a certain point in time it is no longer efficient to continue experimenting. Accordingly, we investigate methods of stochastic dynamic programming under imperfect state information as appropriate means to obtain optimal experimentation policies. We propose cost-to-go functions that model the trade-off between the cost of additional experiments and the benefit of incremental information. We formulate a general stochastic dynamic programming framework for design of experiments and illustrate it by analytic and numerical implementation examples.

UR - http://www.scopus.com/inward/record.url?scp=0036887761&partnerID=8YFLogxK

U2 - 10.1023/A:1019670414725

DO - 10.1023/A:1019670414725

M3 - Article

AN - SCOPUS:0036887761

SN - 0740-817X

VL - 34

SP - 1087

EP - 1100

JO - IIE Transactions (Institute of Industrial Engineers)

JF - IIE Transactions (Institute of Industrial Engineers)

IS - 12

ER -