TY - JOUR
T1 - High-Order Analysis of the Efficiency Gap for Maximum Likelihood Estimation in Nonlinear Gaussian Models
AU - Yeredor, Arie
AU - Weiss, Amir
AU - Weiss, Anthony J.
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2018/9/15
Y1 - 2018/9/15
N2 - In Gaussian measurement models, the measurements are given by a known function of the unknown parameter vector, contaminated by additive zero-mean Gaussian noise. When the function is linear, the resulting maximum likelihood estimate (MLE) is well known to be efficient [unbiased, with a mean square estimation error (MSE) matrix attaining the Cramér-Rao lower bound (CRLB)]. However, when the function is nonlinear, the MLE is only asymptotically efficient. The classical derivation of its asymptotic efficiency uses a first-order perturbation analysis, relying on a "small-errors" assumption, which becomes inaccurate under subasymptotic conditions, rendering the MLE generally biased and inefficient. Although a more accurate (higher-order) performance analysis for such cases is of considerable interest, the associated derivations are rather involved, requiring cumbersome notations and indexing. Building on the recent assimilation of tensor computations into the signal processing literature, we exploit the tensor formulation of higher-order derivatives to derive a tractable formulation of a higher (up to third-) order perturbation analysis, predicting the bias and MSE matrix of the MLE of parameter vectors in general nonlinear models under subasymptotic conditions. We provide explicit expressions depending on the first three derivatives of the nonlinear measurement function, and demonstrate the resulting ability to predict the "efficiency gap" (relative excess MSE beyond the CRLB) in simulation experiments. We also provide MATLAB code for easy computation of our resulting expressions.
AB - In Gaussian measurement models, the measurements are given by a known function of the unknown parameter vector, contaminated by additive zero-mean Gaussian noise. When the function is linear, the resulting maximum likelihood estimate (MLE) is well known to be efficient [unbiased, with a mean square estimation error (MSE) matrix attaining the Cramér-Rao lower bound (CRLB)]. However, when the function is nonlinear, the MLE is only asymptotically efficient. The classical derivation of its asymptotic efficiency uses a first-order perturbation analysis, relying on a "small-errors" assumption, which becomes inaccurate under subasymptotic conditions, rendering the MLE generally biased and inefficient. Although a more accurate (higher-order) performance analysis for such cases is of considerable interest, the associated derivations are rather involved, requiring cumbersome notations and indexing. Building on the recent assimilation of tensor computations into the signal processing literature, we exploit the tensor formulation of higher-order derivatives to derive a tractable formulation of a higher (up to third-) order perturbation analysis, predicting the bias and MSE matrix of the MLE of parameter vectors in general nonlinear models under subasymptotic conditions. We provide explicit expressions depending on the first three derivatives of the nonlinear measurement function, and demonstrate the resulting ability to predict the "efficiency gap" (relative excess MSE beyond the CRLB) in simulation experiments. We also provide MATLAB code for easy computation of our resulting expressions.
KW - Maximum likelihood estimation (MLE)
KW - efficiency
KW - high-order performance analysis
KW - tensor calculus
UR - http://www.scopus.com/inward/record.url?scp=85050766127&partnerID=8YFLogxK
U2 - 10.1109/TSP.2018.2860570
DO - 10.1109/TSP.2018.2860570
M3 - Article
AN - SCOPUS:85050766127
SN - 1053-587X
VL - 66
SP - 4782
EP - 4795
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
IS - 18
M1 - 8423486
ER -