We prove the following generalization of the entropy power inequality: h(Ax) ≥ h(Ax̃), where h(·) denotes (joint) differential entropy, x = x1…xn is a random vector with independent components, x̃ = x̃1…x̃n is a Gaussian vector with independent components such that h(x̃i) = h(xi), i = 1…n, and A is any matrix. This generalization of the entropy power inequality is applied to show that a non-Gaussian vector with independent components becomes "closer" to Gaussianity after a linear transformation, where the distance to Gaussianity is measured by the information divergence. Another application is a positive lower bound on the mutual information between nonoverlapping spectral components of a non-Gaussian white process. Finally, we describe a dual generalization of the Fisher information inequality.
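As a closed-form sanity check, the n = 2, A = [1 1] case can be verified directly, assuming the stated inequality takes the form h(Ax) ≥ h(Ax̃): take x1, x2 independent and uniform on [0,1] (each with differential entropy 0 nats), match them with Gaussians x̃1, x̃2 of the same entropy, and compare the entropies of the two sums. This is a sketch with illustrative variable names, not part of the paper.

```python
import math

# h(U[0,1]) = 0 nats; a Gaussian with the same entropy has variance
# 1/(2*pi*e), since h = 0.5*ln(2*pi*e*var) = 0 solves to that value.
var_gauss = 1.0 / (2 * math.pi * math.e)

# h(A x~) with A = [1 1]: the sum of the two matched Gaussians is
# Gaussian with variance 2*var_gauss, so h = 0.5*ln(2) nat.
h_gauss_sum = 0.5 * math.log(2 * math.pi * math.e * 2 * var_gauss)

# h(A x): the sum of two independent U[0,1] variables is triangular
# on [0,2]; its differential entropy is -2 * int_0^1 t*ln(t) dt = 1/2 nat.
h_uniform_sum = 0.5

print(h_uniform_sum, h_gauss_sum)  # 0.5 vs ~0.3466: h(Ax) >= h(Ax~) holds
```

The gap 0.5 − 0.5·ln 2 ≈ 0.153 nat is the entropy gained over the worst (Gaussian-matched) case, consistent with the classical two-variable entropy power inequality of which this is an instance.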
- Entropy power inequality
- Fisher information inequality