Maximum entropy approach to probability density estimation

Gad Miller*, David Horn

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


Abstract

We propose a method for estimating probability density functions (pdf) and conditional density functions (cdf) by training on data drawn from such distributions. The algorithm employs new stochastic variables that amount to a coding of the input, using a principle of entropy maximization. It is shown to be closely related to the maximum likelihood approach. The encoding step of the algorithm provides an estimate of the probability distribution. The decoding step serves as a generative model, producing an ensemble of data with the desired distribution. The algorithm is readily implemented by neural networks, using stochastic gradient ascent to achieve entropy maximization.
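The abstract ties entropy maximization to maximum likelihood and describes an encoding step (the density estimate) paired with a decoding step (a generative model), trained by stochastic gradient ascent. The sketch below is not the authors' algorithm; it is a minimal illustration of that maximum-likelihood view, fitting a small Gaussian-mixture density estimator by stochastic gradient ascent on the log-likelihood and then sampling from the fitted model. All function names, parameters, and hyperparameters are assumptions made for the example.

# A minimal sketch (not the paper's exact method): stochastic gradient ascent
# on the log-likelihood of a Gaussian-mixture density, followed by a
# generative sampling step. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data drawn from an unknown bimodal distribution.
data = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(1.5, 1.0, 500)])

K = 2                          # number of mixture components (assumed)
mu = rng.normal(0.0, 1.0, K)   # component means
log_sigma = np.zeros(K)        # log standard deviations (exp keeps them positive)
logit_pi = np.zeros(K)         # unnormalized mixing weights

def mixture_logpdf(x, mu, log_sigma, logit_pi):
    """Log density of each point in x under the Gaussian mixture."""
    sigma = np.exp(log_sigma)
    pi = np.exp(logit_pi - logit_pi.max())
    pi = pi / pi.sum()
    # shape (n, K): log of pi_k * N(x | mu_k, sigma_k)
    comp = (np.log(pi) - 0.5 * np.log(2 * np.pi) - log_sigma
            - 0.5 * ((x[:, None] - mu) / sigma) ** 2)
    m = comp.max(axis=1, keepdims=True)   # stable log-sum-exp over components
    return (m + np.log(np.exp(comp - m).sum(axis=1, keepdims=True))).ravel()

lr, eps = 0.05, 1e-4
for step in range(2000):
    batch = rng.choice(data, size=64)     # stochastic minibatch
    params = [mu, log_sigma, logit_pi]
    grads = []
    # Numerical gradient of the mean log-likelihood (finite differences),
    # standing in for backpropagation in a neural-network implementation.
    for p in params:
        g = np.zeros_like(p)
        for i in range(p.size):
            p[i] += eps
            up = mixture_logpdf(batch, mu, log_sigma, logit_pi).mean()
            p[i] -= 2 * eps
            down = mixture_logpdf(batch, mu, log_sigma, logit_pi).mean()
            p[i] += eps
            g[i] = (up - down) / (2 * eps)
        grads.append(g)
    for p, g in zip(params, grads):
        p += lr * g                       # gradient *ascent* on log-likelihood

# Generative ("decoding") step: sample an ensemble from the fitted density.
pi = np.exp(logit_pi - logit_pi.max()); pi /= pi.sum()
comp_idx = rng.choice(K, size=1000, p=pi)
samples = rng.normal(mu[comp_idx], np.exp(log_sigma)[comp_idx])
print("fitted means:", np.round(np.sort(mu), 2))

In a neural-network realization as described in the abstract, the finite-difference gradients above would be replaced by backpropagated gradients and the mixture parameters by network weights; the structure (stochastic ascent on a likelihood-type objective, then sampling from the learned distribution) is the point of the illustration.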

Original language: English
Pages: 225-230
Number of pages: 6
State: Published - 1998
Event: Proceedings of the 1998 2nd International Conference on Knowledge-Based Intelligent Electronic Systems (KES '98) - Adelaide, Australia
Duration: 21 Apr 1998 → 23 Apr 1998

Conference

Conference: Proceedings of the 1998 2nd International Conference on Knowledge-Based Intelligent Electronic Systems (KES '98)
City: Adelaide, Australia
Period: 21/04/98 → 23/04/98
