Neural Joint Entropy Estimation

Yuval Shalev*, Amichai Painsky, Irad Ben-Gal

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Estimating the entropy of a discrete random variable is a fundamental problem in information theory and related fields, with applications in many domains, including machine learning, statistics, and data compression. Over the years, a variety of estimation schemes have been suggested. However, despite significant progress, most methods still struggle when the sample size is small compared to the variable's alphabet size. In this work, we introduce a practical solution to this problem, which extends the work of McAllester and Statos. The proposed scheme uses the generalization abilities of cross-entropy estimation in deep neural networks (DNNs) to achieve improved entropy estimation accuracy. Furthermore, we introduce a family of estimators for related information-theoretic measures, such as conditional entropy and mutual information (MI). We show that these estimators are strongly consistent and demonstrate their performance in a variety of use cases. First, we consider large alphabet entropy estimation. Then, we extend the scope to MI estimation. Next, we apply the proposed scheme to conditional MI estimation, with a focus on independence testing tasks. Finally, we study a transfer entropy (TE) estimation problem. The proposed estimators demonstrate improved performance compared to existing methods in all of these setups.
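The core principle behind cross-entropy-based entropy estimation can be illustrated without a neural network: for any model q, the cross-entropy E_p[-log q(X)] = H(p) + KL(p‖q) ≥ H(p), so the held-out cross-entropy of a fitted model is an (approximate) upper bound on the true entropy, and it tightens as the model improves. The sketch below is a toy stand-in for the DNN-based estimators discussed in the paper, using a Laplace-smoothed frequency model in place of a learned network; the function name and smoothing choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cross_entropy_estimate(samples, alphabet_size, alpha=1.0, seed=0):
    """Toy entropy estimator: fit a smoothed categorical model q on one
    half of the sample, then average -log q(x) over the other half.
    Since E_p[-log q] = H(p) + KL(p || q) >= H(p), this is an
    (approximately) upward-biased estimate of the true entropy.
    A stand-in for the neural cross-entropy estimators, not the
    paper's actual method."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    perm = rng.permutation(len(samples))
    half = len(samples) // 2
    train, test = samples[perm[:half]], samples[perm[half:]]
    # Laplace-smoothed frequencies play the role of the trained DNN.
    counts = np.bincount(train, minlength=alphabet_size) + alpha
    q = counts / counts.sum()
    return -np.mean(np.log(q[test]))  # cross-entropy in nats

# Example: uniform variable over 8 symbols, true entropy log(8) ~= 2.079 nats.
rng = np.random.default_rng(1)
data = rng.integers(0, 8, size=5000)
estimate = cross_entropy_estimate(data, 8)
```

With a large sample relative to the alphabet, the fitted model is close to the true distribution and the estimate approaches log(8); the paper's contribution lies in the small-sample, large-alphabet regime, where a DNN's generalization keeps the bound tight.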

Original language: English
Pages (from-to): 5488-5500
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 4
State: Published - 1 Apr 2024

Keywords

  • Cross-entropy
  • joint entropy
  • mutual information (MI)
  • neural networks
  • transfer entropy (TE)
