Abstract
We are interested in distributions that arise as the maximum-entropy distribution subject to a set of constraints; specifically, we consider the case where the constraints fix the expectations of individual attributes and of pairs of attributes. For such a maximum-entropy distribution we develop an efficient learning algorithm for read-once DNF. We also show how to extend our results to monotone read-k DNF, following the techniques of [HM91].
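To make the setting concrete, here is a minimal sketch (not from the paper; all names and the fitting routine are illustrative) of the kind of distribution the abstract describes: over binary attribute vectors x in {0,1}^n, the maximum-entropy distribution matching given single expectations E[x_i] and pairwise expectations E[x_i x_j] is an exponential-family distribution p(x) ∝ exp(Σ a_i x_i + Σ b_ij x_i x_j). The sketch fits the dual parameters by plain gradient ascent (moment matching) over an enumerated state space, which is feasible only for small n.

```python
# Sketch: maximum-entropy distribution over {0,1}^n constrained to match
# target expectations of single attributes and of attribute pairs.
# The maximizer has the exponential-family form
#   p(x) ∝ exp(sum_i a[i]*x[i] + sum_{(i,j)} b[i,j]*x[i]*x[j]),
# and we fit a, b by gradient ascent on the concave dual (moment matching).
# Illustrative only; brute-force enumeration limits this to small n.
import itertools
import math

def fit_maxent(n, singles, pairs, lr=0.5, steps=5000):
    """singles[i] = target E[x_i]; pairs[(i, j)] = target E[x_i * x_j]."""
    a = [0.0] * n
    b = {ij: 0.0 for ij in pairs}
    states = list(itertools.product([0, 1], repeat=n))
    for _ in range(steps):
        # Unnormalized log-weights of every state under current parameters.
        logw = [sum(a[i] * x[i] for i in range(n))
                + sum(b[ij] * x[ij[0]] * x[ij[1]] for ij in pairs)
                for x in states]
        m = max(logw)  # stabilize the exponentials
        w = [math.exp(lw - m) for lw in logw]
        Z = sum(w)
        p = [wi / Z for wi in w]
        # Model moments under the current distribution.
        e1 = [sum(pk * x[i] for pk, x in zip(p, states)) for i in range(n)]
        e2 = {ij: sum(pk * x[ij[0]] * x[ij[1]] for pk, x in zip(p, states))
              for ij in pairs}
        # Dual gradient: target moment minus model moment.
        for i in range(n):
            a[i] += lr * (singles[i] - e1[i])
        for ij in pairs:
            b[ij] += lr * (pairs[ij] - e2[ij])
    return dict(zip(states, p))

# Example: two positively correlated attributes.
dist = fit_maxent(2, singles=[0.6, 0.6], pairs={(0, 1): 0.5})
```

The fitted distribution reproduces the prescribed first and second moments while being as "spread out" (high-entropy) as those constraints allow; in the paper's setting, the learner is given examples drawn from such a distribution.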
| Original language | English |
|---|---|
| Pages | 201-209 |
| Number of pages | 9 |
| DOIs | |
| State | Published - 1997 |
| Event | 1997 10th Annual Conference on Computational Learning Theory - Nashville, TN, USA. Duration: 6 Jul 1997 → 9 Jul 1997 |