TY - JOUR
T1 - On the Role of Channel Capacity in Learning Gaussian Mixture Models
AU - Romanov, Elad
AU - Bendory, Tamir
AU - Ordentlich, Or
N1 - Publisher Copyright:
© 2022 E. Romanov, T. Bendory & O. Ordentlich.
PY - 2022
Y1 - 2022
N2 - This paper studies the sample complexity of learning the k unknown centers of a balanced Gaussian mixture model (GMM) in R^d with spherical covariance matrix σ²I. In particular, we are interested in the following question: what is the maximal noise level σ² for which the sample complexity is essentially the same as when estimating the centers from labeled measurements? To that end, we restrict attention to a Bayesian formulation of the problem, where the centers are uniformly distributed on the sphere √d·S^(d−1). Our main results characterize the exact noise threshold σ² below which the GMM learning problem, in the large system limit d, k → ∞, is as easy as learning from labeled observations, and above which it is substantially harder. The threshold occurs at log k = (1/2) log(1 + 1/σ²), which is the capacity of the additive white Gaussian noise (AWGN) channel. Thinking of the set of k centers as a code, this noise threshold can be interpreted as the largest noise level for which the error probability of the code over the AWGN channel is small. Previous works on the GMM learning problem have identified the minimum distance between the centers as a key parameter in determining the statistical difficulty of learning the corresponding GMM. While our results are only proved for GMMs whose centers are uniformly distributed over the sphere, they hint that perhaps it is the decoding error probability associated with the center constellation as a channel code that determines the statistical difficulty of learning the corresponding GMM, rather than just the minimum distance.
AB - This paper studies the sample complexity of learning the k unknown centers of a balanced Gaussian mixture model (GMM) in R^d with spherical covariance matrix σ²I. In particular, we are interested in the following question: what is the maximal noise level σ² for which the sample complexity is essentially the same as when estimating the centers from labeled measurements? To that end, we restrict attention to a Bayesian formulation of the problem, where the centers are uniformly distributed on the sphere √d·S^(d−1). Our main results characterize the exact noise threshold σ² below which the GMM learning problem, in the large system limit d, k → ∞, is as easy as learning from labeled observations, and above which it is substantially harder. The threshold occurs at log k = (1/2) log(1 + 1/σ²), which is the capacity of the additive white Gaussian noise (AWGN) channel. Thinking of the set of k centers as a code, this noise threshold can be interpreted as the largest noise level for which the error probability of the code over the AWGN channel is small. Previous works on the GMM learning problem have identified the minimum distance between the centers as a key parameter in determining the statistical difficulty of learning the corresponding GMM. While our results are only proved for GMMs whose centers are uniformly distributed over the sphere, they hint that perhaps it is the decoding error probability associated with the center constellation as a channel code that determines the statistical difficulty of learning the corresponding GMM, rather than just the minimum distance.
UR - http://www.scopus.com/inward/record.url?scp=85164696411&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85164696411
SN - 2640-3498
VL - 178
SP - 4110
EP - 4159
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 35th Conference on Learning Theory, COLT 2022
Y2 - 2 July 2022 through 5 July 2022
ER -