We study linear regression and classification in a setting where the learning algorithm may access only a limited number of attributes per example, known as the limited attribute observation model. In this well-studied model, we provide the first lower bounds establishing a limit on the precision attainable by any algorithm for several variants of regression, notably linear regression with the absolute loss and the squared loss, as well as for classification with the hinge loss. We complement these lower bounds with a general-purpose algorithm that gives an upper bound on the achievable precision limit in the setting of learning with missing data.
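To make the limited attribute observation model concrete, here is a minimal illustrative sketch (not the paper's algorithm): for each example, the learner observes only k of the d attributes, chosen uniformly at random, and importance-weights the observed entries to form an unbiased estimate of the inner product ⟨w, x⟩. All variable names and the choice of estimator are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3  # d attributes total; only k may be observed per example (assumed values)
x = rng.normal(size=d)  # a single example
w = rng.normal(size=d)  # current weight vector

def estimate_inner(w, x, k, rng):
    # Observe k of the d attributes uniformly at random without replacement,
    # then rescale by d/k so the estimate is unbiased for <w, x>.
    idx = rng.choice(len(x), size=k, replace=False)
    return (len(x) / k) * np.dot(w[idx], x[idx])

# Averaging many independent estimates should approach the true inner product.
est = np.mean([estimate_inner(w, x, k, rng) for _ in range(200_000)])
print(est, np.dot(w, x))
```

Such unbiased estimates can then drive a stochastic-gradient-style learner; the paper's lower bounds concern how precisely any such scheme can learn under this attribute budget.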
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
State: Published - 2016
Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
Duration: 5 Dec 2016 → 10 Dec 2016