TY - GEN
T1 - On agnostic boosting and parity learning
AU - Kalai, Adam Tauman
AU - Mansour, Yishay
AU - Verbin, Elad
PY - 2008
Y1 - 2008
N2 - The motivating problem is agnostically learning parity functions, i.e., parity with arbitrary or adversarial noise. Specifically, given random labeled examples from an arbitrary distribution, we would like to produce a hypothesis whose accuracy nearly matches the accuracy of the best parity function. Our algorithm runs in time 2^{O(n/log n)}, which matches the best known running time for the easier cases of learning parities with random classification noise (Blum et al., 2003) and of agnostically learning parities over the uniform distribution on inputs (Feldman et al., 2006). Our approach is as follows. We give an agnostic boosting theorem capable of nearly achieving optimal accuracy, improving upon earlier studies (starting with Ben-David et al., 2001). To achieve this, we circumvent previous lower bounds by altering the boosting model. We then show that the (random noise) parity learning algorithm of Blum et al. (2000) fits our new model of agnostic weak learner. Our agnostic boosting framework is completely general and may be applied to other agnostic learning problems. Hence, it also sheds light on the actual difficulty of agnostic learning by showing that full agnostic boosting is indeed possible.
KW - Agnostic boosting
KW - Agnostic learning
KW - Learning parity with noise
KW - Sub-exponential algorithms
UR - http://www.scopus.com/inward/record.url?scp=57049138338&partnerID=8YFLogxK
U2 - 10.1145/1374376.1374466
DO - 10.1145/1374376.1374466
M3 - Conference contribution
AN - SCOPUS:57049138338
SN - 9781605580470
T3 - Proceedings of the Annual ACM Symposium on Theory of Computing
SP - 629
EP - 638
BT - STOC'08
PB - Association for Computing Machinery (ACM)
T2 - 40th Annual ACM Symposium on Theory of Computing, STOC 2008
Y2 - 17 May 2008 through 20 May 2008
ER -