TY - GEN
T1 - Littlestone Classes are Privately Online Learnable
AU - Golowich, Noah
AU - Livni, Roi
N1 - Publisher Copyright:
© 2021 Neural information processing systems foundation. All rights reserved.
PY - 2021
Y1 - 2021
AB - We consider the problem of online classification under a privacy constraint. In this setting a learner observes sequentially a stream of labelled examples (x_t, y_t), for 1 ≤ t ≤ T, and returns at each iteration t a hypothesis h_t which is used to predict the label of each new example x_t. The learner's performance is measured by her regret against a known hypothesis class H. We require that the algorithm satisfies the following privacy constraint: the sequence h_1, ..., h_T of hypotheses output by the algorithm needs to be an (ε, δ)-differentially private function of the whole input sequence (x_1, y_1), ..., (x_T, y_T). We provide the first non-trivial regret bound for the realizable setting. Specifically, we show that if the class H has constant Littlestone dimension then, given an oblivious sequence of labelled examples, there is a private learner that makes in expectation at most O(log T) mistakes, comparable to the optimal mistake bound in the non-private case up to a logarithmic factor. Moreover, for general values of the Littlestone dimension d, the same mistake bound holds, but with a factor that is doubly exponential in d. A recent line of work has demonstrated a strong connection between classes that are online learnable and those that are learnable under differential privacy. Our results strengthen this connection and show that an online learning algorithm can in fact be directly privatized (in the realizable setting). We also discuss an adaptive setting and provide a sublinear regret bound of O(√T).
UR - http://www.scopus.com/inward/record.url?scp=85131815060&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85131815060
T3 - Advances in Neural Information Processing Systems
SP - 11462
EP - 11473
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural Information Processing Systems Foundation
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Y2 - 6 December 2021 through 14 December 2021
ER -