TY - JOUR
T1 - Private Online Prediction from Experts
T2 - 36th Annual Conference on Learning Theory, COLT 2023
AU - Asi, Hilal
AU - Feldman, Vitaly
AU - Koren, Tomer
AU - Talwar, Kunal
N1 - Publisher Copyright:
© 2023 H. Asi, V. Feldman, T. Koren & K. Talwar.
PY - 2023
Y1 - 2023
AB - Online prediction from experts is a fundamental problem in machine learning, and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of Õ(√T log d + log d/ε) for the stochastic setting and Õ(√T log d + T^{1/3} log d/ε) for oblivious adversaries (where d is the number of experts). For pure DP, our algorithms are the first to obtain sub-linear regret for oblivious adversaries in the high-dimensional regime d ≥ T. Moreover, we prove new lower bounds for adaptive adversaries. Our results imply that, unlike the non-private setting, there is a strong separation between the optimal regret for adaptive and non-adaptive adversaries for this problem. Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries, where the latter is necessary to achieve the non-private O(√T) regret.
UR - http://www.scopus.com/inward/record.url?scp=85171577704&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85171577704
SN - 2640-3498
VL - 195
SP - 674
EP - 699
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 12 July 2023 through 15 July 2023
ER -