TY - JOUR
T1 - Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries
AU - Cohen, Edith
AU - Lyu, Xin
AU - Nelson, Jelani
AU - Sarlós, Tamás
AU - Stemmer, Uri
N1 - Publisher Copyright:
© 2024 E. Cohen, X. Lyu, J. Nelson, T. Sarlós & U. Stemmer.
PY - 2024
Y1 - 2024
N2 - One of the most basic problems for studying the “price of privacy over time” is the so-called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2011). In this problem, we aim to track the number of events that occur over time, while hiding the existence of every single event. More specifically, in every time step t ∈ [T] we learn (in an online fashion) that ∆_t ≥ 0 new events have occurred, and must respond with an estimate n_t ≈ ∑_{j=1}^{t} ∆_j. The privacy requirement is that all of the outputs together, across all time steps, satisfy event-level differential privacy. The main question here is how our error needs to depend on the total number of time steps T and the total number of events n. Dwork et al. (2015) showed an upper bound of O(log(T) + log²(n)), and Henzinger et al. (2023) showed a lower bound of Ω(min{log n, log T}). We show a new lower bound of Ω(min{n, log T}), which is tight w.r.t. the dependence on T, and is tight in the sparse case where log² n = O(log T). Our lower bound has the following implications: • We show that our lower bound extends to the online thresholds problem, where the goal is to privately answer many “quantile queries” when these queries are presented one-by-one. This resolves an open question of Bun et al. (2017). • Our lower bound implies, for the first time, a separation between the number of mistakes obtainable by a private online learner and a non-private online learner. This partially resolves a COLT'22 open question published by Sanyal and Ramponi. • Our lower bound also yields the first separation between the standard model of private online learning and a recently proposed relaxed variant of it, called private online prediction.
AB - One of the most basic problems for studying the “price of privacy over time” is the so-called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2011). In this problem, we aim to track the number of events that occur over time, while hiding the existence of every single event. More specifically, in every time step t ∈ [T] we learn (in an online fashion) that ∆_t ≥ 0 new events have occurred, and must respond with an estimate n_t ≈ ∑_{j=1}^{t} ∆_j. The privacy requirement is that all of the outputs together, across all time steps, satisfy event-level differential privacy. The main question here is how our error needs to depend on the total number of time steps T and the total number of events n. Dwork et al. (2015) showed an upper bound of O(log(T) + log²(n)), and Henzinger et al. (2023) showed a lower bound of Ω(min{log n, log T}). We show a new lower bound of Ω(min{n, log T}), which is tight w.r.t. the dependence on T, and is tight in the sparse case where log² n = O(log T). Our lower bound has the following implications: • We show that our lower bound extends to the online thresholds problem, where the goal is to privately answer many “quantile queries” when these queries are presented one-by-one. This resolves an open question of Bun et al. (2017). • Our lower bound implies, for the first time, a separation between the number of mistakes obtainable by a private online learner and a non-private online learner. This partially resolves a COLT'22 open question published by Sanyal and Ramponi. • Our lower bound also yields the first separation between the standard model of private online learning and a recently proposed relaxed variant of it, called private online prediction.
UR - http://www.scopus.com/inward/record.url?scp=85203677458&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85203677458
SN - 2640-3498
VL - 247
SP - 1200
EP - 1222
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 37th Annual Conference on Learning Theory, COLT 2024
Y2 - 30 June 2024 through 3 July 2024
ER -