TY - JOUR

T1 - Concurrent Shuffle Differential Privacy Under Continual Observation

AU - Tenenbaum, Jay

AU - Kaplan, Haim

AU - Mansour, Yishay

AU - Stemmer, Uri

N1 - Publisher Copyright:
© 2023 Proceedings of Machine Learning Research. All rights reserved.

PY - 2023

Y1 - 2023

N2 - We introduce the concurrent shuffle model of differential privacy. In this model we have multiple concurrent shufflers permuting messages from different, possibly overlapping, batches of users. As in the standard (single) shuffle model, the privacy requirement is that the concatenation of all shuffled messages should be differentially private. We study the private continual summation problem (a.k.a. the counter problem) and show that the concurrent shuffle model allows for significantly improved error compared to a standard (single) shuffle model. Specifically, we give a summation algorithm with error Õ(n^{1/(2k+1)}) with k concurrent shufflers on a sequence of length n. Furthermore, we prove that this bound is tight for any k, even if the algorithm can choose the sizes of the batches adaptively. For k = log n shufflers, the resulting error is polylogarithmic, much better than Θ̃(n^{1/3}), which we show is the smallest possible with a single shuffler. We use our online summation algorithm to obtain algorithms with improved regret bounds for the contextual linear bandit problem. In particular, we get optimal Õ(√n) regret with k = Ω̃(log n) concurrent shufflers.

AB - We introduce the concurrent shuffle model of differential privacy. In this model we have multiple concurrent shufflers permuting messages from different, possibly overlapping, batches of users. As in the standard (single) shuffle model, the privacy requirement is that the concatenation of all shuffled messages should be differentially private. We study the private continual summation problem (a.k.a. the counter problem) and show that the concurrent shuffle model allows for significantly improved error compared to a standard (single) shuffle model. Specifically, we give a summation algorithm with error Õ(n^{1/(2k+1)}) with k concurrent shufflers on a sequence of length n. Furthermore, we prove that this bound is tight for any k, even if the algorithm can choose the sizes of the batches adaptively. For k = log n shufflers, the resulting error is polylogarithmic, much better than Θ̃(n^{1/3}), which we show is the smallest possible with a single shuffler. We use our online summation algorithm to obtain algorithms with improved regret bounds for the contextual linear bandit problem. In particular, we get optimal Õ(√n) regret with k = Ω̃(log n) concurrent shufflers.

UR - http://www.scopus.com/inward/record.url?scp=85174412998&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85174412998

SN - 2640-3498

VL - 202

SP - 33961

EP - 33982

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

T2 - 40th International Conference on Machine Learning, ICML 2023

Y2 - 23 July 2023 through 29 July 2023

ER -