TY - GEN

T1 - Private stochastic convex optimization: Optimal rates in linear time

T2 - 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020

AU - Feldman, Vitaly

AU - Koren, Tomer

AU - Talwar, Kunal

N1 - Publisher Copyright:
© 2020 Owner/Author.

PY - 2020/6/8

Y1 - 2020/6/8

N2 - We study differentially private (DP) algorithms for stochastic convex optimization: the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions. A recent work of Bassily et al. (2019) has established the optimal bound on the excess population loss achievable given n samples. Unfortunately, their algorithm achieving this bound is relatively inefficient: it requires O(min{n^{3/2}, n^{5/2}/d}) gradient computations, where d is the dimension of the optimization problem. We describe two new techniques for deriving DP convex optimization algorithms both achieving the optimal bound on excess loss and using O(min{n, n^2/d}) gradient computations. In particular, the algorithms match the running time of the optimal non-private algorithms. The first approach relies on the use of variable batch sizes and is analyzed using the privacy amplification by iteration technique of Feldman et al. (2018). The second approach is based on a general reduction to the problem of localizing an approximately optimal solution with differential privacy. Such localization, in turn, can be achieved using existing (non-private) uniformly stable optimization algorithms. As in the earlier work, our algorithms require a mild smoothness assumption. We also give a linear-time optimal algorithm for the strongly convex case, as well as a faster algorithm for the non-smooth case.

AB - We study differentially private (DP) algorithms for stochastic convex optimization: the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions. A recent work of Bassily et al. (2019) has established the optimal bound on the excess population loss achievable given n samples. Unfortunately, their algorithm achieving this bound is relatively inefficient: it requires O(min{n^{3/2}, n^{5/2}/d}) gradient computations, where d is the dimension of the optimization problem. We describe two new techniques for deriving DP convex optimization algorithms both achieving the optimal bound on excess loss and using O(min{n, n^2/d}) gradient computations. In particular, the algorithms match the running time of the optimal non-private algorithms. The first approach relies on the use of variable batch sizes and is analyzed using the privacy amplification by iteration technique of Feldman et al. (2018). The second approach is based on a general reduction to the problem of localizing an approximately optimal solution with differential privacy. Such localization, in turn, can be achieved using existing (non-private) uniformly stable optimization algorithms. As in the earlier work, our algorithms require a mild smoothness assumption. We also give a linear-time optimal algorithm for the strongly convex case, as well as a faster algorithm for the non-smooth case.

KW - Differential Privacy

KW - Stochastic Convex Optimization

KW - Stochastic Gradient Descent

UR - http://www.scopus.com/inward/record.url?scp=85086764029&partnerID=8YFLogxK

U2 - 10.1145/3357713.3384335

DO - 10.1145/3357713.3384335

M3 - Conference contribution

AN - SCOPUS:85086764029

T3 - Proceedings of the Annual ACM Symposium on Theory of Computing

SP - 439

EP - 449

BT - STOC 2020 - Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing

A2 - Makarychev, Konstantin

A2 - Makarychev, Yury

A2 - Tulsiani, Madhur

A2 - Kamath, Gautam

A2 - Chuzhoy, Julia

PB - Association for Computing Machinery

Y2 - 22 June 2020 through 26 June 2020

ER -