TY - GEN
T1 - Asynchronous Stochastic Optimization Robust to Arbitrary Delays
AU - Cohen, Alon
AU - Daniely, Amit
AU - Drori, Yoel
AU - Koren, Tomer
AU - Schain, Mariano
N1 - Publisher Copyright:
© 2021 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2021
Y1 - 2021
N2 - We consider stochastic optimization with delayed gradients where, at each time step t, the algorithm makes an update using a stale stochastic gradient from step t − d_t for some arbitrary delay d_t. This setting abstracts asynchronous distributed optimization, in which a central server receives gradient updates computed by worker machines whose computation and communication loads can vary significantly over time. In the general non-convex smooth optimization setting, we give a simple and efficient algorithm that requires O(σ²/ϵ⁴ + τ/ϵ²) steps to find an ϵ-stationary point x, where τ = (1/T)∑_{t=1}^{T} d_t is the average delay and σ² is the variance of the stochastic gradients. This improves over previous work, which showed that stochastic gradient descent achieves the same rate but with respect to the maximal delay max_t d_t, which can be significantly larger than the average delay, especially in heterogeneous distributed systems. Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
UR - http://www.scopus.com/inward/record.url?scp=85131885352&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85131885352
T3 - Advances in Neural Information Processing Systems
SP - 9024
EP - 9035
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural Information Processing Systems Foundation
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Y2 - 6 December 2021 through 14 December 2021
ER -