Stochastic optimization with laggard data pipelines

Naman Agarwal, Rohan Anil, Tomer Koren, Kunal Talwar, Cyril Zhang

Research output: Contribution to journal › Conference article › peer-review


Abstract

State-of-the-art optimization is steadily shifting towards massively parallel pipelines with extremely large batch sizes. As a consequence, CPU-bound preprocessing and disk/memory/network operations have emerged as new performance bottlenecks, as opposed to hardware-accelerated gradient computations. In this regime, a recently proposed approach is data echoing (Choi et al., 2019), which takes repeated gradient steps on the same batch while waiting for fresh data to arrive from upstream. We provide the first convergence analyses of “data-echoed” extensions of common optimization methods, showing that they exhibit provable improvements over their synchronous counterparts. Specifically, we show that in convex optimization with stochastic minibatches, data echoing affords speedups on the curvature-dominated part of the convergence rate, while maintaining the optimal statistical rate.
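As a rough illustration of the mechanism described in the abstract, the sketch below shows a data-echoed variant of minibatch SGD: each batch fetched from a (possibly slow) upstream pipeline is reused for several gradient steps before the next batch is requested. The echo factor, the fetch_batch data source, and the least-squares loss are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def fetch_batch(rng, dim=10, batch_size=32):
    """Hypothetical stand-in for a slow upstream data pipeline:
    returns a (features, targets) minibatch for a least-squares problem."""
    X = rng.standard_normal((batch_size, dim))
    w_true = np.ones(dim)
    y = X @ w_true + 0.1 * rng.standard_normal(batch_size)
    return X, y

def data_echoed_sgd(num_fetches=100, echo_factor=4, lr=0.01, dim=10, seed=0):
    """Minibatch SGD that reuses ("echoes") each fetched batch for
    `echo_factor` gradient steps, modeling the cheap accelerator steps taken
    while the next batch is still in flight. echo_factor=1 recovers the
    plain synchronous baseline."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for _ in range(num_fetches):
        X, y = fetch_batch(rng, dim)           # one (slow) pipeline fetch
        for _ in range(echo_factor):           # several gradient steps on the same batch
            grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
            w -= lr * grad
    return w

if __name__ == "__main__":
    w = data_echoed_sgd()
    print("learned weights:", np.round(w, 3))
```

Under this toy setup, increasing echo_factor performs more optimization work per pipeline fetch, which is the intuition behind the speedup on the curvature-dominated part of the rate analyzed in the paper.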

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 - 12 Dec 2020

Funding

Funders: National Science Foundation
Funder numbers: IIS-1523815, CCF-1704860
