Large-scale methods for distributionally robust optimization

Daniel Levy*, Yair Carmon*, John C. Duchi, Aaron Sidford

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Peer-review

95 Scopus citations

Abstract

We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and χ² divergence uncertainty sets. We prove that our algorithms require a number of gradient evaluations independent of training set size and number of parameters, making them suitable for large-scale applications. For χ² uncertainty sets these are the first such guarantees in the literature, and for CVaR our guarantees scale linearly in the uncertainty level rather than quadratically as in previous work. We also provide lower bounds proving the worst-case optimality of our algorithms for CVaR and a penalized version of the χ² problem. Our primary technical contributions are novel bounds on the bias of batch robust risk estimation and the variance of a multilevel Monte Carlo gradient estimator due to Blanchet and Glynn [8]. Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9–36 times more efficient than full-batch methods.
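To make the "batch robust risk estimation" mentioned in the abstract concrete, the sketch below shows a plug-in CVaR estimate on a minibatch of losses: CVaR at uncertainty level α is the average loss over the worst α-fraction of samples. This is a minimal illustration, not the authors' implementation; the function name and the example data are assumptions.

```python
import numpy as np

def cvar_batch_risk(losses: np.ndarray, alpha: float) -> float:
    """Plug-in batch estimate of the CVaR robust risk at level alpha.

    On a batch of n losses, CVaR_alpha is approximated by averaging the
    largest ceil(alpha * n) losses. Illustrative sketch only; names and
    choices here are not taken from the paper's code.
    """
    n = losses.size
    k = max(1, int(np.ceil(alpha * n)))          # number of worst-case samples
    worst = np.partition(losses, n - k)[n - k:]  # top-k losses (unordered)
    return float(worst.mean())

# Example usage: average loss over the worst 10% of a synthetic batch.
rng = np.random.default_rng(0)
batch_losses = rng.exponential(scale=1.0, size=1000)
print(cvar_batch_risk(batch_losses, alpha=0.1))
```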

Original language: English
Pages (from-to): 8847-8860
Number of pages: 14
Journal: Advances in Neural Information Processing Systems
Volume: 33
State: Published - 2020
Externally published: Yes
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 - 12 Dec 2020

Funding

Funder | Funder number
National Science Foundation | HDR 1934578, CCF-1553086
Office of Naval Research | N00014-19-2288
Microsoft Research | CCF-1955039, CCF-1844855
