Variance reduction for matrix games

Yair Carmon, Yujia Jin, Aaron Sidford, Kevin Tian

Research output: Contribution to journal › Conference article › peer-review


Abstract

We present a randomized primal-dual algorithm that solves the problem min_x max_y y⊤Ax to additive error ε in time nnz(A) + √(nnz(A)·n)/ε, for a matrix A with larger dimension n and nnz(A) nonzero entries. This improves the best known exact gradient methods by a factor of √(nnz(A)/n) and is faster than fully stochastic gradient methods in the accurate and/or sparse regime ε ≤ √(n/nnz(A)). Our results hold for x, y in the simplex (matrix games, linear programming) and for x in an ℓ2 ball and y in the simplex (perceptron / SVM, minimum enclosing ball). Our algorithm combines Nemirovski's "conceptual prox-method" and a novel reduced-variance gradient estimator based on "sampling from the difference" between the current iterate and a reference point.
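
To illustrate the "sampling from the difference" idea described above, here is a minimal NumPy sketch of what such a variance-reduced gradient estimator could look like for the bilinear objective y⊤Ax. This is an assumption-laden illustration, not the paper's exact construction: the function name sampled_difference_gradient and all variable names are hypothetical, A is assumed dense, and both blocks use ℓ1-style sampling proportional to |coordinate differences|, whereas the paper tailors the sampling distribution to the geometry of each domain (simplex vs ℓ2 ball).

```python
import numpy as np

def sampled_difference_gradient(A, x, y, x0, y0, gx0, gy0, rng):
    """Illustrative sketch (not the paper's exact estimator).

    gx0 = A.T @ y0 and gy0 = A @ x0 are exact gradients at a reference
    point (x0, y0), computed once and reused. The estimator returns
    unbiased estimates of A.T @ y and A @ x using one sampled row and
    one sampled column of A per call.
    """
    # x-gradient: estimate A.T @ (y - y0) by sampling row i of A with
    # probability proportional to |y_i - y0_i|, then correct by 1/p_i.
    dy = y - y0
    py = np.abs(dy)
    if py.sum() > 0:
        py = py / py.sum()
        i = rng.choice(len(y), p=py)
        gx = gx0 + (dy[i] / py[i]) * A[i, :]
    else:
        gx = gx0.copy()

    # y-gradient: estimate A @ (x - x0) by sampling column j of A with
    # probability proportional to |x_j - x0_j| (ℓ1-style for illustration).
    dx = x - x0
    px = np.abs(dx)
    if px.sum() > 0:
        px = px / px.sum()
        j = rng.choice(len(x), p=px)
        gy = gy0 + (dx[j] / px[j]) * A[:, j]
    else:
        gy = gy0.copy()

    # E[gx] = A.T @ y and E[gy] = A @ x; the variance shrinks as the
    # iterate (x, y) approaches the reference point (x0, y0).
    return gx, gy
```

In a full method along the lines sketched in the abstract, the exact reference gradients (gx0, gy0) would be recomputed at each outer iteration of the prox-method in nnz(A) time, while the cheap estimates above drive the inner stochastic steps.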

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 32
State: Published - 2019
Externally published: Yes
Event: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada
Duration: 8 Dec 2019 – 14 Dec 2019

Funding

Funder: National Science Foundation
Funder numbers: DGE1656518, CCF-1844855
