SGD Generalizes Better Than GD (And Regularization Doesn't Help)

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model. While for SGD it is well-known that O(1/ϵ²) iterations suffice for obtaining a solution with ϵ excess expected risk, we show that with the same number of steps GD may overfit and emit a solution with Ω(1) generalization error. Moreover, we show that in fact Ω(1/ϵ⁴) iterations are necessary for GD to match the generalization performance of SGD, which is also tight due to recent work by Bassily et al. (2020). We further discuss how regularizing the empirical risk minimized by GD essentially does not change the above result, and revisit the concepts of stability, implicit bias and the role of the learning algorithm in generalization.
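To make the contrast concrete, below is a minimal sketch (not from the paper) of the two algorithms being compared: one-pass SGD, which takes each step on an independent fresh sample, versus full-batch GD, which repeatedly descends the empirical risk of one fixed sample. The quadratic toy loss and all names here are illustrative assumptions, not the paper's construction.

import numpy as np

# Illustrative sketch only: contrast one-pass SGD with full-batch GD.
# The toy loss f(w; z) = 0.5 * ||w - z||^2 is an assumption for exposition.
rng = np.random.default_rng(0)
dim, eta = 5, 0.1

def grad(w, z):
    """Gradient of the toy loss f(w; z) = 0.5 * ||w - z||^2."""
    return w - z

def sgd(T):
    """One-pass SGD: step t uses an independent fresh sample z_t."""
    w = np.zeros(dim)
    for _ in range(T):
        z = rng.normal(size=dim)       # fresh draw from the population
        w -= eta * grad(w, z)
    return w

def full_batch_gd(T, n):
    """Full-batch GD: T steps on the empirical risk of n fixed samples."""
    data = rng.normal(size=(n, dim))   # sample drawn once, then reused
    w = np.zeros(dim)
    for _ in range(T):
        w -= eta * grad(w, data).mean(axis=0)  # average gradient over batch
    return w

# In the paper's regime, T = O(1/eps^2) steps suffice for SGD to reach eps
# excess risk, while GD may need Omega(1/eps^4) steps to generalize as well.
print(sgd(T=100))
print(full_batch_gd(T=100, n=100))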
Original language: English
Title of host publication: Proceedings of Thirty Fourth Conference on Learning Theory
Editors: Mikhail Belkin, Samory Kpotufe
Publisher: PMLR
Pages: 63-92
Number of pages: 30
State: Published - 2021
Event: 34th Annual Conference on Learning Theory, COLT 2021 - Boulder, United States
Duration: 15 Aug 2021 – 19 Aug 2021
Conference number: 34

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 134
ISSN (Electronic): 2640-3498

Conference

Conference: 34th Annual Conference on Learning Theory, COLT 2021
Abbreviated title: COLT 2021
Country/Territory: United States
City: Boulder
Period: 15/08/21 – 19/08/21
