Abstract
We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model. While for SGD it is well-known that O(1/ϵ²) iterations suffice for obtaining a solution with ϵ excess expected risk, we show that with the same number of steps GD may overfit and emit a solution with Ω(1) generalization error. Moreover, we show that in fact Ω(1/ϵ⁴) iterations are necessary for GD to match the generalization performance of SGD, which is also tight due to recent work by Bassily et al. (2020). We further discuss how regularizing the empirical risk minimized by GD essentially does not change the above result, and revisit the concepts of stability, implicit bias and the role of the learning algorithm in generalization.
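To make the contrast concrete, below is a minimal NumPy sketch of the two algorithms side by side: one-pass SGD, which takes each gradient step on a fresh sample, versus full-batch GD, which repeatedly descends the empirical risk of a fixed training set. This is an illustration of the update rules only, not the paper's lower-bound construction (which relies on a carefully crafted convex instance); the least-squares objective, step size, and all names here are hypothetical choices.

```python
# Minimal sketch (assumed toy setup, not the paper's construction):
# one-pass SGD vs. full-batch GD on least squares
#   f(w; x, y) = 0.5 * (w @ x - y)^2   with Gaussian data.
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 20, 200, 0.01            # illustrative dimension, sample size, step size
w_star = rng.normal(size=d) / np.sqrt(d)

# Fixed training sample of n i.i.d. examples.
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

def population_risk(w, trials=20000):
    """Monte Carlo estimate of the expected risk E[0.5 * (w @ x - y)^2]."""
    Xt = rng.normal(size=(trials, d))
    yt = Xt @ w_star + 0.1 * rng.normal(size=trials)
    return 0.5 * np.mean((Xt @ w - yt) ** 2)

# One-pass SGD: n steps, each on a fresh example; the classical analysis
# gives eps excess expected risk after O(1/eps^2) such steps.
w_sgd = np.zeros(d)
for i in range(n):
    grad = (w_sgd @ X[i] - y[i]) * X[i]
    w_sgd -= eta * grad

# Full-batch GD: n steps on the *empirical* risk of the same sample;
# the paper shows this can generalize strictly worse at the same step count.
w_gd = np.zeros(d)
for _ in range(n):
    grad = X.T @ (X @ w_gd - y) / n
    w_gd -= eta * grad

print(f"SGD population risk: {population_risk(w_sgd):.4f}")
print(f"GD  population risk: {population_risk(w_gd):.4f}")
print(f"GD  empirical  risk: {0.5 * np.mean((X @ w_gd - y) ** 2):.4f}")
```

On a benign instance like this one both methods generalize; the paper's separation is that there exist stochastic convex problems where GD, with the same step budget, fits the empirical risk yet incurs Ω(1) excess population risk.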
Original language | English |
---|---|
Title of host publication | Proceedings of Thirty Fourth Conference on Learning Theory |
Editors | Mikhail Belkin, Samory Kpotufe |
Publisher | PMLR |
Pages | 63-92 |
Number of pages | 30 |
State | Published - 2021 |
Event | 34th Annual Conference on Learning Theory, COLT 2021 - Boulder, United States |
Duration | 15 Aug 2021 → 19 Aug 2021 |
Conference number | 34 |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Publisher | PMLR |
Volume | 134 |
ISSN (Electronic) | 2640-3498 |
Conference
Conference | 34th Annual Conference on Learning Theory, COLT 2021 |
---|---|
Abbreviated title | COLT 2021 |
Country/Territory | United States |
City | Boulder |
Period | 15/08/21 → 19/08/21 |