Gradient Descent Monotonically Decreases the Sharpness of Gradient Flow Solutions in Scalar Networks and Beyond

Itai Kreisler*, Mor Shpigel Nacson*, Daniel Soudry, Yair Carmon

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review


Abstract

Recent research shows that when Gradient Descent (GD) is applied to neural networks, the loss almost never decreases monotonically. Instead, the loss oscillates as gradient descent converges to its “Edge of Stability” (EoS). Here, we find a quantity that does decrease monotonically throughout GD training: the sharpness attained by the gradient flow solution (GFS), i.e., the solution that would be obtained if, from now until convergence, we trained with an infinitesimal step size. Theoretically, we analyze scalar neural networks with the squared loss, perhaps the simplest setting where the EoS phenomena still occur. In this model, we prove that the GFS sharpness decreases monotonically. Using this result, we characterize settings where GD provably converges to the EoS in scalar networks. Empirically, we show that GD monotonically decreases the GFS sharpness in a squared regression model as well as in practical neural network architectures.
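To make the GFS-sharpness idea concrete, the following is a minimal illustrative sketch (not the authors' code): a two-parameter scalar network f(x) = a·b·x fit to a single example with squared loss L(a, b) = 0.5·(ab − y)². At each GD iterate, the gradient flow solution is approximated by running GD with a very small step size to convergence, and the sharpness (top Hessian eigenvalue) is recorded at that limit point. The step size, initialization, and tolerances below are arbitrary choices made for illustration.

```python
# Minimal sketch (not the authors' code): tracking GFS sharpness in a scalar network.
# Model: f(x) = a * b * x, one example with target y, squared loss
#   L(a, b) = 0.5 * (a * b - y)^2.
# "Sharpness" is the top eigenvalue of the Hessian of L; the "gradient flow
# solution" (GFS) from a point is approximated here by tiny-step GD run to
# convergence. All constants below are arbitrary illustrative choices.
import numpy as np

y = 1.0  # regression target

def loss(a, b):
    return 0.5 * (a * b - y) ** 2

def grad(a, b):
    r = a * b - y
    return np.array([r * b, r * a])

def sharpness(a, b):
    # Hessian of L at (a, b); sharpness = its largest eigenvalue.
    r = a * b - y
    H = np.array([[b * b, r + a * b],
                  [r + a * b, a * a]])
    return np.linalg.eigvalsh(H)[-1]  # eigvalsh returns eigenvalues in ascending order

def gfs_sharpness(a, b, eta_small=1e-3, tol=1e-8, max_iter=10**6):
    # Approximate gradient flow from (a, b) by tiny-step GD, then
    # return the sharpness of the (approximate) limit point.
    w = np.array([a, b], dtype=float)
    for _ in range(max_iter):
        g = grad(*w)
        if np.linalg.norm(g) < tol:
            break
        w -= eta_small * g
    return sharpness(*w)

# Large-step GD: the loss is non-monotone, while the GFS sharpness should
# decrease monotonically (the quantity studied in the paper).
eta = 0.8                  # with this init, the initial GFS sharpness (~2.83) exceeds 2 / eta = 2.5
w = np.array([1.5, 0.5])   # unbalanced initialization
for t in range(30):
    print(f"t={t:2d}  loss={loss(*w):.5f}  GFS sharpness={gfs_sharpness(*w):.4f}")
    w -= eta * grad(*w)
```

In this toy setting, gradient flow conserves a² − b², so the GFS sharpness can also be computed in closed form; the tiny-step approximation above is used only to mirror the definition in the abstract.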

Original language: English
Pages (from-to): 17684-17744
Number of pages: 61
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

Funding

Funders (funder number):
Blavatnik Family Foundation
European Research Executive Agency
European Commission
European Research Council (101039436)
Israel Science Foundation (2486/21)
