TY - JOUR
T1 - On the Implicit Bias of Gradient Descent for Temporal Extrapolation
AU - Cohen-Karlik, Edo
AU - Ben David, Avichai
AU - Cohen, Nadav
AU - Globerson, Amir
N1 - Publisher Copyright:
© 2022 by the author(s)
PY - 2022
Y1 - 2022
AB - When using recurrent neural networks (RNNs) it is common practice to apply trained models to sequences longer than those seen in training. This “extrapolating” usage deviates from the traditional statistical learning setup, where guarantees are provided under the assumption that train and test distributions are identical. Here we set out to understand when RNNs can extrapolate, focusing on a simple case where the data-generating distribution is memoryless. We first show that even with infinite training data, there exist RNN models that interpolate perfectly (i.e., they fit the training data) yet extrapolate poorly to longer sequences. We then show that if gradient descent is used for training, learning will converge to perfect extrapolation under certain assumptions on initialization. Our results complement recent studies on the implicit bias of gradient descent, showing that it plays a key role in extrapolation when learning temporal prediction models.
UR - http://www.scopus.com/inward/record.url?scp=85163130695&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85163130695
SN - 2640-3498
VL - 151
SP - 10966
EP - 10981
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022
Y2 - 28 March 2022 through 30 March 2022
ER -