Global optimality of Elman-type RNNs in the mean-field regime

Conference Publication
Agazzi, A.; Lu, J.; Mukherjee, S.
Published in: Proceedings of Machine Learning Research
January 1, 2023

We analyze Elman-type Recurrent Neural Networks (RNNs) and their training in the mean-field regime. Specifically, we show that the gradient descent training dynamics of the RNN converge to the corresponding mean-field formulation in the large-width limit. We also show that, under some assumptions on the initialization of the weights, the fixed points of the limiting infinite-width dynamics are globally optimal. Our results establish optimality of feature learning with wide RNNs in the mean-field regime.
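
For intuition, the following is a minimal, hypothetical NumPy sketch of the kind of width-N Elman recurrence with 1/N (mean-field) normalization that the abstract refers to; the weight shapes, the tanh activation, the scalar readout, and the function name are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def elman_mean_field_forward(xs, W, U, A, phi=np.tanh):
    """Forward pass of a width-N Elman RNN with 1/N mean-field scaling.

    xs : (T, d) input sequence
    W  : (N, N) recurrent weights
    U  : (N, d) input-to-hidden weights
    A  : (N,)   readout weights
    Returns a scalar prediction computed from the final hidden state.
    """
    N = W.shape[0]
    h = np.zeros(N)
    for x in xs:
        # The recurrent interaction is normalized by 1/N, so each unit sees an
        # empirical average over the other units' states (mean-field scaling).
        h = phi(W @ h / N + U @ x)
    # 1/N-scaled linear readout: the output is an average over the N units.
    return A @ h / N
```

Roughly speaking, as the width N grows these 1/N-averaged sums behave like integrals against a distribution over per-unit parameters; gradient descent on the finite network then converges to dynamics on that distribution, and it is the fixed points of this limiting flow that the paper shows to be globally optimal.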


Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2023

Volume

202

Start / End Page

196 / 227
 

Citation

Agazzi, A., Lu, J., & Mukherjee, S. (2023). Global optimality of Elman-type RNNs in the mean-field regime. In Proceedings of Machine Learning Research (Vol. 202, pp. 196–227).
