Disentangling Trainability and Generalization in Deep Neural Networks

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020) pre-proceedings

Authors

Lechao Xiao, Jeffrey Pennington, Samuel Schoenholz

Abstract

A longstanding goal in the theory of deep learning is to characterize the conditions under which a given neural network architecture will be trainable, and if so, how well it might generalize to unseen data. In this work, we provide such a characterization in the limit of very wide and very deep networks, for which the analysis simplifies considerably. For wide networks, the trajectory under gradient descent is governed by the Neural Tangent Kernel (NTK), and for deep networks, the NTK itself maintains only weak data dependence. By analyzing the spectrum of the NTK, we formulate necessary conditions for trainability and generalization across a range of architectures, including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs). We identify large regions of hyperparameter space for which networks can memorize the training set but completely fail to generalize. We find that CNNs without global average pooling behave almost identically to FCNs, but that CNNs with pooling have markedly different and often better generalization performance. A thorough empirical investigation of these theoretical results shows excellent agreement on real datasets.
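
The NTK spectrum at the center of this analysis can be computed in closed form for infinitely wide networks. The following is a minimal sketch, not taken from the paper, using the open-source neural_tangents library; the depth, width, and the weight/bias standard deviations W_std and b_std are illustrative choices standing in for the hyperparameters the abstract refers to.

```python
# Minimal sketch (assumed setup, not the paper's code): compute the
# infinite-width NTK of a deep ReLU FCN and inspect its eigenvalue spectrum.
import jax.numpy as jnp
from jax import random
from neural_tangents import stax


def fcn_kernel(depth=50, W_std=1.5, b_std=0.05):
    """Analytic NTK of a deep ReLU fully connected network.

    W_std and b_std play the role of the (sigma_w, sigma_b) hyperparameters
    whose setting determines the regions of hyperparameter space discussed
    in the abstract.
    """
    layers = []
    for _ in range(depth):
        # The hidden width (512) only matters for finite networks; kernel_fn
        # below is the exact infinite-width kernel.
        layers += [stax.Dense(512, W_std=W_std, b_std=b_std), stax.Relu()]
    layers += [stax.Dense(1, W_std=W_std, b_std=b_std)]
    _, _, kernel_fn = stax.serial(*layers)
    return kernel_fn


key = random.PRNGKey(0)
x = random.normal(key, (64, 32))      # 64 toy inputs of dimension 32

kernel_fn = fcn_kernel()
ntk = kernel_fn(x, x, 'ntk')          # 64 x 64 infinite-width NTK matrix

# The separation between the largest eigenvalue and the bulk of the spectrum
# is the kind of quantity the analysis ties to trainability and generalization.
eigvals = jnp.linalg.eigvalsh(ntk)
print('largest eigenvalue  :', eigvals[-1])
print('mean bulk eigenvalue:', eigvals[:-1].mean())
```

Sweeping W_std and b_std (and the depth) in this sketch traces out how the conditioning of the NTK changes across hyperparameter space, which is the lens through which the paper separates regimes that train and generalize from those that only memorize.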