Acceleration through spectral density estimation

Part of Proceedings of the International Conference on Machine Learning (ICML 2020)




Authors

Fabian Pedregosa, Damien Scieur

Abstract

We develop a framework for designing optimization methods that are optimal in terms of their average-case runtime. This yields a new class of methods that achieve acceleration through a model of the expected spectral density of the Hessian. We derive explicit algorithms for the uniform, Marchenko-Pastur and exponential distributions. These methods are momentum-based gradient algorithms whose hyper-parameters can be estimated cheaply using only the norm and the trace of the Hessian, in stark contrast with classical accelerated methods such as Nesterov acceleration and Polyak momentum, which require knowledge of the Hessian's largest and smallest singular values. Empirical results on quadratic problems, logistic regression and neural networks show that the proposed methods always match, and in many cases significantly improve upon, classical accelerated methods.
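The abstract notes that the hyper-parameters need only the norm and the trace of the Hessian, both of which can be estimated from Hessian-vector products alone. As a minimal illustration (not the paper's implementation, and using a hypothetical random quadratic as the test problem), the trace can be estimated with Hutchinson's randomized estimator and the spectral norm with power iteration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic objective f(x) = 0.5 * x^T H x with a random PSD Hessian.
d = 50
A = rng.standard_normal((d, d))
H = A @ A.T / d  # symmetric positive semi-definite

def hessian_trace_hutchinson(hvp, dim, n_samples=100):
    """Estimate tr(H) via Hutchinson's estimator: E[z^T H z] = tr(H)
    for random z with E[z z^T] = I (here Rademacher vectors)."""
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        total += z @ hvp(z)
    return total / n_samples

def hessian_norm_power_iteration(hvp, dim, n_iters=100):
    """Estimate ||H||_2 (largest singular value) by power iteration,
    using only Hessian-vector products."""
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        w = hvp(v)
        v = w / np.linalg.norm(w)
    return v @ hvp(v)  # Rayleigh quotient at the final iterate

hvp = lambda v: H @ v  # Hessian-vector product; in practice computed matrix-free

tr_est = hessian_trace_hutchinson(hvp, d)
norm_est = hessian_norm_power_iteration(hvp, d)
print(tr_est, np.trace(H))            # estimate vs. exact trace
print(norm_est, np.linalg.norm(H, 2))  # estimate vs. exact spectral norm
```

Both estimators touch the Hessian only through matrix-vector products, so they remain cheap for large models where forming (let alone eigendecomposing) the full Hessian is infeasible; this is what makes the quantities "cheap" relative to the extreme singular values required by Nesterov or Polyak tuning.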