Estimating the Error of Randomized Newton Methods: A Bootstrap Approach

Part of Proceedings of the International Conference on Machine Learning (ICML 2020) pre-proceedings

Authors

Miles Lopes, Jessie X.T. Chen

Abstract

Randomized Newton methods have recently become the focus of intense research activity in large-scale and distributed optimization. Generally, these methods are based on a "computation-accuracy trade-off", which allows the user to gain scalability in exchange for error in the solution. However, the user does not know how much error is created by the randomization, which can be detrimental in two ways: On one hand, the user may try to manage the unknown error with theoretical worst-case error bounds, but this approach is impractical when the bounds involve unknown constants, and it typically leads to excessive computation. On the other hand, the user may select tuning parameters or stopping criteria in a heuristic manner, but this is generally unreliable. Motivated by these difficulties, we develop a bootstrap method for directly estimating the unknown error, which avoids excessive computation and offers greater reliability. Also, we provide non-asymptotic theoretical guarantees to show that the error estimates are valid for several error metrics and algorithms (including GIANT and Newton Sketch). Lastly, we show that the proposed method adds relatively little cost to existing randomized Newton methods, and that it performs well in a range of experimental conditions.
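
To give a rough sense of the bootstrap idea (this is a minimal illustrative sketch, not the authors' exact procedure), the toy example below applies it to a single sketched least-squares subproblem of the kind that arises in one randomized Newton step: the sketched rows are resampled with replacement, and the spread of the resampled solutions around the sketched solution is used to estimate the unknown error. The problem sizes, the plain row-subsampling sketch, the sup-norm error metric, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem (stand-in for one Newton-type subproblem).
n, d, m = 10_000, 20, 500          # data size, dimension, sketch size
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Sketched solution: subsample m rows uniformly with replacement (row sketch).
idx = rng.integers(0, n, size=m)
A_s, b_s = A[idx], b[idx]
x_hat, *_ = np.linalg.lstsq(A_s, b_s, rcond=None)

# Bootstrap: resample the *sketched* rows to mimic the sketching randomness,
# then use the fluctuation of the replicates around x_hat as an error estimate.
B, alpha = 200, 0.05
errs = np.empty(B)
for t in range(B):
    boot = rng.integers(0, m, size=m)
    x_star, *_ = np.linalg.lstsq(A_s[boot], b_s[boot], rcond=None)
    errs[t] = np.max(np.abs(x_star - x_hat))   # sup-norm error metric

eps_hat = np.quantile(errs, 1 - alpha)          # estimated (1 - alpha) error bound

# Exact solution, computed here only to check the estimate on this toy problem.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"estimated error bound: {eps_hat:.4f}")
print(f"actual sup-norm error: {np.max(np.abs(x_hat - x_full)):.4f}")
```

Note that the bootstrap loop touches only the m sketched rows, never the full data, which is why this kind of estimate adds relatively little cost on top of the randomized method itself; the paper's guarantees cover several error metrics and algorithms (including GIANT and Newton Sketch), whereas this sketch resamples only a plain row sketch.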