Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance

Part of the pre-proceedings of the International Conference on Machine Learning (ICML 2020)




Authors

Blair Bilodeau, Dylan Foster, Daniel Roy

Abstract

We study the classical problem of forecasting under logarithmic loss while competing against an arbitrary class of experts. We present a novel approach to bounding the minimax regret that exploits the self-concordance property of the logarithmic loss. Our regret bound depends on the metric entropy of the expert class and matches the previous best known results for arbitrary expert classes. We improve the dependence on the time horizon for classes whose metric entropy under the supremum norm is of order $\Omega(\gamma^{-p})$ with $p>1$, which includes, for example, Lipschitz functions in dimension greater than 1.
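
For orientation, the quantities discussed in the abstract can be sketched as follows. This is an illustrative formalization only: the symbols $\hat{p}_t$, $\mathcal{F}$, and $N(\mathcal{F},\gamma,\|\cdot\|_\infty)$ are assumed notation, and the paper's precise definitions may differ.

% Regret of a forecaster issuing predictions \hat{p}_t against the expert class \mathcal{F},
% where each expert f maps a context x_t to a conditional distribution over outcomes:
\[
  \mathrm{Reg}_T \;=\; \sum_{t=1}^{T} -\log \hat{p}_t(y_t) \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{T} -\log f(y_t \mid x_t).
\]
% Minimax regret: \hat{p} ranges over forecasting strategies, and the supremum is over
% worst-case context/outcome sequences:
\[
  \mathcal{R}_T(\mathcal{F}) \;=\; \inf_{\hat{p}} \; \sup_{x_{1:T},\, y_{1:T}} \; \mathrm{Reg}_T.
\]
% Metric entropy condition referenced in the abstract: the log covering number of \mathcal{F}
% at scale \gamma under the supremum norm grows as \gamma^{-p} with p > 1:
\[
  \log N(\mathcal{F}, \gamma, \|\cdot\|_\infty) \;=\; \Omega(\gamma^{-p}), \qquad p > 1.
\]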