Accountable Off-Policy Evaluation via Kernelized Bellman Statistics

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020) pre-proceedings


Authors

Yihao Feng, Tongzheng Ren, Ziyang Tang, Qiang Liu

Abstract

Off-policy evaluation plays an important role in modern reinforcement learning. However, most existing off-policy evaluation methods focus only on point estimation of the value, without providing an accountable confidence interval that reflects the uncertainty caused by limited observed data and algorithmic errors. Recently, Feng et al. (2019) proposed a novel kernel loss for learning value functions, which can also be used to test whether a learned value function satisfies the Bellman equation. In this work, we investigate the statistical properties of this kernel loss, which allow us to find a feasible set that contains the true value function with high probability. We further utilize this set to construct an accountable confidence interval for off-policy value estimation, as well as a post-hoc diagnosis for existing estimators. Empirical results show that our method yields tight yet accountable confidence intervals across different settings, demonstrating its effectiveness.
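To make the kernel Bellman statistic mentioned in the abstract concrete, the sketch below shows a U-statistic estimate of a kernel Bellman loss for a candidate value function, in the spirit of Feng et al. (2019). The RBF kernel over states, the bandwidth, and the `value_fn` interface are illustrative assumptions of this sketch, not details fixed by the abstract (the paper kernelizes over state-action pairs and derives concentration bounds to turn this statistic into a confidence interval).

```python
import numpy as np

def kernel_bellman_statistic(states, rewards, next_states, value_fn,
                             gamma=0.99, bandwidth=1.0):
    """U-statistic estimate of a kernel Bellman loss for a candidate
    value function (illustrative sketch, not the paper's exact estimator)."""
    # Bellman residuals: R_i = r_i + gamma * V(s'_i) - V(s_i)
    residuals = rewards + gamma * value_fn(next_states) - value_fn(states)

    # RBF kernel over states (assumed choice; the paper uses state-action pairs).
    diffs = states[:, None, :] - states[None, :, :]
    K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2.0 * bandwidth ** 2))

    # Average residual_i * K(x_i, x_j) * residual_j over pairs with i != j.
    n = len(rewards)
    outer = residuals[:, None] * K * residuals[None, :]
    return (outer.sum() - np.trace(outer)) / (n * (n - 1))
```

A value function that satisfies the Bellman equation drives this statistic toward zero, so thresholding it (with a high-probability bound on its deviation) yields the feasible set of value functions described in the abstract.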