Statistical Bias in Dataset Replication

Part of Proceedings of the International Conference on Machine Learning pre-proceedings (ICML 2020)



Authors

Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry

Abstract

Dataset replication is a useful tool for assessing whether models have overfit to a specific validation set or the exact circumstances under which it was generated. In this paper, we highlight the importance of statistical modeling in dataset replication: we present unintuitive yet pervasive ways in which statistical bias, when left unmitigated, can skew results. Specifically, we examine ImageNet-v2, a replication of the ImageNet dataset that induces a significant drop in model accuracy, presumed to be caused by a benign distribution shift between the datasets. We show, however, that by identifying and accounting for the aforementioned bias, we can explain the vast majority of this accuracy drop. We conclude with concrete recommendations for recognizing and avoiding bias in dataset replication.
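As an illustration of the kind of statistical bias at issue, consider selecting items for a replicated dataset by thresholding a noisy estimate of some per-item statistic (e.g., how often annotators agree an image is correctly labeled). The sketch below is a hypothetical simulation, not the paper's actual pipeline or data; all quantities (`true_freq`, `n_annotators`, `threshold`) are invented for illustration. It shows that conditioning on a noisy estimate exceeding a threshold inflates the observed statistic relative to its true value, so the selected items are systematically harder than they appear:

```python
import random

random.seed(0)

# Hypothetical setup: each image has a true "selection frequency" --
# the probability a random annotator marks it as correctly labeled.
n_images = 20000
true_freq = [random.uniform(0.0, 1.0) for _ in range(n_images)]

# A replication pipeline observes only a noisy estimate: agreements
# among a small pool of annotators (a binomial sample).
n_annotators = 10
observed_freq = [
    sum(random.random() < p for _ in range(n_annotators)) / n_annotators
    for p in true_freq
]

# Select images whose *observed* frequency clears a threshold.
threshold = 0.7
selected = [i for i in range(n_images) if observed_freq[i] >= threshold]

mean_observed = sum(observed_freq[i] for i in selected) / len(selected)
mean_true = sum(true_freq[i] for i in selected) / len(selected)

# The observed mean exceeds the true mean: thresholding a noisy statistic
# preferentially admits items whose noise pushed them over the bar.
print(f"mean observed frequency of selected images: {mean_observed:.3f}")
print(f"mean true frequency of selected images:     {mean_true:.3f}")
```

In this toy model, the gap between the observed and true means is exactly the sort of unmitigated statistical bias the paper argues can masquerade as a distribution shift.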