The Effect of Natural Distribution Shift on Question Answering Models

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020), pre-proceedings




John Miller, Karl Krauth, Benjamin Recht, Ludwig Schmidt


We build four new test sets for the Stanford Question Answering Dataset (SQuAD) and evaluate the ability of question-answering systems to generalize to new data. In the original Wikipedia domain, we find no evidence of adaptive overfitting despite several years of test-set reuse. On datasets derived from New York Times articles, Reddit posts, and Amazon product reviews, we observe average performance drops of 3.0, 12.6, and 14.0 F1, respectively, across a broad range of models. In contrast, a strong human baseline matches or exceeds the performance of SQuAD models on the original domain and exhibits little to no drop in new domains. Taken together, our results confirm the surprising resilience of the holdout method and emphasize the need to move towards evaluation metrics that incorporate robustness to natural distribution shifts.
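For readers unfamiliar with the F1 numbers quoted in the abstract, the following is a minimal sketch of the token-overlap F1 metric used in SQuAD-style evaluation. It omits the official evaluation script's answer normalization (e.g. lowercasing aside, article and punctuation stripping) and the per-question max over multiple gold answers:

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Harmonic mean of precision and recall over whitespace tokens
    shared between the predicted and gold answer spans."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection: each shared token counts at most
    # min(occurrences in prediction, occurrences in gold) times.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# e.g. token_f1("the cat sat", "the cat") -> 0.8
```

A reported drop of, say, 14.0 F1 means this score, averaged over the test set (and scaled to 0-100), fell by 14 points when the same model was evaluated on the new domain.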