Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020) pre-proceedings




Authors

Yang Liu, Hongyi Guo

Abstract

Learning with noisy labels is a common problem in supervised learning. Existing approaches require practitioners to specify noise rates, i.e., a set of parameters controlling the severity of label noise in the problem. These specifications are either assumed to be given or estimated using additional methods. In this work, we introduce a new family of loss functions, which we name peer loss functions, that enables learning from noisy labels without requiring a priori specification of the noise rates. Our approach uses a standard empirical risk minimization (ERM) framework with peer loss functions. Peer loss functions associate each training sample with a certain form of "peer" samples, which evaluate a classifier's predictions jointly.

We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near-optimal classifier, as if ERM had been performed on the clean training data, which we do not have access to. We pair our results with an extensive set of experiments in which we compare against state-of-the-art techniques for learning with noisy labels. Our results show that the peer loss based method consistently outperforms the baseline benchmarks, as well as some recent new results. Peer loss provides a way to simplify model development when facing potentially noisy training labels, and can be promoted as a robust candidate loss function in such situations.
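
As a rough illustration of the idea described in the abstract, the sketch below shows one plausible peer-loss-style objective built on cross-entropy: each sample's loss is offset by the loss of a randomly constructed "peer" term whose prediction and label are drawn independently from the same mini-batch. This is a minimal sketch, not the authors' reference implementation; the function name peer_cross_entropy and the single weighting hyperparameter alpha are illustrative assumptions, and the exact construction and guarantees are developed in the paper.

import torch
import torch.nn.functional as F


def peer_cross_entropy(logits: torch.Tensor,
                       labels: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """Sketch of a peer loss: cross-entropy on each (x_i, y_i) minus the
    cross-entropy of a randomly paired "peer" term (illustrative only)."""
    n = labels.size(0)

    # Loss on the original (possibly noisily labeled) samples.
    base = F.cross_entropy(logits, labels, reduction="none")

    # Independently shuffle predictions and labels to form peer pairs.
    pred_idx = torch.randperm(n, device=logits.device)
    label_idx = torch.randperm(n, device=logits.device)
    peer = F.cross_entropy(logits[pred_idx], labels[label_idx], reduction="none")

    # Subtracting the peer term penalizes fitting random prediction-label pairings.
    return (base - alpha * peer).mean()

In training, such a function would simply replace the usual cross-entropy call inside a standard ERM loop; the paper analyzes the conditions under which minimizing an objective of this kind on noisy labels yields the optimal or a near-optimal classifier for the clean distribution.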