Distance Metric Learning with Joint Representation Diversification

Part of the Proceedings of the 37th International Conference on Machine Learning (ICML 2020)


Authors

Xu Chu, Yang Lin, Yasha Wang, Xiting Wang, Hailong Yu, Xin Gao, Qi Tong

Abstract

Distance metric learning (DML) aims to learn a representation space equipped with a metric such that, with respect to that metric, examples from the same class are closer than examples from different classes. The recent success of deep neural networks has motivated many DML losses that encourage intra-class compactness and inter-class separability. However, overemphasizing intra-class compactness may cause the neural network to filter out information that helps discriminate examples from unseen classes, resulting in a less generalizable representation. In contrast, we propose not to penalize intra-class distances explicitly, and instead use a Joint Representation Similarity (JRS) regularizer that penalizes inter-class distributional similarities within a DML framework. The proposed JRS regularizer diversifies the joint distributions of representations from different classes across multiple neural layers, based on cross-covariance operators in a Reproducing Kernel Hilbert Space (RKHS). Experiments on three well-known benchmark datasets (CUB-200-2011, Cars-196, and Stanford Online Products) demonstrate the effectiveness of the proposed approach.
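To make the RKHS-based similarity concrete, the sketch below computes an empirical HSIC-style statistic (a standard estimator built from cross-covariance operators in RKHS) between two sets of representations using Gaussian kernels. This is a minimal illustration of the general technique the abstract refers to, not the paper's exact JRS formulation; the function names, the kernel choice, and the bandwidth are all assumptions made for the example. A regularizer in this spirit would penalize such a similarity computed between representations drawn from different classes.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) Gram matrix from pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic_similarity(F1, F2, sigma=1.0):
    # Empirical HSIC between two representation sets F1, F2 (n x d arrays
    # with matched rows): trace(K1 H K2 H) / (n - 1)^2, where H is the
    # centering matrix. Larger values mean more similar distributions
    # in the chosen RKHS; illustrative bandwidth, not the paper's setting.
    n = F1.shape[0]
    K1 = rbf_kernel(F1, sigma)
    K2 = rbf_kernel(F2, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K1 @ H @ K2 @ H) / (n - 1) ** 2
```

The statistic is symmetric in its two arguments and nonnegative for positive-semidefinite kernels, which makes it usable as a penalty term to be minimized between inter-class representation pairs.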