InfoGAN-CR: Disentangling Generative Adversarial Networks with Contrastive Regularizers

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020) pre-proceedings


Zinan Lin, Kiran Thekumparampil, Giulia Fanti, Sewoong Oh


<p>Standard deep generative models have latent codes that can be arbitrarily rotated, so no individual coordinate carries meaning. For manipulating and exploring samples, we seek a disentangled latent code in which each coordinate is associated with a distinct property of the target distribution. Recent advances have been dominated by Variational Autoencoder (VAE)-based methods, while training disentangled generative adversarial networks (GANs) remains challenging. To this end, we make two contributions: a novel approach for training disentangled GANs and a novel approach for selecting the best disentangled model. First, we propose a regularizer that achieves higher disentanglement scores than state-of-the-art VAE- and GAN-based approaches. This contrastive regularizer is inspired by a natural notion of disentanglement: latent traversal, i.e., generating images by varying one latent code while fixing the rest. We turn this intuition into a regularizer by adding a discriminator that detects how the latent codes are coupled together in paired examples. Second, a major weakness of all disentanglement benchmarks is that reported scores are based on hyperparameters tuned against predefined disentangled representations on synthetic datasets. This is neither fair nor realistic: one can arbitrarily improve performance with more hyperparameter tuning, and real datasets do not come with such supervision. We therefore propose an unsupervised model selection scheme based on medoids. Numerical experiments confirm that models selected this way improve upon state-of-the-art models selected with supervised hyperparameter tuning.</p>
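The pairing scheme behind the contrastive regularizer can be sketched as follows: sample two latent codes that agree on exactly one coordinate, so that the generated image pair forms a latent traversal along that coordinate, and ask a discriminator to recover which coordinate was shared. This is a minimal NumPy sketch of the pair sampling only; the function name `sample_cr_pair` and the uniform latent prior are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def sample_cr_pair(c_dim, rng):
    """Sample a pair of latent codes coupled on exactly one coordinate.

    Images generated from such a pair differ along every latent
    coordinate except the shared one `i`; a separate discriminator
    (not shown) is trained to predict `i` from the image pair.
    """
    i = rng.integers(c_dim)              # index of the shared coordinate
    c1 = rng.uniform(-1.0, 1.0, c_dim)   # assumed uniform latent prior
    c2 = rng.uniform(-1.0, 1.0, c_dim)
    c2[i] = c1[i]                        # couple the pair on coordinate i
    return c1, c2, i

rng = np.random.default_rng(0)
c1, c2, i = sample_cr_pair(5, rng)
```

The shared-coordinate index `i` serves as the classification label for the contrastive discriminator, so no ground-truth factor annotations are needed.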
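The medoid-based selection idea can be illustrated generically: given a pairwise similarity matrix over candidate trained models, pick the model with the highest average similarity to all others. How the pairwise scores are computed is not specified in the abstract; this is a plain medoid computation under that assumption, with a hypothetical function name.

```python
import numpy as np

def select_medoid(similarity):
    """Return the index of the medoid model.

    `similarity` is a symmetric (n_models, n_models) matrix of
    pairwise agreement scores between candidate models; the medoid
    is the model most similar, on average, to all the others.
    """
    S = np.asarray(similarity, dtype=float).copy()
    np.fill_diagonal(S, 0.0)                  # ignore self-similarity
    avg = S.sum(axis=1) / (S.shape[0] - 1)    # mean similarity to others
    return int(np.argmax(avg))

# Example: model 1 agrees most with both of the other models.
S = [[1.0, 0.9, 0.2],
     [0.9, 1.0, 0.8],
     [0.2, 0.8, 1.0]]
best = select_medoid(S)  # → 1
```

The intuition is that a well-disentangled model is reproducible: independently trained good models tend to agree with each other, so centrality in similarity space can substitute for supervised hyperparameter tuning.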