Multilinear Latent Conditioning for Generating Unseen Attribute Combinations

Part of the Proceedings of the International Conference on Machine Learning pre-proceedings (ICML 2020)


Authors

Markos Georgopoulos, Grigorios Chrysos, Maja Pantic, Yannis Panagakis

Abstract

Empirical studies have shown that deep generative models exhibit inductive bias. Although this bias is crucial in problems with high-dimensional data, such as images, generative models lack the generalization ability that arises naturally in human perception. For example, humans can visualize a woman smiling after seeing only a smiling man. By contrast, the standard conditional variational auto-encoder (cVAE) is unable to generate unseen attribute combinations. To this end, we extend the cVAE by introducing a multilinear latent conditioning framework. We implement two variants of our model and demonstrate their efficacy on MNIST, Fashion-MNIST and CelebA. Altogether, we design a novel conditioning framework that can be used with any architecture to synthesize unseen attribute combinations.
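For context, the standard cVAE conditioning that the abstract says fails on unseen attribute combinations is typically implemented by concatenating one-hot attribute codes to the latent code before decoding. The sketch below illustrates only that baseline mechanism (all names are hypothetical); the paper's multilinear conditioning replaces this concatenation, and its details are not given on this page.

```python
import numpy as np

def one_hot(index, num_classes):
    """Encode a single attribute value as a one-hot vector."""
    code = np.zeros(num_classes)
    code[index] = 1.0
    return code

def condition_latent(z, attrs, classes_per_attr):
    """Baseline cVAE-style conditioning: append one-hot attribute
    codes to the latent sample z to form the decoder input."""
    codes = [one_hot(a, n) for a, n in zip(attrs, classes_per_attr)]
    return np.concatenate([z] + codes)

# Example: a 16-dim latent code conditioned on two binary attributes
# (e.g. gender and smile), giving a 16 + 2 + 2 = 20-dim decoder input.
z = np.random.randn(16)
decoder_input = condition_latent(z, attrs=[1, 0], classes_per_attr=[2, 2])
```

Because the attribute codes enter only through concatenation, the decoder never observes (and therefore cannot reliably synthesize) combinations absent from the training set, which is the limitation the multilinear framework targets.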