Meta-Learning with Shared Amortized Variational Inference

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020)

Authors

Ekaterina Iakovleva, Jakob Verbeek, Karteek Alahari

Abstract

In the context of an empirical Bayes model for meta-learning, where a subset of model parameters is treated as latent variables, we propose a novel scheme for amortized variational inference. This approach is based on the conditional variational autoencoder framework, which allows us to learn the conditional prior distribution over model parameters given limited training data. In our model, we share the same amortized inference network between the prior and posterior distributions over the model parameters. While the posterior inference leverages both the training and test data, including the labels, the prior inference is based on the training data only. We show that in earlier approaches based on Monte Carlo approximation the prior collapses to a Dirac delta function. In contrast, our variational approach prevents this collapse and preserves uncertainty over the model parameters. We evaluate our approach on standard benchmark datasets, including miniImageNet, and obtain results demonstrating the advantage of our approach over previous work.
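To illustrate the idea of sharing one amortized inference network between the prior (conditioned on the training/support set only) and the posterior (conditioned on support and query data, including labels), here is a minimal PyTorch-style sketch. This is not the authors' implementation: the class name, the diagonal Gaussian parameterization, and the mean-pooling set aggregation are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SharedAmortizedInference(nn.Module):
    """Sketch: one network parameterizes both the conditional prior
    (from the support set only) and the posterior (from support + query,
    including labels) as diagonal Gaussians over task-specific parameters."""

    def __init__(self, feat_dim, label_dim, hidden_dim, param_dim):
        super().__init__()
        # Shared encoder maps pooled (feature, label) sets to Gaussian parameters.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim + label_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2 * param_dim),  # mean and log-variance
        )

    def _distribution(self, feats, labels):
        # Permutation-invariant aggregation over the set (mean pooling).
        pooled = torch.cat([feats, labels], dim=-1).mean(dim=0)
        mean, log_var = self.encoder(pooled).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, torch.exp(0.5 * log_var))

    def forward(self, support_x, support_y, query_x=None, query_y=None):
        # Prior: conditioned on the support (training) data only.
        prior = self._distribution(support_x, support_y)
        if query_x is None:
            return prior, None
        # Posterior: conditioned on the union of support and query data with labels.
        all_x = torch.cat([support_x, query_x], dim=0)
        all_y = torch.cat([support_y, query_y], dim=0)
        posterior = self._distribution(all_x, all_y)
        return prior, posterior
```

In a training loop of this kind, one would sample task parameters from the posterior, score the query labels under the resulting predictive model, and regularize with `torch.distributions.kl_divergence(posterior, prior)`; at test time only the prior (support data only) is available for sampling.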