Hallucinative Topological Memory for Zero-Shot Visual Planning

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020) pre-proceedings



Authors

Kara Liu, Thanard Kurutach, Christine Tung, Pieter Abbeel, Aviv Tamar

Abstract

<p>In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction. Bearing similarity to batch reinforcement learning (RL), VP algorithms essentially combine data-driven perception and planning. Most previous works approached VP by planning in a learned latent space, resulting in low-quality visual plans and difficult training algorithms. Here, instead, we propose a simple VP method that plans directly in image space and displays competitive performance. We build on the semi-parametric topological memory (SPTM) method: image samples are treated as nodes in a graph, the graph connectivity is learned from image sequence data, and planning can be performed using conventional graph search methods. We make two modifications to SPTM to make it suitable for VP. First, we propose an energy-based graph connectivity function that admits stable training using contrastive predictive coding. Second, to allow zero-shot planning in new domains, we learn a conditional VAE model that generates images given a context of the domain, and use these hallucinated samples for building the connectivity graph and planning. We show that this simple approach is competitive with state-of-the-art VP methods, in terms of both image fidelity and success rate when the plan guides a trajectory-following controller.</p>
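The SPTM-style pipeline the abstract builds on (treat samples as graph nodes, score pairwise connectivity with a learned function, then plan via graph search) can be sketched as follows. This is a minimal illustration, not the authors' code: `connectivity_score` here is a toy stand-in for the paper's learned energy-based connectivity function (trained with contrastive predictive coding), and all names and thresholds are hypothetical.

```python
from collections import deque

def connectivity_score(emb_i, emb_j):
    # Toy proxy for the learned energy-based connectivity function:
    # higher score = more likely to be reachable in a few steps.
    # Here embeddings are scalars and the score is negative distance.
    return -abs(emb_i - emb_j)

def build_graph(embeddings, threshold):
    """Connect node pairs whose connectivity score exceeds `threshold`."""
    n = len(embeddings)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(n):
            if i != j and connectivity_score(embeddings[i], embeddings[j]) > threshold:
                graph[i].append(j)
    return graph

def plan(graph, start, goal):
    """Breadth-first search for a shortest node sequence from start to goal."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None  # goal unreachable under the current connectivity threshold

# Example: four samples along a line; only adjacent samples are connected.
embeddings = [0.0, 1.0, 2.0, 3.0]
graph = build_graph(embeddings, threshold=-1.5)
path = plan(graph, 0, 3)  # visits every intermediate node: [0, 1, 2, 3]
```

In the paper, the resulting node sequence is a visual plan in image space; a trajectory-following controller then tracks it. For zero-shot transfer, the nodes would come from CVAE samples conditioned on the new domain's context rather than from real images of that domain.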