Learning Reasoning Strategies in End-to-End Differentiable Proving

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020) pre-proceedings

Authors

Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel

Abstract

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems like Neural Theorem Provers (NTPs). These neuro-symbolic reasoning models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests the systematic generalisation of neural models. CTPs generalise better than a wide range of baselines when evaluated on graphs larger than those seen during training, while producing competitive results on standard link prediction benchmarks.
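To make the core idea concrete, below is a minimal sketch in PyTorch of goal-conditioned rule selection as the abstract describes it: rather than unifying a goal against every rule in a fixed rule set (the source of NTPs' complexity), a learned module produces the rule to apply as a function of the goal itself. The class name `GoalConditionedRuleSelector` and the use of plain linear maps are illustrative assumptions for this sketch, not the paper's exact parameterisation.

```python
# Minimal sketch, assuming a PyTorch setting. The name
# GoalConditionedRuleSelector and the linear parameterisation are
# hypothetical; the point is that rule selection is a differentiable
# function of the goal, trained end-to-end with the prover.

import torch
import torch.nn as nn


class GoalConditionedRuleSelector(nn.Module):
    """Maps a goal embedding to the predicate embeddings of a rule of the
    form  p(X, Z) :- q(X, Y), r(Y, Z)  (one head atom, two body atoms)."""

    def __init__(self, embedding_dim: int):
        super().__init__()
        # One map per generated predicate embedding; all are trained
        # jointly via back-propagation through the downstream prover.
        self.head = nn.Linear(embedding_dim, embedding_dim)
        self.body1 = nn.Linear(embedding_dim, embedding_dim)
        self.body2 = nn.Linear(embedding_dim, embedding_dim)

    def forward(self, goal: torch.Tensor):
        # goal: [batch, embedding_dim] embedding of the goal predicate.
        # Returns head/body predicate embeddings of the rule selected
        # for this goal -- a function of the goal rather than a fixed,
        # exhaustively enumerated rule set.
        return self.head(goal), self.body1(goal), self.body2(goal)


if __name__ == "__main__":
    selector = GoalConditionedRuleSelector(embedding_dim=32)
    goal = torch.randn(4, 32)  # a batch of four goal embeddings
    head, body1, body2 = selector(goal)
    print(head.shape, body1.shape, body2.shape)  # each: torch.Size([4, 32])
```

Because the selector emits only the rules relevant to the current goal, proof search no longer branches over the full rule set, which is what makes the approach scalable relative to standard NTPs.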