Complex Query Answering with Neural Link Predictors, and Neuro-Symbolic Reasoning

Pasquale Minervini (UCL)
Neural link predictors are immensely useful for identifying missing edges in large-scale Knowledge Graphs. However, it is still not clear how to use these models for answering more complex queries that arise in a number of domains, such as queries involving logical conjunctions, disjunctions, and existential quantifiers, while accounting for missing edges. In this work, we propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs. We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural link predictor; we then analyse two solutions to the optimisation problem, namely gradient-based and combinatorial search. The proposed approach produces more accurate results than state-of-the-art methods (black-box neural models trained on millions of generated queries) without the need for training on a large and diverse set of complex queries. Using orders of magnitude less training data, we obtain relative improvements ranging from 8% up to 40% in Hits@3 across different Knowledge Graphs containing factual information. Finally, we demonstrate that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms. This work was presented at ICLR 2021, where it received an Outstanding Paper Award.

We will then discuss how this framework can be extended to develop end-to-end differentiable reasoning systems that learn symbolic rules via back-propagation, use them for tasks requiring deductive reasoning, and use the resulting proof paths to produce explanations for their users.
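As a rough illustration of the idea, and not the implementation used in the paper, the Python sketch below answers a two-hop query of the form "find Y such that there exists an X with r1(anchor, X) and r2(X, Y)" via beam search over intermediate entities, combining atom scores from a toy ComplEx-style scorer with a product t-norm. The random embeddings, the sigmoid calibration of scores, the beam size, and the function names (score_atom, answer_path_query) are illustrative assumptions rather than details from the work.

```python
import torch

# Toy stand-in for a pre-trained neural link predictor (ComplEx-style),
# using random embeddings purely for illustration.
torch.manual_seed(0)
num_entities, num_relations, rank = 100, 10, 32

ent_re = torch.randn(num_entities, rank)  # real parts of entity embeddings
ent_im = torch.randn(num_entities, rank)  # imaginary parts of entity embeddings
rel_re = torch.randn(num_relations, rank)
rel_im = torch.randn(num_relations, rank)

def score_atom(h, r):
    """Score the atom r(h, t) for every candidate tail t; returns a [num_entities] tensor."""
    hr_re = ent_re[h] * rel_re[r] - ent_im[h] * rel_im[r]
    hr_im = ent_re[h] * rel_im[r] + ent_im[h] * rel_re[r]
    return hr_re @ ent_re.t() + hr_im @ ent_im.t()

def answer_path_query(anchor, r1, r2, beam_size=5):
    """Answer the query  ?Y : exists X . r1(anchor, X) AND r2(X, Y)
    by beam search, combining atom truth values with a product t-norm."""
    # Truth values for the first atom, squashed into [0, 1].
    first = torch.sigmoid(score_atom(anchor, r1))
    top_vals, top_x = first.topk(beam_size)  # keep the most promising intermediate X's

    # For each candidate X, score the second atom and combine via the product t-norm;
    # keep the best combined score per answer, maximising over the existential variable X.
    best = torch.zeros(num_entities)
    for val_x, x in zip(top_vals, top_x):
        second = torch.sigmoid(score_atom(x.item(), r2))
        best = torch.maximum(best, val_x * second)

    return best.topk(5)  # top answer entities and their scores

print(answer_path_query(anchor=3, r1=1, r2=4))
```

In the gradient-based alternative analysed in the work, the embeddings of the query variables would instead be optimised directly via the differentiable objective, rather than searched over a discrete beam.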
