Research aims to enhance large-scale probabilistic databases

A department team has begun work on a project that aims to enhance large-scale probabilistic databases, and so unlock their full data-modelling potential, by incorporating more realistic data models while preserving their computational properties.
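
To make the underlying model concrete: in the most common formalism, a tuple-independent probabilistic database, every stored fact carries an independent marginal probability, and answering a query means computing the probability that the query holds across all possible worlds. The Python sketch below illustrates this with made-up facts and probabilities (chosen purely for illustration, not taken from the project), evaluating one query by brute-force enumeration:

    from itertools import product

    # Hypothetical facts with independent marginal probabilities
    # (illustrative data only; not from the project).
    born_in = {("einstein", "ulm"): 0.9, ("einstein", "berlin"): 0.1}
    located_in = {("ulm", "germany"): 0.8, ("berlin", "germany"): 0.95}

    def prob_query(person, country):
        """P(exists city: born_in(person, city) AND located_in(city, country)),
        computed naively by summing over all 2^n possible worlds."""
        facts = list(born_in.items()) + list(located_in.items())
        total = 0.0
        for world in product([False, True], repeat=len(facts)):
            weight = 1.0
            present = set()
            for (fact, p), included in zip(facts, world):
                weight *= p if included else 1.0 - p
                if included:
                    present.add(fact)
            # The query holds in this world if some city links person to country.
            if any((person, city) in present and (city, country) in present
                   for _, city in born_in):
                total += weight
        return total

    print(prob_query("einstein", "germany"))  # ~0.7466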

Systems that crawl the web, discover new sources, and add facts to their databases have a huge range of potential uses. However, a lack of common-sense knowledge about the data they store currently limits that potential in practice. Oxford researchers are working to overcome these constraints.

As part of this EPSRC-funded research, the team (Professor Thomas Lukasiewicz as principal investigator, with İsmail İlkan Ceylan and Professors Georg Gottlob and Dan Olteanu as co-investigators) plans to develop different semantics for the resulting probabilistic databases and to analyse their computational properties and sources of intractability.

Over the three-and-a-half years of the project, the team also plans to design practical, scalable query-answering algorithms for these databases, especially algorithms based on knowledge compilation techniques. They will extend existing knowledge compilation approaches and develop new ones based on tensor factorisation and neural-symbolic knowledge compilation.
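
Brute-force enumeration, as in the sketch above, is exponential in the number of facts; knowledge compilation avoids this by translating the query's lineage (the Boolean formula over facts under which the query holds) into a circuit whose structure makes probability computation tractable. A minimal sketch of this idea follows, again purely illustrative rather than the project's actual algorithms. It compiles the lineage of the earlier query into a small circuit of "and" nodes over disjoint facts and independent-or nodes:

    # Illustrative knowledge-compilation sketch (not the project's algorithms):
    # the lineage of the query above is (A AND C) OR (B AND D), where A..D are
    # the four facts. Because each AND combines disjoint facts and the OR
    # combines independent subformulas, the query probability is read off in
    # one linear-time bottom-up pass instead of a sum over 2^n worlds.

    def prob(node):
        kind = node[0]
        if kind == "var":                  # leaf: a fact's marginal probability
            return node[1]
        left, right = prob(node[1]), prob(node[2])
        if kind == "and":                  # children range over disjoint facts
            return left * right
        if kind == "ior":                  # independent-or: 1 - (1-x)(1-y)
            return 1.0 - (1.0 - left) * (1.0 - right)
        raise ValueError(f"unknown node kind: {kind}")

    circuit = ("ior",
               ("and", ("var", 0.9), ("var", 0.8)),    # born ulm & ulm in germany
               ("and", ("var", 0.1), ("var", 0.95)))   # born berlin & berlin in germany

    print(prob(circuit))  # 0.7466, matching the possible-worlds enumeration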

Once the algorithms are designed, the team plans to produce prototype implementations and evaluate them experimentally. These prototypes should help demonstrate the full potential of large-scale probabilistic knowledge.

Read more about the project at: https://www.cs.ox.ac.uk/innovation/research-impact/case-probabilistic-knowledge-base.html