A posteriori verification or a priori design? Navigating deep learning with logical requirements

Eleonora Giunchiglia (TU-Wien)

Abstract: 

Thanks to their outstanding ability to find hidden patterns in data, deep learning models have been extensively applied across many domains. However, recent research reveals a critical drawback: neural networks often fail to comply with requirements that express background knowledge about the problem at hand. This poses a major problem, since requirements compliance is typically deemed a necessary condition in safety-critical applications and, more generally, in standard software engineering. One possible way to face this challenge is the a posteriori verification/testing of deep learning models' behaviour.

In this talk, we show how it is possible to adopt a complementary approach in which the requirements are embedded in the topology of the neural networks themselves, thus making them compliant by-design with the specified requirements. Our method consists of compiling all the given requirements into a single layer that (i) can be added at training time to any neural network and (ii) allows gradients to seamlessly backpropagate through it. Thanks to these properties, we are also able to show that such networks can learn from the given background knowledge, leading to improved results compared to their standard counterparts. We begin with an exploration of simple hierarchical requirements (i.e., requirements of the form $A \to B$, stating that if class $A$ is predicted then class $B$ should be predicted too), continue with requirements expressed as CNF formulae, and conclude with a discussion of requirements expressed as linear inequalities.
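To give a flavour of the hierarchical case, the following is a minimal sketch (not the speaker's actual implementation) of a constraint layer for requirements of the form $A \to B$. One well-known differentiable way to guarantee such implications is to replace each class's output with the maximum over itself and the classes that imply it, so that thresholding the corrected outputs can never violate the hierarchy; the function name, the toy class names, and the plain-NumPy setting are illustrative assumptions, with a real layer written in a framework such as PyTorch.

```python
import numpy as np

def hierarchy_layer(probs, implied_by):
    """Enforce hierarchical requirements A -> B on per-class probabilities.

    probs: array of raw per-class probabilities in [0, 1].
    implied_by: dict mapping each class index B to the indices of the
    classes A whose prediction implies B (hypothetical interface).
    """
    out = probs.copy()
    for cls, ancestors_of in implied_by.items():
        # max is differentiable almost everywhere, so gradients can
        # still backpropagate through this correction at training time.
        if ancestors_of:
            out[cls] = max(probs[cls], max(probs[a] for a in ancestors_of))
    return out

# Toy hierarchy: class 0 ("animal") is implied by class 1 ("dog") and
# class 2 ("cat"). The raw output below violates dog -> animal.
raw = np.array([0.2, 0.9, 0.1])
fixed = hierarchy_layer(raw, {0: [1, 2]})
# fixed[0] is now 0.9, so thresholding at 0.5 satisfies dog -> animal.
```

Since the corrected probability of $B$ is never below that of any class implying it, any threshold applied uniformly to the outputs yields predictions that satisfy the requirements by construction.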

Speaker bio

Eleonora Giunchiglia is currently a post-doctoral researcher at the Institute of Logic and Computation at TU-Wien, having moved there from the University of Oxford after completing her PhD. Eleonora's research is centred on neural-symbolic AI, with a specific focus on how to create safer deep learning models that are compliant by-design with a set of user-defined requirements. She is also an Editorial Board Member of the Neuro-symbolic Artificial Intelligence Journal and a member of the AI Existential Safety Community. Finally, Eleonora has organised multiple workshops and events, including the ROAD-R 2023 challenge co-located with NeurIPS 2023.
