
Independence is not an issue in Neurosymbolic AI

Hakan Faronius (Örebro University)

The goal of neurosymbolic AI is to combine the expressive power of neural networks with the structured reasoning capabilities of logic. A popular approach to neurosymbolic AI is to take the output of the last layer of a neural network, e.g. a softmax activation, and pass it through a sparse computation graph encoding the logical constraints one wishes to enforce. First, we show how this flavor of neurosymbolic AI relates to supervised learning with disjunctive supervision. Second, we show how common assumptions made in neurosymbolic AI can alleviate some of the pitfalls encountered in disjunctive supervision. Finally, we contradict recent observations in the literature questioning the validity of neurosymbolic AI, showing that these mischaracterizations can be traced back to an improper application of neurosymbolic learning.
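
To make the connection to disjunctive supervision concrete, here is a minimal sketch (not the speaker's code, and the function name and shapes are illustrative assumptions): the softmax output of the network is passed through a constraint that says the true class lies in some allowed subset, and the loss is the negative log-probability that the constraint holds.

```python
import torch
import torch.nn.functional as F

def disjunctive_nll(logits: torch.Tensor, allowed: torch.Tensor) -> torch.Tensor:
    """Neurosymbolic-style loss for a disjunctive label.

    logits:  (batch, num_classes) raw outputs of the network's last layer.
    allowed: (batch, num_classes) 0/1 mask marking the classes that satisfy
             the logical constraint (the disjunction over possible labels).
    """
    probs = F.softmax(logits, dim=-1)              # last-layer softmax activation
    p_constraint = (probs * allowed).sum(dim=-1)   # probability the constraint is satisfied
    return -torch.log(p_constraint + 1e-12).mean() # negative log-likelihood of the constraint

# Hypothetical usage: 3 classes, supervision only says "the label is class 0 or class 2".
logits = torch.randn(4, 3, requires_grad=True)
allowed = torch.tensor([[1.0, 0.0, 1.0]] * 4)
loss = disjunctive_nll(logits, allowed)
loss.backward()
```

In this sketch the "sparse computation graph" degenerates to a single disjunction over output classes; richer logical constraints over several softmax heads would replace the masked sum with a larger arithmetic circuit, but the learning signal is of the same form.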