Linear and Non-linear Models of Natural Language Semantics
Meaning representation is at the heart of every natural language processing task. In this talk I will give an overview of several classes of models that aim to represent the meaning of sentences by combining the two prominent paradigms in natural language semantics: compositional semantics and distributional models of meaning. I focus on two broad categories. Tensor-based models come equipped with rigorous mathematical foundations, share certain properties with quantum mechanics, and provide a test-bed for studying compositional aspects of language at a level deeper than most practically-oriented approaches would allow. Non-linear models based on neural network architectures, on the other hand, are very powerful tools with state-of-the-art performance that have recently been attracting increasing attention. Using examples from personal work, I present an overview of these two approaches, discuss their pros and cons, and point to open issues and work in progress.
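To make the tensor-based idea concrete, the following is a minimal sketch (not from the talk itself) of how such models typically compose meanings: relational words such as adjectives act as linear maps (matrices) applied to the distributional vectors of the nouns they modify. The dimension, the random vectors, and the names `red` and `moon` are illustrative assumptions; in practice these representations are learned from corpus co-occurrence statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding dimension (assumed for illustration)

# Distributional vector for a noun (normally learned from co-occurrence data)
moon = rng.random(d)

# In tensor-based models an adjective is an order-2 tensor, i.e. a d x d matrix
red = rng.random((d, d))

# Composition is linear: matrix-vector multiplication yields the phrase vector
red_moon = red @ moon

# The composed phrase lives in the same space as the noun vectors
assert red_moon.shape == (d,)
```

A transitive verb would analogously be an order-3 tensor contracted with its subject and object vectors; the linearity of these maps is what the talk contrasts with non-linear neural architectures.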