
Evaluating Automatically Generated Fact Checking Explanations

Isabelle Augenstein (University of Copenhagen)
The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns aimed at influencing politics to the unintentional spreading of misinformation about public health. This development has spurred research in the area of automatic fact checking, from approaches for detecting check-worthy claims and determining the stance of tweets towards claims, to methods for determining the veracity of claims given evidence documents.
These automatic methods are often content-based, relying on natural language processing techniques, which in turn utilise deep neural networks to learn higher-order features from text in order to make predictions. As deep neural networks are black-box models, their inner workings cannot easily be explained. At the same time, it is desirable to explain how they arrive at certain decisions, especially if they are to be used for decision making. While this has been known for some time, the issues it raises have been exacerbated by models growing in size, by EU legislation requiring models used for decision making to provide explanations, and, very recently, by legislation requiring online platforms operating in the EU to report transparently on their services. Despite this, current solutions for explainability are still largely lacking in the area of fact checking.
Moreover, it is important to validate the generated explanations. A key challenge is that disagreements between explanations, whether they are manually or automatically generated, do not necessarily indicate factual errors. Rather, different explanations can be right for different reasons. As such, research on how to automatically evaluate explanations is needed.
This talk provides a brief introduction to the area of automatic fact checking, including claim check-worthiness detection, stance detection and veracity prediction. It then presents some first solutions for generating explanations for fact checking, with a focus on how to automatically evaluate the generated explanations.
How can you join?
Register here.
(Registration closes 2 hours before the beginning of the seminar).

Speaker bio

Isabelle Augenstein is an associate professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. She also co-heads the research team at CheckStep Ltd, a content moderation start-up. Her main research interests are fact checking, low-resource learning and explainability. Before starting a faculty position, she was a postdoctoral research associate at UCL, mainly investigating machine reading from scientific articles. She has a PhD in Computer Science from the University of Sheffield. She currently holds a prestigious DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media'. Isabelle Augenstein is the current president of the ACL Special Interest Group on Representation Learning (SIGREP), as well as a co-founder of the Widening NLP (WiNLP) initiative.
