e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks

Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata and Thomas Lukasiewicz

Abstract

A growing number of recent works have introduced models capable of generating natural language explanations (NLEs) for their predictions on vision-language (VL) tasks. Such models are appealing because they can provide human-friendly and comprehensive explanations. However, there is still a lack of unified evaluation approaches for the explanations generated by these models. Moreover, there are currently only a few datasets of NLEs for VL tasks. In this work, we introduce e-ViL, a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks. e-ViL spans four models and three datasets. Both automatic metrics and human evaluation are used to assess model-generated explanations. We also introduce e-SNLI-VE, the largest existing VL dataset with NLEs (over 430k instances). Finally, we propose a new model that combines UNITER, which learns joint embeddings of images and text, and GPT-2, a pre-trained language model that is well-suited for text generation. It surpasses the previous state of the art by a large margin across all datasets.
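As a rough illustration of the architecture described above (not the authors' implementation), the sketch below shows one way joint image-text embeddings from a VL encoder could condition GPT-2 to generate explanations. The VisionLanguageEncoder class is a hypothetical stand-in for UNITER, and the feature dimensions (2048-dimensional region features, 768-dimensional hidden states) are illustrative assumptions.

    # Minimal sketch: conditioning GPT-2 on joint vision-language embeddings.
    # VisionLanguageEncoder is a placeholder for UNITER; all dimensions are
    # illustrative, not taken from the paper.
    import torch
    import torch.nn as nn
    from transformers import GPT2LMHeadModel

    class VisionLanguageEncoder(nn.Module):
        """Stand-in for UNITER: fuses image-region and text-token embeddings."""
        def __init__(self, hidden_dim=768):
            super().__init__()
            # Project e.g. 2048-dim detector region features to the text dimension.
            self.proj = nn.Linear(2048, hidden_dim)

        def forward(self, region_feats, text_embeds):
            # Toy fusion: concatenate projected visual features with text embeddings.
            return torch.cat([self.proj(region_feats), text_embeds], dim=1)

    class ExplainerVLGPT2(nn.Module):
        """Feeds fused VL embeddings to GPT-2 as a prefix via inputs_embeds."""
        def __init__(self):
            super().__init__()
            self.encoder = VisionLanguageEncoder()
            self.decoder = GPT2LMHeadModel.from_pretrained("gpt2")

        def forward(self, region_feats, text_embeds):
            prefix = self.encoder(region_feats, text_embeds)
            # GPT-2 accepts precomputed embeddings of shape (batch, seq, 768).
            return self.decoder(inputs_embeds=prefix).logits

    # Toy usage with random features: 36 image regions and 10 text tokens.
    model = ExplainerVLGPT2()
    regions = torch.randn(1, 36, 2048)
    text = torch.randn(1, 10, 768)
    logits = model(regions, text)  # shape: (1, 46, vocab_size)

In practice, a model of this kind would be trained so that the decoder learns to continue the fused prefix with an answer and its natural language explanation; the sketch only shows how the two components could be wired together.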

Book Title
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Virtual Conference, October 11–17, 2021
Month
October
Year
2021