Explaining Chest X-ray Pathologies in Natural Language
Maxime Kayser, Cornelius Emde, Oana Camburu, Guy Parsons, Bartlomiej Papiez and Thomas Lukasiewicz
Most deep learning algorithms lack transparency in their decision-making, which limits their deployment in clinical practice. Approaches to improve transparency, especially in medical imaging, have often been shown to convey little information, be overly reassuring, or lack robustness. In this work, we introduce the task of generating natural language explanations (NLEs) to justify predictions made on medical images. NLEs are human-friendly and comprehensive, and they enable the training of intrinsically explainable models. As a first step, we create MIMIC-NLE, the first large-scale medical imaging dataset with radiological NLEs. It contains over 38,000 NLEs, which explain the presence of various thoracic pathologies and chest X-ray findings. We then propose a general approach to solve the task and evaluate several architectures on this dataset, including via clinician assessment.