Hybrid Reinforced Medical Report Generation with M-Linear Attention and Repetition Penalty

Zhenghua Xu, Wenting Xu, Ruizhi Wang, Junyang Chen, Chang Qi and Thomas Lukasiewicz

Abstract

To reduce doctors' workload, deep-learning-based automatic medical report generation has recently attracted increasing research attention, where deep convolutional neural networks (CNNs) encode the input images and recurrent neural networks (RNNs) decode the visual features into medical reports. However, these state-of-the-art methods suffer mainly from three shortcomings: (i) incomprehensive optimization, (ii) low-order and unidimensional attention mechanisms, and (iii) repeated generation. In this article, we propose a hybrid reinforced medical report generation method with m-linear attention and a repetition penalty mechanism (HReMRG-MR) to overcome these problems. Specifically, a hybrid reward with different weights is employed to remedy the limitations of single-metric-based rewards, and a search algorithm with linear complexity is proposed to approximate the best weight combination. Furthermore, we use m-linear attention modules to explore high-order feature interactions and to achieve multi-modal reasoning, while a repetition penalty penalizes repeated terms during the model's training process. Extensive experimental studies on two public datasets show that HReMRG-MR greatly outperforms the state-of-the-art baselines on all metrics. We also conducted a series of ablation experiments to demonstrate the effectiveness of all proposed components, and a reward-search toy experiment to show that the proposed search approach significantly reduces the search time while approximating the best performance.
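As a rough illustration only (not the authors' implementation), the hybrid reward can be thought of as a weighted combination of per-metric scores that is further discounted when the generated report contains repeated n-grams. The sketch below assumes precomputed metric scores and purely hypothetical weight values and penalty strength; the paper itself applies its repetition penalty during training.

```python
from collections import Counter

def repetition_penalty(tokens, n=3, alpha=0.5):
    """Discount factor in (0, 1]: the more duplicated n-grams, the smaller the factor.
    Illustrative sketch only; parameters n and alpha are hypothetical."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 1.0
    counts = Counter(ngrams)
    repeats = sum(c - 1 for c in counts.values())  # number of duplicated n-grams
    return 1.0 / (1.0 + alpha * repeats)

def hybrid_reward(metric_scores, weights, tokens):
    """Weighted sum of per-metric scores (e.g., BLEU, CIDEr, ROUGE-L),
    scaled by the repetition penalty. Weights here are placeholders, not the
    combination found by the paper's search algorithm."""
    base = sum(weights[m] * s for m, s in metric_scores.items())
    return base * repetition_penalty(tokens)

# Example usage with made-up scores and weights
scores = {"BLEU-4": 0.31, "CIDEr": 0.42, "ROUGE-L": 0.36}
weights = {"BLEU-4": 0.3, "CIDEr": 0.5, "ROUGE-L": 0.2}
report = "no acute cardiopulmonary abnormality no acute cardiopulmonary abnormality".split()
print(hybrid_reward(scores, weights, report))
```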

Journal
IEEE Transactions on Neural Networks and Learning Systems
Note
Accepted for publication
Year
2023