Collaborative Attention Guided Multi-Scale Feature Fusion Network for Medical Image Segmentation

Zhenghua Xu, Biao Tian, Shijie Liu, Xiangtao Wang, Di Yuan, Junhua Gu, Junyang Chen, Thomas Lukasiewicz and Victor C. M. Leung

Abstract

Medical image segmentation is an important and complex task in clinical practice, but the widely used U-Net often fails to achieve satisfactory performance in challenging clinical cases. Therefore, advanced variants of U-Net have been proposed that use multi-scale and attention mechanisms. Unlike existing works, where multi-scale features and attention are usually used independently, in this work we integrate them and propose a collaborative attention guided multi-scale feature fusion with enhanced convolution based U-Net (EC-CaM-UNet) model for more accurate medical image segmentation, in which a novel collaborative attention guided multi-scale feature fusion (CoAG-MuSF) module is proposed to highlight important (but small and unremarkable) multi-scale features and suppress irrelevant ones during model learning. Specifically, CoAG-MuSF uses a multi-dimensional collaborative attention (CoA) block to estimate local and global self-attention, which is then deeply fused with the multi-scale feature maps generated by a multi-scale (MuS) block to better highlight the important multi-scale features and suppress the irrelevant ones. Furthermore, an additional supervision path and enhanced convolution blocks are used to strengthen the model's feature learning in deep and shallow features, respectively. Experimental results on three public medical image datasets show that EC-CaM-UNet greatly outperforms state-of-the-art medical image segmentation baselines. The code will be released after acceptance.
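Since the authors' code has not yet been released, the following is a minimal PyTorch sketch of the CoAG-MuSF idea as described in the abstract: a multi-scale (MuS) block extracts feature maps at several receptive fields, and a collaborative attention (CoA) block combining channel-wise (global) and spatial (local) attention reweights the fused result. The class names, branch kernel sizes, attention design, and residual fusion here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MuSBlock(nn.Module):
    """Multi-scale block: parallel convolutions with different receptive fields (assumed design)."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)  # assumed branch kernel sizes
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate all scale branches, then fuse back to the input width.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class CoABlock(nn.Module):
    """Collaborative attention: a global (channel) and a local (spatial) attention map applied jointly."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_att = nn.Sequential(  # global context, one weight per channel
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_att = nn.Sequential(  # local context, one weight per position
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast-multiply both attention maps onto the features.
        return x * self.channel_att(x) * self.spatial_att(x)


class CoAGMuSF(nn.Module):
    """Attention-guided multi-scale fusion: CoA reweights the MuS output (residual link is assumed)."""
    def __init__(self, channels: int):
        super().__init__()
        self.mus = MuSBlock(channels)
        self.coa = CoABlock(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.coa(self.mus(x)) + x


# Usage: drop-in on a U-Net skip or decoder feature map.
feats = torch.randn(2, 64, 128, 128)
print(CoAGMuSF(64)(feats).shape)  # torch.Size([2, 64, 128, 128])
```

In this sketch the attention is computed on the multi-scale output itself, so unremarkable but relevant small-scale responses can be amplified before they re-enter the U-Net path, which matches the stated goal of highlighting important multi-scale features and suppressing irrelevant ones.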

Journal: IEEE Transactions on Network Science and Engineering
Volume: 11
Number: 2
Pages: 1857–1871
Year: 2024