Multi-ConDoS: Multimodal Contrastive Domain Sharing Generative Adversarial Networks for Self-Supervised Medical Image Segmentation

Jiaojiao Zhang, Shuo Zhang, Xiaoqian Shen, Thomas Lukasiewicz and Zhenghua Xu

Abstract

Existing self-supervised medical image segmentation methods usually encounter the domain shift problem (i.e., the input distribution of pretraining differs from that of fine-tuning) and/or the multimodality problem (i.e., they are based on single-modal data only and cannot exploit the rich multimodal information in medical images). To solve these problems, in this work, we propose multimodal contrastive domain sharing (Multi-ConDoS) generative adversarial networks to achieve effective multimodal contrastive self-supervised medical image segmentation. Compared to existing self-supervised approaches, Multi-ConDoS has the following three advantages: (i) it utilizes multimodal medical images to learn more comprehensive object features via multimodal contrastive learning; (ii) domain translation is achieved by integrating the cyclic learning strategy of CycleGAN and the cross-domain translation loss of Pix2Pix; (iii) novel domain-sharing layers are introduced to learn not only domain-specific but also domain-sharing information from the multimodal medical images. Extensive experiments on two public multimodal medical image segmentation datasets demonstrate that (i) Multi-ConDoS greatly outperforms the state-of-the-art self-supervised and semi-supervised medical image segmentation baselines, and (ii) the above three improvements are all effective and essential for Multi-ConDoS to achieve such superior performance.
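For intuition only, the combination of objectives named in the abstract (CycleGAN-style cycle consistency, a Pix2Pix-style cross-domain translation loss, and a multimodal contrastive term) can be sketched roughly as below. This is a hypothetical PyTorch illustration, not the authors' implementation: the translators G_ab/G_ba, the shared encoder, the assumption of paired (registered) modalities x_a and x_b, and all loss weights are placeholders, and the adversarial discriminator losses of the GANs are omitted.

import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    # Contrastive (InfoNCE) loss between embeddings of two modalities;
    # matching samples in a batch are treated as positives.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature              # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

def pretraining_loss(G_ab, G_ba, encoder, x_a, x_b,
                     lam_cyc=10.0, lam_pix=10.0, lam_con=1.0):
    # Illustrative self-supervised objective for paired modalities A and B.
    fake_b = G_ab(x_a)                                # translate A -> B
    fake_a = G_ba(x_b)                                # translate B -> A

    # CycleGAN-style cycle consistency: A -> B -> A should recover A (and vice versa).
    loss_cyc = F.l1_loss(G_ba(fake_b), x_a) + F.l1_loss(G_ab(fake_a), x_b)

    # Pix2Pix-style paired translation loss (assumes the modalities are registered).
    loss_pix = F.l1_loss(fake_b, x_b) + F.l1_loss(fake_a, x_a)

    # Multimodal contrastive term on encoder features of the two modalities.
    loss_con = info_nce(encoder(x_a), encoder(x_b))

    return lam_cyc * loss_cyc + lam_pix * loss_pix + lam_con * loss_con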

Journal
IEEE Transactions on Medical Imaging
Year
2023