Although semi-supervised image segmentation has achieved significant success in many respects, further improvement in segmentation accuracy is still needed for practical applications. In addition, far fewer networks are designed specifically for segmenting 3D images than for 2D images, and their performance is notably inferior. To improve the efficiency of network training, various attention mechanisms have been integrated into network models; however, these networks do not fully exploit the available spatial and channel information. This is particularly true for 3D medical images, whose rich and tightly coupled spatial and channel information remains largely underexplored. This paper proposes a bidirectional and efficient attention parallel network (BEAP-Net). Specifically, we introduce two modules, Supreme Channel Attention (SCA) and Parallel Spatial Attention (PSA), which extract additional spatial and channel-specific feature information and exploit it effectively. We combine the principles of consistency training and entropy regularization to enable mutual learning among sub-models. We evaluate the proposed BEAP-Net on two public 3D medical datasets, LA and Pancreas. The network outperforms eight state-of-the-art algorithms and is better suited for 3D medical images, achieving new best semi-supervised segmentation performance on the LA dataset. Ablation studies further validate the effectiveness of each component of the proposed model. Moreover, the proposed SCA and PSA modules can be seamlessly integrated into other 3D medical image segmentation networks to yield significant performance gains.
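The abstract does not detail the internals of SCA and PSA or the exact training objective. The sketch below is therefore only an illustrative outline, assuming a squeeze-and-excitation-style channel attention, a pooled-statistics spatial attention fused in parallel, and a supervised loss combined with a consistency term between two sub-models plus entropy regularization; all module names, the fusion by summation, and the loss weights are assumptions, not the paper's definitions.

```python
# Hypothetical sketch: parallel channel/spatial attention for 3D feature maps and a
# consistency + entropy-regularization objective. Designs are assumed, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SCA3D(nn.Module):
    """Channel attention over 3D features (assumed squeeze-and-excitation-style design)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, D, H, W)
        w = x.mean(dim=(2, 3, 4))                # global average pool -> (B, C)
        w = self.fc(w).view(x.size(0), -1, 1, 1, 1)
        return x * w                             # channel-wise reweighting


class PSA3D(nn.Module):
    """Spatial attention from pooled channel statistics (assumed design)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)        # (B, 1, D, H, W)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                          # voxel-wise reweighting


class ParallelAttention(nn.Module):
    """Apply channel and spatial attention in parallel; fusion by summation is an assumption."""
    def __init__(self, channels):
        super().__init__()
        self.sca = SCA3D(channels)
        self.psa = PSA3D()

    def forward(self, x):
        return self.sca(x) + self.psa(x)


def semi_supervised_loss(logits_a, logits_b, labels, labeled_mask,
                         lambda_c=0.1, lambda_e=0.01):
    """Supervised CE on labeled volumes + consistency between two sub-models + entropy term."""
    sup = F.cross_entropy(logits_a[labeled_mask], labels[labeled_mask])   # labeled data only
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    consistency = F.mse_loss(p_a, p_b)                                    # mutual agreement
    entropy = -(p_a * torch.log(p_a + 1e-8)).sum(dim=1).mean()            # confidence on unlabeled data
    return sup + lambda_c * consistency + lambda_e * entropy
```

In a sketch like this, a `ParallelAttention` block would typically be inserted after encoder or decoder stages of a 3D segmentation backbone (e.g., a V-Net-style network), and `logits_a` / `logits_b` would come from two sub-models or decoder branches trained to agree on unlabeled volumes.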