Ultrasound imaging is widely used to detect breast lesions, thyroid nodules, renal tumors, and other medical conditions due to its cost-effectiveness and convenience. Automatic segmentation of lesions in ultrasound images is crucial for clinical diagnosis. U-shaped networks with encoder–decoder architectures and skip connections have achieved significant success in medical image segmentation. However, segmentation remains challenging because ultrasound image boundaries are inherently blurry and lesions vary irregularly in shape and volume. Although transformers can capture global information, we posit that graph neural networks (GNNs) offer a more flexible way to capture such irregular features: a GNN builds connections among image regions and thereby exploits global information. In this study, we propose the GNN-Enhanced Dual-Branch Network (GED-Net), a robust framework for lesion segmentation in ultrasound images. The network consists of an encoder with two branches, one extracting local features with a convolutional neural network (CNN) and the other extracting global features with a GNN, together with a fusion module and a decoder. We evaluated the model on four publicly available medical ultrasound datasets (BUSI, Dataset B, DDTI, and OASBUD), where it outperformed most classical U-Net variants and state-of-the-art U-shaped networks. The source code is available at https://github.com/Yakiw/GED-Net.
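
The following is a minimal PyTorch sketch of the dual-branch idea described above, not the released GED-Net implementation: the class names (CNNBranch, GNNBranch, GEDNetSketch), the similarity-based graph construction, the concatenation fusion, and all layer sizes are illustrative assumptions.

```python
# Illustrative sketch of a dual-branch encoder (CNN + GNN) with fusion and a
# lightweight decoder. Architectural details are assumptions, not GED-Net's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CNNBranch(nn.Module):
    """Local-feature branch: plain convolutional blocks with downsampling."""
    def __init__(self, in_ch=1, ch=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)                      # (B, ch, H/2, W/2)


class GNNBranch(nn.Module):
    """Global-feature branch: treats coarse spatial positions as graph nodes and
    propagates messages over a feature-similarity adjacency (one hypothetical way
    to realize a GNN encoder; the paper may construct the graph differently)."""
    def __init__(self, in_ch=1, ch=64):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, ch, 4, stride=4)   # coarse patch embedding
        self.msg = nn.Linear(ch, ch)

    def forward(self, x):
        f = self.embed(x)                         # (B, ch, H/4, W/4)
        B, C, H, W = f.shape
        nodes = f.flatten(2).transpose(1, 2)      # (B, N, C) with N = H*W
        adj = F.softmax(nodes @ nodes.transpose(1, 2) / C ** 0.5, dim=-1)
        nodes = F.relu(nodes + adj @ self.msg(nodes))    # one message-passing step
        f = nodes.transpose(1, 2).reshape(B, C, H, W)
        return F.interpolate(f, scale_factor=2, mode="bilinear", align_corners=False)


class GEDNetSketch(nn.Module):
    """Dual-branch encoder + simple concatenation fusion + decoder (illustrative)."""
    def __init__(self, in_ch=1, ch=64, n_classes=1):
        super().__init__()
        self.cnn = CNNBranch(in_ch, ch)
        self.gnn = GNNBranch(in_ch, ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.decoder = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, n_classes, 2, stride=2),
        )

    def forward(self, x):
        local_f = self.cnn(x)                     # local detail features
        global_f = self.gnn(x)                    # globally propagated features
        fused = self.fuse(torch.cat([local_f, global_f], dim=1))
        return self.decoder(fused)                # segmentation logits


if __name__ == "__main__":
    model = GEDNetSketch()
    mask_logits = model(torch.randn(2, 1, 128, 128))
    print(mask_logits.shape)                      # torch.Size([2, 1, 128, 128])
```

For the authors' actual fusion module, training setup, and evaluation protocol, see the repository linked above.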