Abstract

Skin cancer diagnosis often relies on image segmentation as a crucial aid, and high-performance segmentation can lower the risk of misdiagnosis. Many medical devices, however, have limited computing power for deploying image segmentation algorithms, while existing high-performance segmentation algorithms rely primarily on computationally intensive large models, making it difficult to meet the lightweight deployment requirements of such devices. State-of-the-art lightweight models, constrained by their structures, cannot capture both local and global feature information at lesion edges, resulting in pixel loss along lesion boundaries. To tackle this problem, we propose LeaNet, a novel U-shaped network for high-performance yet lightweight skin cancer image segmentation. Specifically, LeaNet employs multiple attention blocks in a lightweight symmetric U-shaped design. Each block contains a dilated efficient channel attention (DECA) module that captures global and local contour information, and an inverted external attention (IEA) module that improves the correlation of information across data samples. Additionally, LeaNet uses an attention bridge (AB) module to connect the left and right sides of the U-shaped architecture, thereby enhancing the model's multi-level feature extraction capability. We evaluated LeaNet on the ISIC2017 and ISIC2018 datasets. Compared with large models such as ResUNet, LeaNet improves the ACC, SEN, and SPEC metrics by 1.09%, 2.58%, and 1.6%, respectively, while reducing the parameter count and computational complexity by 570x and 1182x. Compared with lightweight models such as MALUNet, LeaNet achieves improvements of 2.07%, 4.26%, and 3.11% in ACC, SEN, and SPEC, respectively, while reducing the parameter count and computational complexity by 1.54x and 1.04x.
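
To illustrate the kind of channel-attention block the abstract describes, the following is a minimal PyTorch-style sketch of a DECA-like module. It is not the authors' implementation; the class name, kernel size, and dilation rate are illustrative assumptions, and the sketch only conveys the idea of combining ECA-style 1-D channel attention with a dilated convolution so that both local and wider-range channel interactions modulate the feature map.

```python
import torch
import torch.nn as nn


class DECASketch(nn.Module):
    """Illustrative dilated efficient channel attention block (not the authors' code)."""

    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # ECA-style 1-D convolution over the channel descriptor (local channel interaction).
        self.local_conv = nn.Conv1d(1, 1, kernel_size,
                                    padding=kernel_size // 2, bias=False)
        # Dilated 1-D convolution to widen the receptive field over channels
        # (a stand-in for the "dilated" part of DECA).
        self.dilated_conv = nn.Conv1d(1, 1, kernel_size, dilation=dilation,
                                      padding=dilation * (kernel_size // 2), bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)              # (B, 1, C) channel descriptor
        y = self.local_conv(y) + self.dilated_conv(y)
        w = self.sigmoid(y).view(b, c, 1, 1)        # per-channel attention weights
        return x * w


# Usage: the block is shape-preserving, so it can be dropped into a U-shaped encoder stage.
x = torch.randn(2, 64, 96, 96)
out = DECASketch(64)(x)
assert out.shape == x.shape
```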
