Abstract
This study evaluates the performance of the MaskFormer model for segmenting and classifying breast lesions in ultrasound images. Ultrasound-based breast cancer detection faces challenges such as low image contrast and difficulty detecting small or multiple lesions, further complicated by variability in operator skill. Initial experiments with U-Net and other CNN-based models revealed constraints, such as an early plateau in model loss, indicating suboptimal learning and performance. In contrast, MaskFormer demonstrated continuous improvement, achieving higher precision in breast lesion segmentation and significantly reducing both false positives and false negatives. Comparative analysis showed MaskFormer's superior performance, with the highest precision and recall for malignant lesions and an overall mean average precision (mAP) of 0.943. The model's ability to detect a diverse range of breast lesions, including those that may be missed by the human eye, especially by less experienced practitioners, underscores its potential. These findings suggest that integrating AI models such as MaskFormer could greatly enhance ultrasound-based breast cancer detection, providing reliable, operator-independent image analysis and potentially improving patient outcomes on a global scale.
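The abstract does not specify the authors' exact MaskFormer configuration or training pipeline. As an illustration only, the sketch below shows how a publicly available MaskFormer checkpoint can be run on a single ultrasound image using the Hugging Face transformers API; the checkpoint name and image path are placeholders, and a clinically meaningful model would first be fine-tuned on annotated breast ultrasound data.

```python
# Illustrative sketch (not the authors' pipeline): MaskFormer inference on one
# ultrasound image via Hugging Face transformers. Checkpoint and file path are
# placeholders chosen for demonstration.
import torch
from PIL import Image
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation

processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
model.eval()

image = Image.open("breast_ultrasound_example.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert predicted mask logits into a per-pixel class map at the original resolution.
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]  # (height, width)
)[0]
print(segmentation.shape)
```

In a fine-tuned breast-lesion model, the predicted class map would distinguish lesion from background (and, with suitable labels, benign from malignant regions), which is the segmentation output evaluated by the mAP figure reported above.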