Abstract
This study introduces a novel approach to improving breast tumor classification by integrating advanced image processing techniques with a state-of-the-art Vision Transformer (ViT) model. Our methodology transforms B-mode ultrasound images into Cross-Correlated Weighted Contourlet Multi-Parametric Multi-channel (CWC-MP-MC) images. The process applies Nakagami, Normal Inverse Gaussian (NIG), and Rician Inverse Gaussian (RiIG) statistical modeling to generate three distinct channels, each representing different statistical properties of the ultrasound data. These channels are then decomposed with a multi-resolution contourlet transform and weighted by cross-correlation to produce the CWC-MP-MC image. The resulting composite image encapsulates comprehensive information about breast tissue characteristics, offering a robust representation for tumor classification. For classification, we use an optimized ViT architecture designed to be lightweight, fine-tuned for this task, and suitable for operation on low-configuration GPUs. Experiments on three publicly available datasets (Mendeley, UDIAT, and BUSI) demonstrate that the proposed methodology achieves accuracy, sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and F1 scores exceeding 98% when CWC-MP-MC images are employed.
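The multi-parametric channel construction can be illustrated with a minimal sketch. The sliding-window parameter maps, the use of SciPy's `nakagami` and `norminvgauss` fits, the omission of the RiIG model (which has no standard SciPy implementation) and of the contourlet decomposition, and the correlation-based weighting rule below are all simplifying assumptions for illustration, not the authors' exact implementation.

```python
# Illustrative sketch of a multi-parametric, cross-correlation-weighted
# multi-channel image. Window size, parameter choice, and weighting rule
# are assumptions; the contourlet step and RiIG channel are omitted.
import numpy as np
from scipy.stats import nakagami, norminvgauss

def parametric_map(envelope, dist, win=8):
    """Fit a statistical model in non-overlapping windows; return a map of the shape parameter."""
    H, W = envelope.shape
    out = np.zeros((H // win, W // win))
    for i in range(0, H - win + 1, win):
        for j in range(0, W - win + 1, win):
            patch = envelope[i:i + win, j:j + win].ravel()
            params = dist.fit(patch)
            out[i // win, j // win] = params[0]  # first (shape) parameter as channel value
    return out

def cwc_mp_mc(envelope):
    """Stack Nakagami- and NIG-derived channels, weighted by correlation with their mean."""
    ch = [parametric_map(envelope, nakagami), parametric_map(envelope, norminvgauss)]
    ch = [(c - c.min()) / (np.ptp(c) + 1e-8) for c in ch]        # normalize each channel
    ref = np.mean(ch, axis=0)                                    # reference for weighting
    weights = [np.corrcoef(c.ravel(), ref.ravel())[0, 1] for c in ch]
    return np.stack([w * c for w, c in zip(weights, ch)], axis=-1)

if __name__ == "__main__":
    demo = np.abs(np.random.randn(32, 32)) + 0.1  # stand-in for a B-mode envelope patch
    img = cwc_mp_mc(demo)
    print(img.shape)  # (4, 4, 2): two weighted parametric channels
```

In the actual pipeline a third (RiIG) channel and a contourlet-domain weighting would replace the simplified steps above; the weighted channels are then stacked and fed to the lightweight ViT classifier.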