Abstract

Breast cancer is widespread among women and carries a significant mortality rate, making early detection crucial for effective treatment. While mammograms provide detailed anatomical visuals, their diagnostic value is hindered by low sensitivity and high background noise, especially in dense breast tissue. This study introduces a new network architecture that integrates generative adversarial networks (GANs) and vision transformers to improve reference-based super-resolution. The proposed model combines image generation and classification in a unified framework, eliminating the need for a separate classifier during training. To enhance GAN robustness, a two-channel input approach is employed, while transformer and residual-learning techniques improve overall efficiency. The study further introduces a synthetic-image-based model to improve feature detection for breast cancer classification. The proposed model captures information variability across different representations, enhancing accuracy and reducing computational time. On the INbreast dataset, the results indicate the model's capability to generate high-quality images, with sensitivity and accuracy values of 0.932 and 0.988, respectively.
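For intuition, the sketch below illustrates in PyTorch how the pieces named in the abstract could fit together: a transformer block with residual connections, and a discriminator whose second output head doubles as the lesion classifier so no separate classifier is trained. All module names, layer sizes, and the two-channel input convention are assumptions for illustration; the paper's actual architecture may differ.

```python
# Minimal sketch of the unified GAN + vision-transformer idea from the abstract.
# Everything here (names, channel counts, head sizes) is assumed, not the
# authors' published architecture.
import torch
import torch.nn as nn


class ResidualTransformerBlock(nn.Module):
    """Self-attention block with residual connections (assumed design)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        a, _ = self.attn(h, h, h)
        x = x + a                       # residual learning around attention
        return x + self.mlp(self.norm(x))  # residual learning around the MLP


class UnifiedDiscriminator(nn.Module):
    """Discriminator with two heads: real/fake for the GAN objective and a
    class score, so image generation and classification share one network."""

    def __init__(self, in_ch: int = 2, dim: int = 64, n_classes: int = 2):
        super().__init__()
        # in_ch = 2 mirrors the abstract's two-channel approach, e.g. the
        # low-resolution mammogram stacked with a reference image (assumed).
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(dim, dim * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(dim * 2, 1)          # real vs. generated
        self.cls_head = nn.Linear(dim * 2, n_classes)  # benign vs. malignant

    def forward(self, x: torch.Tensor):
        f = self.features(x)
        return self.adv_head(f), self.cls_head(f)


if __name__ == "__main__":
    tokens = torch.randn(1, 16, 64)            # 16 image patches, 64-dim each
    out = ResidualTransformerBlock(64)(tokens)
    adv, cls = UnifiedDiscriminator()(torch.randn(1, 2, 128, 128))
    print(out.shape, adv.shape, cls.shape)     # [1,16,64] [1,1] [1,2]
```

The design point the abstract emphasizes is the shared network: because the classification head sits on the discriminator's features, the adversarial training signal and the classification signal regularize each other, rather than requiring a separately trained classifier.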
