Abstract

In image processing, enhancing the quality of images is known as the super-resolution (SR) problem. Among SR methods, the super-resolution generative adversarial network (SRGAN) was introduced to generate SR images from low-resolution inputs. As it is of the utmost importance to preserve the size and shape of medical images while enlarging them, we propose a novel super-resolution model based on a generative adversarial network that generates SR images with finer details, higher quality, and less blurring. By widening the residual blocks and adding a self-attention layer, our model becomes robust and generalizable, as it extracts the most important parts of the images before up-sampling. We name our proposed model wide-attention SRGAN (WA-SRGAN). Moreover, we apply the improved Wasserstein loss with gradient penalty (WGAN-GP) to stabilize training. To train our model, we use images from the Camelyon16 database and enlarge them by $2\times $ , $4\times $ , $8\times $ , and $16\times $ upscale factors, with ground truth of size $256\times 256\times 3$ . Furthermore, two normalization methods, batch normalization and weight normalization, are applied, and we observe that weight normalization is an enabling factor for improving performance in terms of SSIM. Several evaluation metrics, including PSNR, MSE, SSIM, MS-SSIM, and QILV, are used for a comprehensive objective comparison with other methods, including SRGAN, A-SRGAN, and bicubic interpolation. We also perform classification with a deep learning model, ResNeXt-101 ( $32\times 8\text{d}$ ), on super-resolution, high-resolution, and low-resolution images and compare the outcomes in terms of accuracy.
Finally, the results on breast cancer histopathology images show the superiority of our model, using weight normalization and a batch size of one, in terms of restoration of color and texture details.
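As a concrete aside on the objective metrics named above, PSNR is derived directly from MSE; a minimal pure-Python sketch (operating on flat pixel sequences rather than full image arrays, purely for illustration):

```python
import math

def mse(x, y):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the reconstruction
    is closer to the ground-truth image."""
    err = mse(x, y)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / err)
```

For real images the same formulas are applied over every pixel of every channel; libraries such as scikit-image provide vectorized versions.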

Highlights

  • Due to the cost of the hardware and storage required to acquire high-resolution (HR) images, reconstructing HR images from low-resolution (LR) medical images is advisable

  • We focused on the Structural Similarity Index (SSIM) metric, as a higher SSIM score indicates a better result in terms of preserving context and color information

  • The comparison of the results shows the positive impact of our proposed method, WA-SRGAN, in the preprocessing phase, as classification accuracy on the super-resolution (SR) images is almost the same as that obtained on HR images
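For reference, the SSIM score emphasized above compares luminance, contrast, and structure between two images. A simplified single-window sketch in pure Python (the published metric averages this over local sliding windows; the constants follow the standard $K_1=0.01$, $K_2=0.03$ choices):

```python
def ssim(x, y, max_val=255.0):
    """Global (single-window) SSIM between two equal-length pixel
    sequences; 1.0 means structurally identical."""
    c1 = (0.01 * max_val) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * max_val) ** 2  # stabilizer for the contrast term
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((a - mu_x) ** 2 for a in x) / n
    var_y = sum((b - mu_y) ** 2 for b in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

An image compared against itself scores exactly 1.0, and any luminance or structure mismatch pushes the score below 1.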


Introduction

Due to the cost of the hardware and storage required to acquire high-resolution (HR) images, reconstructing HR images from low-resolution (LR) medical images is advisable. To tackle the hardware cost, image super-resolution methods are drawing attention for reconstructing poor-quality images with missing pixels. Given the quantitative performance of the applied SR methods, generative adversarial networks are gaining attention from researchers, as they are able to reconstruct images in a realistic manner [1]. In this paper, we propose an SR method based on a super-resolution generative adversarial network (SRGAN) with the aim of reconstructing histopathology breast cancer images. We use wide residual blocks instead of standard residual blocks, and in both the generator and the discriminator we utilize a self-attention layer to capture the most important features.
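The self-attention layer used in both networks can be sketched as follows. This is a minimal pure-Python illustration of attention over flattened feature-map positions; the projection matrices `Wq`, `Wk`, `Wv` stand in for the 1x1 convolutions of a SAGAN-style attention layer and are illustrative assumptions, not the model's trained parameters:

```python
import math

def matmul(A, B):
    """Multiply an (n x k) matrix by a (k x m) matrix (lists of lists)."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def softmax_rows(S):
    """Row-wise softmax, so each position's attention weights sum to 1."""
    out = []
    for row in S:
        mx = max(row)  # subtract max for numerical stability
        exps = [math.exp(v - mx) for v in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

def self_attention(X, Wq, Wk, Wv):
    """X: N positions x C channels (a flattened feature map).
    Returns the attention-weighted features and the N x N attention map,
    which lets every position attend to every other position."""
    Q = matmul(X, Wq)                          # queries
    K = matmul(X, Wk)                          # keys
    V = matmul(X, Wv)                          # values
    A = softmax_rows(matmul(Q, transpose(K)))  # N x N attention map
    return matmul(A, V), A
```

In the full model the attended features are typically scaled by a learnable parameter and added back to the input as a residual; that detail is omitted here for brevity.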


