Abstract

High-quality medical images are not only an important basis for clinical diagnosis and treatment but are also beneficial to downstream tasks such as image analysis. Although many medical image enhancement methods achieve good results, some still fall short in homogenizing the illumination distribution and preserving texture details, and may even introduce boundary-artifact noise. To address these problems, this paper proposes a multi-scale attention generative adversarial network (MAGAN) for medical image enhancement that works with unpaired images. MAGAN is trained adversarially with two generators and two discriminators. It fuses multi-scale information during feature extraction by building a feature pyramid, and filters out irrelevant activations to highlight important regions according to the attention distribution, which benefits imaging. Moreover, MAGAN constrains the quality of the enhanced image from the perspectives of illumination distribution, texture details, deep semantic features, and smoothness, thereby improving the enhancement effect. Experiments comparing against six state-of-the-art methods show that MAGAN achieves the most significant enhancement effect and also performs best on the downstream task of image segmentation.
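The two mechanisms named above, feature-pyramid fusion of multi-scale information and attention-based filtering of irrelevant activations, can be illustrated with a minimal NumPy sketch. This is not the paper's actual network; the `upsample2x`, `attention_gate`, and `fuse_pyramid` helpers, the nearest-neighbor upsampling, and the 1x1 sigmoid attention weights are all illustrative assumptions standing in for learned layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample2x(feat):
    # Nearest-neighbor upsampling of a (C, H, W) feature map to (C, 2H, 2W);
    # a learned network would use transposed convolution or interpolation.
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def attention_gate(feat, w):
    # Hypothetical attention gate: a 1x1 channel mix (weights w of shape
    # (1, C)) followed by a sigmoid gives a spatial attention map in (0, 1);
    # multiplying by it suppresses low-relevance activations.
    attn = sigmoid(np.tensordot(w, feat, axes=([1], [0])))  # -> (1, H, W)
    return feat * attn

def fuse_pyramid(coarse, fine, w):
    # One feature-pyramid step: upsample the coarse level, add the
    # finer level (multi-scale fusion), then apply the attention gate.
    fused = upsample2x(coarse) + fine
    return attention_gate(fused, w)

rng = np.random.default_rng(0)
coarse = rng.standard_normal((8, 16, 16))  # low-resolution features
fine = rng.standard_normal((8, 32, 32))    # high-resolution features
w = 0.1 * rng.standard_normal((1, 8))      # illustrative attention weights
out = fuse_pyramid(coarse, fine, w)
print(out.shape)  # (8, 32, 32)
```

Because the sigmoid attention map lies strictly in (0, 1), the gate can only attenuate activations, never amplify them, which is the filtering behavior the abstract describes.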
