Abstract
Medical images are an important basis for diagnosing disease, but hardware and cost constraints often limit their resolution. Super-resolution technology reconstructs low-resolution medical images into high-resolution ones, enhancing image quality and thus assisting doctors in diagnosis. However, traditional super-resolution methods mainly learn the pixel-level mapping from low resolution to high resolution and do not learn high-level semantic features, so they fail to understand and exploit semantic information such as the objects being reconstructed, object attributes, and the spatial relationships between objects. In this paper, we propose a medical image super-resolution method based on semantic perception transfer learning. First, we propose a novel semantic perception super-resolution method that enables super-resolution models to perceive high-level semantics by transferring features from an image description generation (image captioning) network in natural language processing. Second, we construct a semantic feature extraction network and an image description generation network, and we jointly exploit image and text modalities to learn transferable high-level semantic features. Third, we train an end-to-end semantic perception super-resolution model by fusing dynamic perceptual convolution, the semantic extraction network, and distillation polarization self-attention. Experiments show that semantic perception transfer learning effectively improves the quality of super-resolution reconstruction.
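The transfer-learning idea described above can be illustrated with a minimal sketch: alongside the usual pixel-wise reconstruction loss, the SR output is encouraged to match the high-resolution target in the feature space of a frozen semantic encoder (standing in for the transferred captioning-network features). The encoder `semantic_features`, the weight `lam`, and all names here are hypothetical illustrations, not the paper's actual implementation; a fixed random projection stands in for the real frozen network.

```python
import numpy as np

def semantic_features(img, W):
    # Stand-in for the frozen semantic encoder transferred from the
    # image description generation network (hypothetical: a fixed
    # linear projection of the flattened image).
    return W @ img.ravel()

def sr_loss(sr, hr, W, lam=0.1):
    # Pixel-level reconstruction term (what traditional SR methods use).
    pixel = np.mean((sr - hr) ** 2)
    # Semantic perception term: distance in the frozen encoder's
    # feature space, so the model is penalized for semantic mismatch
    # even when pixel error is small.
    sem = np.mean((semantic_features(sr, W) - semantic_features(hr, W)) ** 2)
    return pixel + lam * sem

rng = np.random.default_rng(0)
hr = rng.random((8, 8))                      # toy high-resolution target
sr = hr + 0.05 * rng.standard_normal((8, 8)) # toy imperfect reconstruction
W = rng.standard_normal((16, 64))            # frozen "semantic" projection

print(sr_loss(sr, hr, W))  # small positive value
print(sr_loss(hr, hr, W))  # 0.0 for a perfect reconstruction
```

A perfect reconstruction drives both terms to zero, while `lam` trades off pixel fidelity against semantic consistency; in the actual method this role is played by features learned jointly from image and text modalities rather than a random projection.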
From: IEEE/ACM Transactions on Computational Biology and Bioinformatics