Abstract

In recent years, research on image super-resolution has focused mainly on the single-image super-resolution (SISR) task, which estimates a high-resolution (HR) image from a single low-resolution (LR) input. Because the SISR problem is ill-posed, these methods can only hallucinate high-frequency detail by learning image priors. Multi-frame super-resolution (MFSR), in contrast, can reconstruct rich detail by exploiting the spatial and temporal differences between frames. With the growing popularity of array camera technology, this key advantage makes MFSR an important problem for practical applications. We propose a new architecture for the multi-frame image super-resolution task. Our network takes multiple noisy images as input and produces a denoised, super-resolved RGB image as output. First, we align the frames by estimating dense pixel-wise optical flow between them and construct an adaptive fusion module to merge the information from all frames. We then build a feature fusion network that jointly fuses the deep features of the multiple LR images with the internal features of the initial high-resolution estimate. To evaluate performance on real-world data, we use the BurstSR dataset, which contains real images captured by a smartphone and a high-resolution SLR camera, and demonstrate the effectiveness of the proposed multi-frame super-resolution algorithm.
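A minimal sketch of the pipeline described above is given below, assuming a PyTorch-style implementation. The flow-based warping, the per-pixel fusion weights, the channel sizes, and all module names (`warp`, `AdaptiveFusion`, `FusionSR`) are illustrative placeholders, not the authors' actual network.

```python
# Hypothetical sketch of flow alignment -> adaptive fusion -> feature fusion + upsampling.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(frame, flow):
    """Warp one burst frame toward the reference frame using dense optical flow.

    frame: (B, C, H, W); flow: (B, 2, H, W) pixel displacements (dx, dy).
    """
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=frame.device),
                            torch.arange(w, device=frame.device), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1, 2, H, W)
    coords = base + flow                                       # sampling positions
    # grid_sample expects coordinates normalized to [-1, 1], ordered (x, y).
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                       # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

class AdaptiveFusion(nn.Module):
    """Predict per-pixel, per-frame weights and fuse the aligned frames."""
    def __init__(self, num_frames, channels=3):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Conv2d(num_frames * channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_frames, 3, padding=1))

    def forward(self, aligned):                                # aligned: (B, N, C, H, W)
        b, n, c, h, w = aligned.shape
        logits = self.weight_net(aligned.reshape(b, n * c, h, w))
        weights = F.softmax(logits, dim=1).unsqueeze(2)        # (B, N, 1, H, W)
        return (weights * aligned).sum(dim=1)                  # (B, C, H, W)

class FusionSR(nn.Module):
    """Fuse the merged LR features with an initial HR estimate and upsample."""
    def __init__(self, channels=3, scale=4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, fused, reference):
        # Initial HR estimate from the reference frame plus a learned detail residual.
        initial_hr = F.interpolate(reference, scale_factor=self.scale,
                                   mode="bilinear", align_corners=False)
        return initial_hr + self.body(fused)

if __name__ == "__main__":
    frames = torch.randn(1, 8, 3, 48, 48)   # burst of 8 noisy LR frames
    flows = torch.zeros(1, 8, 2, 48, 48)    # dense flow toward frame 0 (placeholder)
    aligned = torch.stack([warp(frames[:, i], flows[:, i]) for i in range(8)], dim=1)
    fused = AdaptiveFusion(num_frames=8)(aligned)
    sr = FusionSR(scale=4)(fused, frames[:, 0])
    print(sr.shape)                         # torch.Size([1, 3, 192, 192])
```

In this sketch the fusion weights are normalized per pixel with a softmax over frames, so poorly aligned or noisy frames can be down-weighted locally; the real adaptive fusion module may differ.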
