Abstract

This paper presents a generative adversarial network (GAN) combined with a patch-match algorithm to realize high-quality digital zooming using two camera modules with different focal lengths. In a dual-camera system, the shorter-focal-length module produces a wide-view image at low resolution, while the longer-focal-length module produces a tele-view image via optical zooming. The long-focal image contains more detail than the short-focal image and can therefore guide the short-focal image in reconstructing its high-frequency content. First, a feature extraction block (FEB) is proposed to extract features from the long-focal and short-focal images and reconstruct the wide-view image at different resolutions. Next, a patch-match algorithm is integrated into a convolutional neural network (CNN) to fuse the long-focal information with the short-focal image and generate a new fused image. Finally, the fused image and the short-focal image are merged with a feature fusion block (FFB) to predict the high-resolution image. In addition, a generative adversarial network filters the information integrated by the preceding networks and outputs the zoomed image. Extensive experiments on benchmark datasets show that our algorithm achieves favorable performance against state-of-the-art methods.
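The following is a minimal PyTorch sketch of the pipeline described above (FEB, patch-match guided fusion, FFB). All module interfaces, layer sizes, and names are illustrative assumptions rather than the paper's actual architecture; the brute-force cosine-similarity matching over unfolded patches merely stands in for the patch-match step, and the discriminator used for adversarial training is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FEB(nn.Module):
    """Feature extraction block (assumed form): shared conv features for both views."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class PatchMatchFusion(nn.Module):
    """Stand-in for the patch-match step: for each short-focal feature patch, find the
    most similar long-focal feature patch by cosine similarity and paste it back.
    Brute-force matching, so this is only practical on small feature maps."""
    def __init__(self, patch=3):
        super().__init__()
        self.patch = patch

    def forward(self, f_short, f_long):
        b, c, h, w = f_short.shape
        pad = self.patch // 2
        q = F.normalize(F.unfold(f_short, self.patch, padding=pad), dim=1)  # (b, c*p*p, Hs*Ws)
        v = F.unfold(f_long, self.patch, padding=pad)                        # (b, c*p*p, Hl*Wl)
        k = F.normalize(v, dim=1)
        sim = torch.bmm(q.transpose(1, 2), k)                # (b, Hs*Ws, Hl*Wl)
        idx = sim.argmax(dim=-1)                             # best long-focal patch per location
        matched = torch.gather(v, 2, idx.unsqueeze(1).expand(-1, v.size(1), -1))
        fused = F.fold(matched, (h, w), self.patch, padding=pad)
        # Divide by the patch-overlap counts that fold() accumulates.
        ones = F.unfold(torch.ones_like(f_short), self.patch, padding=pad)
        overlap = F.fold(ones, (h, w), self.patch, padding=pad)
        return fused / overlap


class FFB(nn.Module):
    """Feature fusion block (assumed form): merges fused and short-focal features."""
    def __init__(self, channels=64):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, fused, f_short):
        return self.merge(torch.cat([fused, f_short], dim=1))


class ZoomGenerator(nn.Module):
    """Generator path only: FEB -> patch-match fusion -> FFB. A discriminator (not shown)
    would supply the adversarial loss mentioned in the abstract."""
    def __init__(self):
        super().__init__()
        self.feb = FEB()
        self.match = PatchMatchFusion()
        self.ffb = FFB()

    def forward(self, short_img, long_img):
        f_short = self.feb(short_img)
        f_long = self.feb(long_img)
        fused = self.match(f_short, f_long)
        return self.ffb(fused, f_short)
```

As a usage note under these assumptions, `ZoomGenerator()(short_img, long_img)` takes two 3-channel image tensors (e.g. pre-aligned crops of the wide-view and tele-view frames) and returns a 3-channel prediction at the short-focal image's spatial size.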
