Abstract

Optical zooming is an important feature of imaging systems. In this paper, we investigate a low-cost signal processing alternative to optical zooming: synthetic zooming by super-resolution (SR) techniques. Synthetic zooming is achieved by registering a sequence of low-resolution (LR) images acquired at varying focal lengths and reconstructing the SR image at a larger focal length, i.e., at increased spatial resolution. Under the assumptions of constant scene depth and constant zooming speed, we argue that the motion trajectories of all physical points are related to each other by a unique vanishing point, and we present a robust technique for estimating its 3D coordinate. Such line-geometry-based registration is the foundation of SR for synthetic zooming. We also address the issue of data inconsistency arising from the varying focal length of the optical lens during the zooming process; to overcome it, we propose a two-stage Delaunay-triangulation-based interpolation for fusing the LR image data. In addition, we present a PDE-based nonlinear deblurring method to accommodate the blindness and variation of the sensor point spread function. Simulation results with real-world images verify the effectiveness of the proposed SR techniques for synthetic zooming.
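
As a brief illustration of why such a unique vanishing point exists, consider a minimal sketch assuming an ideal pinhole camera with a fixed principal point $(c_x, c_y)$, a fronto-parallel scene at depth $Z$, and a linear focal-length ramp $f(t) = f_0 + vt$ (this notation is ours, not the paper's):

$$
x(t) = c_x + \frac{f_0 + vt}{Z}\,X, \qquad y(t) = c_y + \frac{f_0 + vt}{Z}\,Y,
$$

so each scene point $(X, Y, Z)$ traces a straight line in $(x, y, t)$ space, and every such line passes through the single 3D point $(c_x, c_y, -f_0/v)$, the instant at which the extrapolated focal length vanishes.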

Highlights

  • Image resolution is a critical factor affecting the quality of images and video

  • Under the assumption that the region of interest has the same scene depth and the zooming speed is approximately constant, we argue that all images are linked by a simple line-geometric model: the projections of any point in the physical scene at different focal lengths lie along a ray, and the rays corresponding to different physical points intersect at a unique point called the “vanishing point” (VP)

  • One particular challenge with SR for synthetic zooming is that the sensor point spread function (PSF) is unknown and varies along the temporal axis. This observation gives rise to the issue of data consistency in SR image reconstruction; that is, if LR image data correspond to different PSFs, how do we fuse them together? We propose to divide the collection of LR frames into consecutive groups and employ Delaunay-triangulation (DT)-based interpolation [14] to fuse the data for each group separately (a sketch of the per-group fusion step follows this list)
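
As a rough illustration of the per-group fusion step, the sketch below uses SciPy's griddata, whose 'linear' method performs Delaunay-triangulation-based piecewise-linear interpolation of scattered samples. The function fuse_group, its argument layout, and the nearest-neighbor hole filling are our assumptions for illustration, not the paper's implementation; the paper's two-stage scheme further combines the per-group results.

```python
import numpy as np
from scipy.interpolate import griddata

def fuse_group(sample_xy, sample_vals, out_shape):
    """Fuse one group of registered LR samples onto the HR grid via
    Delaunay-triangulation (piecewise-linear) interpolation.

    sample_xy   : (N, 2) registered sample positions (x, y) on the HR grid
    sample_vals : (N,)   corresponding intensities taken from the LR frames
    out_shape   : (H, W) size of the desired HR image
    """
    H, W = out_shape
    gy, gx = np.mgrid[0:H, 0:W]
    # method='linear' triangulates the scattered samples (Delaunay) and
    # interpolates linearly inside each triangle
    hr = griddata(sample_xy, sample_vals, (gx, gy), method='linear')
    # fill pixels outside the convex hull with nearest-neighbor values
    holes = np.isnan(hr)
    if holes.any():
        hr[holes] = griddata(sample_xy, sample_vals,
                             (gx[holes], gy[holes]), method='nearest')
    return hr
```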


Summary

INTRODUCTION

Image resolution is a critical factor affecting the quality of images and video. To increase the spatial resolution, we can increase the sampling density or the focal length of CCD sensors [1]. Under the assumption that the region of interest has the same scene depth (e.g., a flat surface parallel to the imaging plane) and the zooming speed is approximately constant, we argue that all images are linked by a simple line-geometric model: the projections of any point in the physical scene at different focal lengths lie along a ray, and the rays corresponding to different physical points intersect at a unique point called the “vanishing point” (VP). Such an observation motivates us to solve the registration problem for synthetic zooming by estimating the 3D coordinate of the VP.
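
To make the registration idea concrete, the following is a minimal Python sketch (not the paper's robust estimator) of a least-squares intersection of tracked 2D motion trajectories. The function name estimate_vanishing_point, the input layout, and the toy data are ours for illustration only; the paper additionally recovers the third (temporal) coordinate of the VP.

```python
import numpy as np

def estimate_vanishing_point(trajectories):
    """Least-squares intersection of per-point motion trajectories.

    trajectories : list of (T_i, 2) arrays, the tracked image coordinates of
                   each physical point across the zooming sequence.
    Returns the 2D point closest (in least squares) to all fitted lines.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for pts in trajectories:
        pts = np.asarray(pts, dtype=float)
        p = pts.mean(axis=0)                   # a point on the fitted line
        _, _, vt = np.linalg.svd(pts - p)      # line fit via PCA
        d = vt[0]                              # unit direction of the line
        n = np.array([-d[1], d[0]])            # unit normal to the line
        P = np.outer(n, n)                     # projector onto the normal
        A += P
        b += P @ p
    return np.linalg.solve(A, b)               # least-squares intersection

# toy usage: trajectories radiating from a hypothetical VP at (120, 80)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    center = np.array([120.0, 80.0])
    trajs = []
    for ang in np.linspace(0, np.pi, 6, endpoint=False):
        d = np.array([np.cos(ang), np.sin(ang)])
        s = np.linspace(5, 40, 8)[:, None]     # radial positions over frames
        trajs.append(center + s * d + 0.1 * rng.standard_normal((8, 2)))
    print(estimate_vanishing_point(trajs))     # approximately [120.  80.]
```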

PROBLEM STATEMENT
ESTIMATING VANISHING POINT OF MOTION TRAJECTORIES
Multi-frame motion estimation for zooming
Error analysis and implications
SR IMAGE RECONSTRUCTION FOR SYNTHETIC ZOOMING
Two-stage interpolation via Delaunay triangulation
PDE-based nonlinear deblurring
SIMULATION RESULTS
Vanishing point estimation
Super-resolution for synthetic zooming
CONCLUSIONS AND PERSPECTIVES
