Abstract

Event cameras are bio-inspired vision sensors that measure intensity changes asynchronously with high temporal resolution. One advantage of event cameras is that they suffer far less from motion blur than traditional frame cameras when recording everyday scenes with fast-moving objects. In this paper, we formulate event-guided deblurring for traditional cameras as a residual learning task and propose corresponding network architectures for effectively learning deblurring and high frame rate video generation. We first train a modified U-Net to restore a sharp image from a blurry image using the corresponding events. We then train a similar network, in which the downsampling blocks are replaced by convolutional long short-term memory (Conv-LSTM) blocks, to recurrently generate a high frame rate video from the restored sharp image and part of the events. Benefiting from the blur-free events and the proposed learning strategy, experimental results show that the proposed method outperforms state-of-the-art methods in generating sharp images and high frame rate videos.
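The residual learning formulation described above can be illustrated with a minimal PyTorch sketch: the network predicts a correction that is added back to the blurry input rather than regressing the sharp image directly. The plain encoder-decoder here stands in for the paper's modified U-Net, and the class name, channel widths, and 5-bin event voxel grid are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualDeblurNet(nn.Module):
    """Predicts a residual that, added to the blurry frame, yields a sharp image."""
    def __init__(self, event_bins=5, base_ch=32):
        super().__init__()
        in_ch = 3 + event_bins  # blurry RGB frame concatenated with an event voxel grid
        # Encoder-decoder stand-in for the modified U-Net (downsample, then upsample).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch, 3, 4, stride=2, padding=1),
        )

    def forward(self, blurry, events):
        x = torch.cat([blurry, events], dim=1)
        residual = self.decoder(self.encoder(x))
        return blurry + residual  # residual learning: the network corrects the blurry frame

# Usage with random tensors standing in for a blurry frame and an event voxel grid.
net = ResidualDeblurNet()
blurry = torch.rand(1, 3, 128, 128)
events = torch.rand(1, 5, 128, 128)
sharp = net(blurry, events)
print(sharp.shape)  # torch.Size([1, 3, 128, 128])
```

The second network in the paper follows the same idea but inserts Conv-LSTM blocks in place of the downsampling stages so that sharp frames can be generated recurrently from successive event slices.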
