Abstract

For several decades, camera spatial resolution has steadily increased with the evolution of CMOS technology. Image sensors provide more and more pixels, imposing new constraints on the optics required to exploit them. As an alternative, promising super-resolution (SR) techniques reconstruct high-resolution images or video without modifying the sensor architecture. However, most SR implementations are far from reaching real-time performance on low-budget hardware platforms. Moreover, convincing state-of-the-art studies reveal that artifacts can appear in highly textured areas of the image. In this paper, we propose a local adaptive spatial super-resolution (LASSR) method to address this limitation. LASSR is a two-step SR method combining machine-learning-based texture analysis with a fast interpolation method that performs pixel-by-pixel SR. We evaluate the method both quantitatively, using standard image metrics, and perceptually, through a psycho-visual assessment. A first FPGA-based implementation of the proposed method is then presented. It performs high-quality 2K-to-4K super-resolution video at 16 fps using only 13% of the FPGA capacity, opening the way to more than 60 fps by running several parallel instances of the LASSR code on the FPGA.
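
To make the two-step idea concrete, the sketch below illustrates locally adaptive upscaling in Python. It is not the authors' LASSR implementation: a simple local-variance threshold stands in for the paper's machine-learning texture classifier, and bicubic versus Lanczos resampling stands in for the paper's fast pixel-by-pixel interpolation; the function name, window size, and threshold are illustrative assumptions.

```python
# Minimal sketch of locally adaptive super-resolution (illustrative only,
# not the LASSR method from the paper).
import cv2
import numpy as np

def local_adaptive_upscale(img_gray, scale=2, var_threshold=100.0, window=7):
    """Upscale a grayscale image, choosing the interpolation kernel per pixel
    from a texture map."""
    img = img_gray.astype(np.float32)

    # Step 1: crude texture map based on local variance over a small window.
    # This is a placeholder for the ML-based texture analysis described in the paper.
    mean = cv2.blur(img, (window, window))
    mean_sq = cv2.blur(img * img, (window, window))
    variance = mean_sq - mean * mean
    textured = (variance > var_threshold).astype(np.float32)

    # Step 2: two candidate upscalings, blended pixel by pixel according to
    # the texture map (smooth regions -> bicubic, textured regions -> Lanczos).
    h, w = img.shape
    size = (w * scale, h * scale)
    smooth_up = cv2.resize(img, size, interpolation=cv2.INTER_CUBIC)
    sharp_up = cv2.resize(img, size, interpolation=cv2.INTER_LANCZOS4)
    mask_up = cv2.resize(textured, size, interpolation=cv2.INTER_NEAREST)

    out = mask_up * sharp_up + (1.0 - mask_up) * smooth_up
    return np.clip(out, 0, 255).astype(np.uint8)

# Example usage (hypothetical file name):
# hr = local_adaptive_upscale(cv2.imread("frame.png", cv2.IMREAD_GRANYSCALE := cv2.IMREAD_GRAYSCALE), scale=2)
```

The per-pixel blend is what makes the approach "local": each output pixel receives the kernel suited to its texture class, which is also what makes the method amenable to a streaming, pixel-parallel FPGA implementation.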
