Abstract

Steady advances have been made in the design of micro artificial intelligence for resource-limited hardware. A high-resolution (HR) image reconstruction module is indispensable for edge video-analytics chips and devices. This paper proposes a low-cost, learning-based interpolation method for HR image reconstruction. The proposed method generates reconstructed pixels by processing reference pixels with optimal weights that are pre-trained by solving the minimum mean square error (MMSE) problem on real images. To reduce the number of computation units and the storage required for the learned weights, a cross-directional interpolation architecture comprising a vertical kernel and a horizontal kernel is adopted. Moreover, a one-dimensional feature discriminator is proposed to efficiently improve the quality of the up-scaled images. The main benefit of the proposed method is that it produces high-quality images with only a small number of computation units. The hardware architecture was implemented on a Xilinx UltraScale+ ZCU102 field-programmable gate array (FPGA) and as an application-specific integrated circuit (ASIC) in TSMC’s 0.13-μm technology. The ASIC implementation required only approximately 60K gates and 50 KB of memory. The experimental results indicate that the average peak signal-to-noise ratio of the up-scaled images reached 35.92 dB on the Set-5 dataset. The throughput was at least 1000 Mpixels/s on the FPGA and 1200 Mpixels/s on the ASIC, indicating that the proposed hardware can handle target resolutions higher than 4K in real time.
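For illustration, the sketch below shows the two ideas named in the abstract in plain NumPy: 1-D interpolation weights obtained by solving the linear MMSE problem (via the empirical normal equations) on training samples, and a cross-directional 2x up-scaling pass that applies a vertical kernel followed by a horizontal kernel. The function names (`mmse_weights`, `upscale_2x`), the 4-tap even-length kernels, and the fixed 2x scale factor are illustrative assumptions, not the paper's specification; the fixed-point hardware datapath and the one-dimensional feature discriminator are not modeled here.

```python
import numpy as np

def mmse_weights(refs, targets):
    """Solve the linear MMSE problem w = argmin E[(t - w^T r)^2].
    refs: (N, K) reference-pixel vectors; targets: (N,) ground-truth HR pixels.
    The empirical normal equations (R w = p) give the optimal weights."""
    R = refs.T @ refs          # autocorrelation of the reference pixels
    p = refs.T @ targets       # cross-correlation with the target pixel
    return np.linalg.solve(R, p)

def upscale_2x(img, w_v, w_h):
    """Cross-directional 2x interpolation (assumed even-length kernels):
    the vertical kernel fills the new rows, then the horizontal kernel
    fills the new columns of the vertically up-scaled image."""
    h, w = img.shape
    k = len(w_v)
    # Vertical pass: one new pixel between each pair of original rows.
    tall = np.zeros((2 * h, w))
    tall[0::2] = img
    padded = np.pad(img, ((k // 2, k // 2), (0, 0)), mode="edge")
    for i in range(h):
        # k vertical reference pixels centred on the gap after row i
        tall[2 * i + 1] = w_v @ padded[i + 1 : i + 1 + k, :]
    # Horizontal pass: same idea along the rows.
    k = len(w_h)
    out = np.zeros((2 * h, 2 * w))
    out[:, 0::2] = tall
    padded = np.pad(tall, ((0, 0), (k // 2, k // 2)), mode="edge")
    for j in range(w):
        out[:, 2 * j + 1] = padded[:, j + 1 : j + 1 + k] @ w_h
    return out

if __name__ == "__main__":
    # Toy usage: fit 4-tap weights on synthetic samples, then up-scale.
    rng = np.random.default_rng(0)
    refs = rng.random((1000, 4))
    targets = refs @ np.array([-0.1, 0.6, 0.6, -0.1])
    w = mmse_weights(refs, targets)
    hr = upscale_2x(np.arange(64.0).reshape(8, 8), w, w)  # (16, 16)
```

Because each pass uses only a short 1-D kernel, the per-pixel cost is a handful of multiply-accumulate units per direction, which is consistent with the small gate count the abstract reports for the hardware design.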
