Abstract
Video analyses based on edge computing typically require high-resolution video images, while, in practice, the resolution of captured video images may not be high enough. Super-resolution techniques are thus a possible solution, reconstructing high-resolution images from low-resolution inputs. Sparse-coding-based super-resolution methods are well known for their efficiency. However, current sparse-coding-based methods suffer from two major problems. First, the sparse coefficient of a low-resolution patch is assumed to be the same as that of its corresponding high-resolution patch, an assumption too strict to handle diverse patterns. Second, current methods learn only one pair of high-resolution and low-resolution dictionaries; since real-world image patches are highly diverse, a single pair of dictionaries can hardly cover all possible patches. In this paper, to overcome these two issues, we propose a super-resolution method that: 1) relaxes the assumption by linearly mapping the sparse coefficient of a low-resolution patch to that of its high-resolution counterpart and 2) minimizes super-resolution errors by jointly partitioning training patches into several clusters and learning a dictionary pair per cluster. Experimental results validate that our algorithm achieves more faithful reconstructions.
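To make the first problem concrete, the following is a minimal sketch of the classical coupled-dictionary baseline the abstract critiques: the sparse code of a low-resolution patch (solved here with ISTA, an l1-regularized iterative shrinkage method) is reused unchanged with the high-resolution dictionary. All dictionary sizes, patch sizes, and the `sparse_code` helper are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sparse_code(y, D, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 via ISTA (illustrative)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D_h = rng.standard_normal((25, 64))        # high-res dictionary (5x5 patches, toy)
D_l = rng.standard_normal((9, 64))         # low-res dictionary (3x3 patches, toy)
D_l /= np.linalg.norm(D_l, axis=0)         # normalize LR atoms

y_l = rng.standard_normal(9)               # one vectorized low-resolution patch
a = sparse_code(y_l, D_l)                  # sparse code from the LR patch
x_h = D_h @ a                              # HR patch reconstructed with the SAME code
```

The last line is exactly the strict shared-coefficient assumption the paper relaxes: instead of reusing `a` directly, the proposed method maps it linearly into a high-resolution code before reconstruction.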