Abstract

In this paper, we propose an unsupervised learning-based multi-frame video super-resolution (SR) approach via a decision tree model (DTSRV). This novel approach exploits the temporal redundancy and coherence across consecutive video frames. Motion estimation is applied between consecutive frames to form concatenated motion-compensated patches. Low-resolution (LR)/high-resolution (HR) patch pairs are then formed as the training input for the decision trees. After the classification performed by the decision trees, a linear regression model is learned to map the concatenated LR patches to the HR patches. Experimental results show that the approach outperforms state-of-the-art model-based algorithms by an average PSNR gain of 0.97 dB while running much faster. It also achieves results that are 1.4 dB better on large video sizes than frame-by-frame image SR based on decision tree learning. To our knowledge, this is the first reported use of comprehensive random tree/forest structures for video SR. The scheme currently uses only two neighboring frames yet already achieves good results, which demonstrates its efficiency for real-time applications. Our analysis also indicates promising possibilities and advantages for future development.
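The sketch below illustrates the kind of pipeline the abstract describes; it is not the authors' implementation. Dense optical flow (here OpenCV's Farneback method, as an assumed choice) warps the previous LR frame onto the current one, the current and warped LR patches are concatenated and paired with the corresponding HR patch, and a tree ensemble regresses HR patches from the concatenated input. The function names, patch size, scale factor, and the use of a random forest in place of the paper's per-leaf linear regressors are all illustrative assumptions.

```python
# Minimal sketch of a multi-frame, tree-based SR training pipeline (illustrative only).
# Assumes grayscale uint8 LR/HR frames, OpenCV for motion estimation,
# and scikit-learn trees; all names and parameters are hypothetical choices.
import numpy as np
import cv2
from sklearn.ensemble import RandomForestRegressor

def motion_compensate(prev_lr, curr_lr):
    """Warp the previous LR frame toward the current one using dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_lr, curr_lr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_lr.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_lr, map_x, map_y, cv2.INTER_LINEAR)

def build_training_pairs(lr_frames, hr_frames, patch=5, scale=2):
    """Form concatenated motion-compensated LR patches paired with HR patches."""
    X, y = [], []
    for t in range(1, len(lr_frames)):
        warped = motion_compensate(lr_frames[t - 1], lr_frames[t])
        h, w = lr_frames[t].shape
        for i in range(0, h - patch, patch):
            for j in range(0, w - patch, patch):
                lr_concat = np.concatenate([
                    lr_frames[t][i:i + patch, j:j + patch].ravel(),
                    warped[i:i + patch, j:j + patch].ravel()])
                hr_patch = hr_frames[t][i * scale:(i + patch) * scale,
                                        j * scale:(j + patch) * scale].ravel()
                X.append(lr_concat)
                y.append(hr_patch)
    return np.asarray(X, dtype=np.float32), np.asarray(y, dtype=np.float32)

# A random forest regressor stands in for the paper's trees with per-leaf
# linear regressors; averaging over leaves is a simplification of that scheme.
# X, y = build_training_pairs(lr_frames, hr_frames)
# forest = RandomForestRegressor(n_estimators=10, max_depth=15).fit(X, y)
```

At test time, the same concatenated LR features would be pushed through the trained trees and the predicted HR patches tiled back into the output frame; only two neighboring frames are needed per prediction, consistent with the real-time emphasis in the abstract.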
