Abstract

In this paper, we propose an adaptive approach for modeling video signals through localized learning in the spatio-temporal domain. Unlike existing models based on explicit motion estimation, ours exploits temporal redundancy via a least-squares filter whose coefficients are trained from a local spatio-temporal window. Both the filter support and the training window can be made adaptive to the motion characteristics of the video. Such spatio-temporal adaptive localized learning (STALL) can be viewed as an implicit motion estimation procedure and is particularly suitable for modeling video material with slow, rigid motion. Under the new framework, we consider applications of STALL to video denoising, video super-resolution, and video coding. Preliminary experimental results are highly encouraging and demonstrate the potential of the new model.
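As a rough illustration of the idea (a minimal sketch, not the authors' implementation; the function and parameter names below are hypothetical), the coefficients of such a least-squares filter can be fit from a local spatio-temporal training window around each pixel, then applied to predict that pixel from the previous frame:

```python
import numpy as np

def stall_predict(frames, t, y, x, support=1, train=3):
    """Predict pixel (t, y, x) with a least-squares filter whose
    coefficients are trained on a local spatio-temporal window
    (illustrative sketch of the STALL idea).

    frames  -- video cube of shape (T, H, W)
    support -- half-width of the filter support in frame t-1
    train   -- half-width of the local training window in frame t
    """
    T, H, W = frames.shape
    k = 2 * support + 1                       # filter support size (k x k)
    rows, targets = [], []
    # Build the training set: each neighbor pixel in frame t is predicted
    # from its k x k co-located patch in frame t-1.
    for yy in range(y - train, y + train + 1):
        for xx in range(x - train, x + train + 1):
            if (yy, xx) == (y, x):
                continue                      # hold out the pixel to predict
            if not (support <= yy < H - support and
                    support <= xx < W - support):
                continue                      # skip samples off the frame
            patch = frames[t - 1, yy - support:yy + support + 1,
                                  xx - support:xx + support + 1]
            rows.append(patch.ravel())
            targets.append(frames[t, yy, xx])
    A = np.asarray(rows, dtype=float)
    b = np.asarray(targets, dtype=float)
    # Least-squares filter coefficients from the local training window
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Apply the learned filter to the co-located patch of frame t-1
    patch = frames[t - 1, y - support:y + support + 1,
                          x - support:x + support + 1].ravel()
    return float(patch @ coeffs)
```

For purely translational (slow, rigid) motion the learned coefficients concentrate on the displaced sample, which is why this localized learning behaves like an implicit motion estimator: no motion vector is ever computed, yet the filter tracks the shift.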
