Abstract

Video motion magnification reveals subtle changes in objects for applications in industry, healthcare, sports, and more. Most state-of-the-art (SOTA) methods rely on hand-crafted bandpass filters, which require prior knowledge of the motion frequency band, produce ringing artifacts, and achieve only small magnification. Others use deep-learning-based techniques for higher magnification, but their output suffers from artificially induced motion, distortions, and blurriness. Further, SOTA methods are computationally complex, which makes them less suitable for real-time applications. To address these problems, we propose a simple yet effective deep-learning-based solution for motion magnification. The proposed method uses feature sharing and an appearance encoder for better motion magnification with fewer distortions and artifacts. Additionally, proxy-model-based training is proposed to reduce the magnification of noise and other unwanted changes. A computationally lightweight model (~0.12 M parameters) is proposed alongside the base model. The performance of the proposed models is evaluated qualitatively and quantitatively against the SOTA methods. Results demonstrate the effectiveness of the proposed lightweight and base models over existing SOTA methods.
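For background on the hand-crafted filtering approach the abstract contrasts against, the sketch below illustrates the classic Eulerian idea: bandpass-filter each pixel's intensity over time around a known motion frequency, amplify that band, and add it back. This is a minimal illustration of the general technique, not the paper's method; the frequency band, frame rate, and amplification factor `alpha` are assumed values chosen for the example, and a real system would apply this per pixel (often per pyramid level) across a video.

```python
import numpy as np

def magnify_motion(trace, fs, f_lo, f_hi, alpha):
    """Amplify temporal variation in the band [f_lo, f_hi] Hz.

    trace: one pixel's intensity over time; fs: frame rate.
    An ideal FFT bandpass stands in for the hand-crafted filter.
    """
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    filtered = np.fft.irfft(spectrum * band, n=len(trace))
    return trace + alpha * filtered  # magnified signal

# Synthetic pixel trace: constant brightness plus a subtle 2 Hz motion.
fs = 30.0                                  # assumed frame rate
t = np.arange(0, 4, 1.0 / fs)
subtle = 0.01 * np.sin(2 * np.pi * 2.0 * t)
trace = 0.5 + subtle

# Amplify the band around 2 Hz; the constant component is untouched.
out = magnify_motion(trace, fs, f_lo=1.5, f_hi=2.5, alpha=20.0)
```

Note that this approach presumes the motion band is known in advance, which is exactly the prior-information requirement the abstract criticizes; an overly sharp filter also introduces the ringing artifacts mentioned above.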

