Abstract

Video motion magnification is the task of making subtle motions visible. Such motions often occur but remain invisible to the naked eye, e.g., slight deformations in an athlete's muscles, small vibrations in objects, microexpressions, and chest movement during breathing. Magnifying these small motions has enabled applications such as posture-deformity detection, microexpression recognition, and the study of structural properties. State-of-the-art (SOTA) methods have fixed computational complexity, which makes them less suitable for applications with differing time constraints, e.g., real-time respiratory rate measurement and microexpression classification. To address this, we propose a knowledge distillation-based latency-aware differentiable architecture search (KL-DNAS) method for video motion magnification. To reduce memory requirements and improve denoising characteristics, we use a teacher network to search the network by parts through knowledge distillation (KD). Furthermore, we search over different receptive fields and multi-feature connections for individual layers. We also propose a novel latency loss to jointly optimize the target latency constraint and output quality. The resulting model is 2.8× smaller than the SOTA method and achieves better motion magnification with fewer distortions. Code: https://github.com/jasdeep-singh-007/KL-DNAS.
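To make the joint objective concrete, below is a minimal sketch of a latency-aware search loss, assuming a differentiable latency estimate (e.g., a softmax-weighted sum of per-operation latencies); the exact KL-DNAS formulation in the paper may differ, and the function and parameter names here are illustrative.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only (not the paper's exact loss): combine an output-quality
# term with a penalty that activates when the candidate architecture's estimated
# latency exceeds the target budget.
def latency_aware_loss(magnified, reference, est_latency, latency_budget, lam=0.1):
    # Quality term: L1 reconstruction error between magnified and reference frames.
    quality = F.l1_loss(magnified, reference)
    # Latency term: zero when within budget, linear penalty when over budget.
    # `est_latency` is assumed to be a differentiable tensor so the penalty
    # back-propagates to the architecture parameters.
    over_budget = F.relu(est_latency - latency_budget)
    return quality + lam * over_budget
```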
