Abstract

Many applications now provide video matting functionality, and the accuracy of the resulting mattes is of great importance in practice. Existing video matting methods treat a video as a sequence of independent frames and perform frame-by-frame matting, so the composited video exhibits obvious flickering. We introduce a human video matting method that addresses this problem by exploiting the temporal information present in video: a recurrent structure propagates information across frames, improving both temporal coherence and matting quality. We train the network on segmentation and matting jointly, and feed the semantic segmentation results to the matting stage as an input. The method requires no auxiliary inputs such as a trimap or a pre-captured background image, so it can be widely applied in existing human matting applications. Extensive experiments show that our model outperforms MODNet on the standard evaluation metrics, improving MAD (Mean Absolute Difference) by 2.73, MSE (Mean Squared Error) by 1.83, Grad (Spatial Gradient) by 0.46, Conn (Connectivity) by 0.30, and dtSSD by 0.49. We also designed a simple, real-time, and user-friendly video matting system that makes it convenient for users to perform video matting.
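To make the architectural idea concrete, below is a minimal sketch (not the authors' released code) of a recurrent human matting network in PyTorch. The class and layer names (RecurrentMattingNet, ConvGRUCell) and the channel widths are hypothetical; the sketch only illustrates the two mechanisms the abstract describes: a recurrent state that carries temporal information across frames, and a coarse segmentation map concatenated with the RGB frame as an auxiliary input.

```python
# Hypothetical sketch of a recurrent video matting network; layer names
# and sizes are illustrative, not the paper's actual architecture.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell that propagates temporal state between frames."""
    def __init__(self, channels):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        if h is None:
            h = torch.zeros_like(x)  # no state before the first frame
        z, r = self.gates(torch.cat([x, h], dim=1)).chunk(2, dim=1)
        z, r = torch.sigmoid(z), torch.sigmoid(r)
        c = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * c

class RecurrentMattingNet(nn.Module):
    """Encoder -> ConvGRU -> decoder; input is RGB plus a 1-channel segmentation."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.gru = ConvGRUCell(feat)
        self.decoder = nn.Conv2d(feat, 1, 3, padding=1)  # alpha matte logits

    def forward(self, frames, seg_maps):
        # frames: (T, 3, H, W); seg_maps: (T, 1, H, W) coarse segmentation
        h, alphas = None, []
        for t in range(frames.shape[0]):
            x = self.encoder(torch.cat([frames[t:t+1], seg_maps[t:t+1]], dim=1))
            h = self.gru(x, h)                       # update temporal state
            alphas.append(torch.sigmoid(self.decoder(h)))
        return torch.cat(alphas, dim=0)              # (T, 1, H, W) alpha mattes
```

The evaluation metrics named above can likewise be sketched. The snippet below assumes alpha mattes are NumPy arrays in [0, 1] and follows the standard matting-benchmark definitions; normalization constants vary between papers, so treat these as illustrative rather than the authors' exact evaluation code.

```python
import numpy as np

def mad(pred, gt):
    """Mean Absolute Difference between predicted and ground-truth mattes."""
    return np.abs(pred - gt).mean()

def mse(pred, gt):
    """Mean Squared Error between predicted and ground-truth mattes."""
    return ((pred - gt) ** 2).mean()

def dtssd(pred_seq, gt_seq):
    """Temporal coherence: SSD between frame-to-frame changes of the
    predicted and ground-truth matte sequences, shape (T, H, W)."""
    d_pred = pred_seq[1:] - pred_seq[:-1]
    d_gt = gt_seq[1:] - gt_seq[:-1]
    return np.sqrt(((d_pred - d_gt) ** 2).mean(axis=(1, 2))).mean()
```

A lower dtSSD indicates that the predicted matte changes over time in the same way the ground truth does, which is exactly the flicker-reduction property the recurrent structure is meant to provide.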
