Abstract

Video decolorization removes the color information from a video while preserving its perceivable content as faithfully as possible. Existing methods mainly apply image decolorization strategies frame by frame, which can be slow and produce temporally incoherent results. In this paper, we propose a video decolorization framework that enforces frame coherence and reduces decolorization time by reusing previously decolorized frames. It makes three main contributions. First, we define a decolorization proximity that measures the similarity of adjacent frames. Second, we propose three decolorization strategies, for frames with low, medium, and high proximity, to preserve the quality of each type of frame. Third, we propose a novel decolorization Gaussian mixture model that classifies frames by their decolorization proximity and assigns the appropriate strategy to each. We evaluate our results from three aspects: 1) qualitative comparison; 2) quantitative comparison; and 3) a user study. We apply the color contrast preserving ratio and C2G-SSIM to evaluate the quality of single-frame decolorization, and we propose a novel temporal coherence degree metric to evaluate the temporal coherence of the decolorized video. Compared with current methods, the proposed approach shows all-around better performance in time efficiency, temporal coherence, and quality preservation.
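The pipeline sketched above can be illustrated with a minimal example. Note the assumptions: the abstract does not give the paper's actual proximity formula, so a simple normalized frame-difference stands in for it, and fixed thresholds stand in for the learned Gaussian mixture model that the paper uses to classify frames; the strategy names are likewise placeholders.

```python
import numpy as np

def decolorization_proximity(prev_gray, cur_gray):
    """Illustrative proximity: 1 - normalized mean absolute difference.

    Stand-in for the paper's proximity measure: identical adjacent
    frames map to 1.0, maximally different frames to 0.0."""
    diff = np.abs(cur_gray.astype(np.float64) - prev_gray.astype(np.float64))
    return 1.0 - diff.mean() / 255.0

def assign_strategy(proximity, low=0.5, high=0.9):
    """Map proximity to one of three decolorization strategies.

    The paper classifies frames with a Gaussian mixture model; the
    fixed thresholds here are only for illustration."""
    if proximity < low:
        return "full"       # low proximity: decolorize from scratch
    elif proximity < high:
        return "partial"    # medium proximity: partially reuse prior result
    return "propagate"      # high proximity: reuse the previous frame's result

# Toy grayscale frames.
a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()                               # identical frame
c = np.full((4, 4), 228, dtype=np.uint8)   # large intensity shift

print(assign_strategy(decolorization_proximity(a, b)))  # propagate
print(assign_strategy(decolorization_proximity(a, c)))  # full
```

In a real decoder loop, the "propagate" branch is what saves time: the grayscale mapping computed for the previous frame is simply reused instead of being re-optimized.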
