Abstract

In this paper, we propose a backlight dimming algorithm for videos that aims to achieve better video quality, measured by the video quality metric (VQM), under controlled power consumption. To avoid the costly computation of VQM at test time, a training procedure is performed to build a prediction model. For each training video, a curve of VQM results over different clipping points is first constructed to facilitate the subsequent search for the optimal clipping point. To build a prediction model for the optimal clipping point used in the testing stage, the optimal clipping point of each training video is associated with the spatial information (SI) and temporal information (TI) of that video; the association is modeled by a 2-D LOWESS (LOcally WEighted Scatterplot Smoothing) surface. In the testing phase, the SI and TI of each video are first computed and mapped through the LOWESS model to obtain the predicted optimal clipping point, which is then applied to all frames of the video. A generalized version of the proposed method is designed to achieve even greater power reduction. Experimental results show that the generalized proposed method achieves the best power reduction (17.9%), compared with the state-of-the-art methods I2GEC (10.5%), MGEC4 (11.2%), MGEC16 (8.7%), and SPBD (8.7%), while its video quality is maintained with almost no visible difference from the original video. It is thus demonstrated that the proposed work achieves a good balance between video quality and power consumption.
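A minimal sketch of the testing phase described above, assuming 8-bit luma frames given as NumPy arrays and hypothetical training triples (train_si, train_ti, train_cp) produced by the training procedure; the degree-0 locally weighted average below stands in for the full 2-D LOWESS surface fit and is not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def spatial_information(frames):
    # SI (per ITU-T P.910): max over frames of the std of the Sobel-filtered luma.
    si = []
    for f in frames:
        gx = ndimage.sobel(f.astype(float), axis=1)
        gy = ndimage.sobel(f.astype(float), axis=0)
        si.append(np.std(np.hypot(gx, gy)))
    return max(si)

def temporal_information(frames):
    # TI (per ITU-T P.910): max over frames of the std of successive frame differences.
    ti = [np.std(frames[i].astype(float) - frames[i - 1].astype(float))
          for i in range(1, len(frames))]
    return max(ti)

def predict_clipping_point(si, ti, train_si, train_ti, train_cp, span=0.5):
    # Evaluate a LOWESS-style surface at (si, ti): tricube-weight the nearest
    # `span` fraction of training samples and return the weighted average of
    # their optimal clipping points (a simplified, degree-0 local fit).
    pts = np.column_stack([train_si, train_ti])
    d = np.linalg.norm(pts - np.array([si, ti]), axis=1)
    k = max(2, int(span * len(d)))
    bandwidth = np.sort(d)[k - 1] + 1e-12
    w = np.clip(1.0 - (d / bandwidth) ** 3, 0.0, None) ** 3
    return float(np.sum(w * train_cp) / np.sum(w))

def apply_dimming(frames, clip_point, max_level=255):
    # Clip pixels above the predicted point, rescale to full range, and lower
    # the backlight proportionally so perceived brightness is roughly preserved.
    backlight_ratio = clip_point / max_level
    dimmed = [np.clip(f.astype(float), 0, clip_point) * (max_level / clip_point)
              for f in frames]
    return dimmed, backlight_ratio
```

In use, SI and TI would be computed once per test video, fed to predict_clipping_point, and the same predicted clipping point applied to every frame of that video, as the paper describes.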
