Abstract
Due to power-hungry displays and the limited battery life of electronic devices, the concept of "green computing," which entails reducing power consumption, has been proposed. A common green-computing technique is power-constrained contrast enhancement (PCCE), but it is challenging because it can introduce noticeable local intensity suppression in images. This paper develops an image-quality-lossless end-to-end learning network, called the deep battery saver, to achieve power savings in emissive displays, i.e., to produce power-saved images with high perceptual quality and low power consumption. Built upon an end-to-end network operating on the displayed image, we propose a variational loss function that simultaneously enhances visual quality and suppresses power consumption. The basic idea is to integrate both high-level perceptual losses and low-level pixel losses in a deep residual convolutional neural network (CNN) through a devised variational loss function with strong consistency with human perception. This deep residual CNN yields a visually pleasing image representation while suppressing power consumption. Experimental results demonstrate the superiority of our deep battery saver over existing PCCE methods.
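The abstract describes a loss that combines a high-level perceptual term, a low-level pixel term, and a power-consumption term. The following is a minimal numpy sketch of such a combined objective, not the paper's actual formulation: the weights `w_perc`, `w_pix`, `w_pow`, the exponent `gamma`, and the use of a simple feature-space MSE as a stand-in for a perceptual loss are all illustrative assumptions. The power term reflects the common model that emissive (e.g., OLED) display power grows roughly with pixel intensity raised to a display-dependent exponent.

```python
import numpy as np

def power_term(img, gamma=2.2):
    # Rough emissive-display power model: power grows with
    # intensity**gamma, averaged over pixels (gamma is illustrative).
    return float(np.mean(img ** gamma))

def pixel_loss(out, ref):
    # Low-level pixel fidelity: mean squared error to the reference image.
    return float(np.mean((out - ref) ** 2))

def battery_saver_loss(out, ref, feat_out, feat_ref,
                       w_perc=1.0, w_pix=1.0, w_pow=0.1):
    # feat_out / feat_ref stand in for deep features (e.g., from a
    # pretrained CNN); MSE between them is a simple perceptual proxy.
    perc = float(np.mean((feat_out - feat_ref) ** 2))
    # Weighted sum: perceptual quality + pixel fidelity + power penalty.
    return w_perc * perc + w_pix * pixel_loss(out, ref) + w_pow * power_term(out)
```

Minimizing this objective over the network output trades image fidelity against display power: the power term pushes intensities down, while the perceptual and pixel terms resist visible degradation.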