Power-hungry displays and limited battery life in electronic devices have motivated the concept of “green computing,” which entails a reduction in power consumption. One common green-computing approach is power-constrained contrast enhancement (PCCE), yet it is challenging because it introduces noticeable local intensity suppression in images. This paper develops an image-quality-lossless end-to-end learning network, called the deep battery saver, to achieve power savings in emissive displays, i.e., to produce power-saved images with high perceptual quality and low power consumption. Built upon an end-to-end network operating on the displayed image, we propose a variational loss function that simultaneously enhances visual quality and suppresses power consumption. The basic idea is to integrate both high-level perceptual losses and low-level pixel losses in a deep residual convolutional neural network (CNN) through a devised variational loss function with strong human perceptual consistency. This deep residual CNN yields a visually pleasing image representation while suppressing power consumption. Experimental results demonstrate the superiority of our deep battery saver over existing PCCE methods.
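The abstract describes a loss that trades off perceptual quality, pixel fidelity, and display power. As a rough illustration only (not the paper's actual variational loss), the sketch below combines three terms: a pixel MSE, a gradient-based structural term standing in for the deep perceptual losses, and a mean-intensity power proxy, which is reasonable for emissive displays where panel power grows with pixel brightness. All function names and weights here are hypothetical.

```python
import numpy as np

def power_term(img):
    # For emissive (e.g., OLED) displays, power grows roughly with
    # pixel intensity, so mean intensity is a simple power proxy.
    return float(img.mean())

def pixel_loss(out, ref):
    # Low-level fidelity: mean squared error in pixel space.
    return float(np.mean((out - ref) ** 2))

def perceptual_loss(out, ref):
    # Stand-in for a learned perceptual metric: compare image
    # gradients (edge structure) instead of deep CNN features.
    def grads(x):
        return np.diff(x, axis=1), np.diff(x, axis=0)
    ox, oy = grads(out)
    rx, ry = grads(ref)
    return float(np.mean((ox - rx) ** 2) + np.mean((oy - ry) ** 2))

def battery_saver_loss(out, ref, w_pix=1.0, w_perc=1.0, w_pow=0.1):
    # Weighted combination: keep the output faithful and structurally
    # similar to the reference while penalizing display power.
    return (w_pix * pixel_loss(out, ref)
            + w_perc * perceptual_loss(out, ref)
            + w_pow * power_term(out))

ref = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # toy grayscale "image"
dimmed = 0.8 * ref                              # uniformly dimmed candidate
print(battery_saver_loss(dimmed, ref))
```

Under this formulation, a trained network would learn where dimming is perceptually cheap, rather than dimming uniformly as in this toy example.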