Abstract

We propose an end-to-end model for high-resolution photographic enhancement trained with a self-supervised approach. Low-quality photos are generated from an unlabelled dataset of high-quality images. Our deep network is presented with pairs of degraded and reference images, and learns the parameters of six photographic filters that improve the degraded photos by making them resemble the high-quality references. Custom rendering layers apply the photographic filters and compute their derivatives during the forward training pass, so loss attribution can be performed during the backward pass. Our experiments confirm that loss functions based on feature-extraction networks achieve better quality than pixel-comparison metrics. To mimic professional editing applications, our filters are based on curve mapping and alpha blending, and they are rendered in a linear RGB colorspace for mathematical accuracy. At inference time, the custom rendering layers are removed, so the model's output is simply the set of filter parameters that best improve the input image. We achieve high-resolution results by applying the predicted filters to the full-resolution photo captured by the user, even though training and prediction operate on downscaled thumbnails. Our approach has been validated in a professional mobile application.
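
To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of one differentiable rendering layer of the kind the abstract describes: a hypothetical filter that alpha-blends the input with a power-curve mapping in linear RGB, and whose forward pass also returns analytic derivatives with respect to its parameters so a loss gradient can be chained through it. The filter form, parameter names (`alpha`, `g`), and MSE loss are illustrative assumptions; the sRGB decoding formula is the standard one.

```python
import numpy as np

def srgb_to_linear(c):
    # Standard sRGB decoding (IEC 61966-2-1): piecewise linear / power curve,
    # used so the filter operates in linear RGB as the abstract describes.
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def render_filter(x, alpha, g):
    """Hypothetical filter: alpha-blend the image with a power curve x**g.
    Returns the rendered output plus analytic derivatives w.r.t. the two
    filter parameters, computed during the forward pass."""
    xg = x ** g                                    # curve-mapped image
    out = (1.0 - alpha) * x + alpha * xg           # alpha blend
    d_alpha = xg - x                               # d(out)/d(alpha)
    d_g = alpha * xg * np.log(np.clip(x, 1e-6, None))  # d(out)/d(g)
    return out, d_alpha, d_g

def mse_and_grads(x, ref, alpha, g):
    """Pixel-wise MSE against a reference image, with parameter gradients
    obtained by chaining the loss derivative through the stored Jacobians."""
    out, d_alpha, d_g = render_filter(x, alpha, g)
    diff = out - ref
    loss = np.mean(diff ** 2)
    grad_alpha = np.mean(2.0 * diff * d_alpha)     # chain rule
    grad_g = np.mean(2.0 * diff * d_g)
    return loss, grad_alpha, grad_g

# Toy usage: recover filter parameters by gradient descent on thumbnails.
rng = np.random.default_rng(0)
thumb = np.clip(rng.random((32, 32, 3)), 0.05, 0.95)   # stand-in thumbnail
reference, _, _ = render_filter(thumb, 0.7, 1.4)        # synthetic "good" image
alpha, g, lr = 0.0, 1.0, 0.5
for _ in range(200):
    loss, ga, gg = mse_and_grads(thumb, reference, alpha, g)
    alpha -= lr * ga
    g -= lr * gg
```

At inference, only the fitted parameters (`alpha`, `g`) would be kept and re-applied to the full-resolution capture; the rendering layer itself is discarded, mirroring the abstract's thumbnail-train / full-resolution-apply scheme. The paper's feature-based losses would replace the plain MSE shown here.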
