Abstract

Modeling image and video distortions is an important but difficult problem of great consequence to numerous and diverse image processing and computer vision applications. While many statistical models have been proposed to synthesize different types of image noise, real-world distortions are far more difficult to emulate. Toward advancing progress on this problem, we consider distortion generation as an image-to-image transformation problem and solve it via a data-driven approach. Specifically, we use a conditional generative adversarial network (cGAN), which we train to learn four kinds of realistic distortions. We experimentally demonstrate that the learned model can reproduce the perceptual characteristics of several types of distortion.
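For reference, a standard conditional GAN objective for image-to-image transformation is given below; this is the generic formulation (as in Mirza and Osindero, and Isola et al.), not necessarily the exact loss used in this paper. Here x denotes the conditioning (pristine) input image, z a random noise vector, y the target (distorted) image, G the generator, and D the discriminator:

\[
\min_G \max_D \; \mathcal{L}_{\mathrm{cGAN}}(G, D) \;=\; \mathbb{E}_{x,y}\big[\log D(x, y)\big] \;+\; \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]
\]

Image-to-image cGANs of this kind commonly augment the adversarial term with a reconstruction penalty, for example an L1 loss between G(x, z) and y, to encourage the generated output to stay close to the target.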
