Abstract

Denoising is a traditional but still challenging problem in signal processing. Reducing the noise in images and videos captured by digital sensors is receiving increasing attention, owing to the shrinking size of today’s image sensors and the drive toward ever higher resolutions. A vast amount of research has been conducted to solve the complex problem of separating noise from the true signal. The widespread assumption of additive white Gaussian noise (AWGN) in readily processed image data, however, has led to algorithms that fail on real camera data. This shows how crucial the underlying assumptions and the chosen quality metrics are for reaching results that are convincing on real data and for real people. In this chapter, we discuss the properties of real camera noise, from sensor data up to human perception. First, we address how test data is generated and review the noise characteristics of a real single-sensor camera. Real camera noise is fundamentally different from AWGN: it is spatially and chromatically correlated, signal dependent, and its probability distribution is not necessarily Gaussian. Second, we address the challenges of evaluating denoising results based on metrics. Instead of rating an algorithm with a metric like PSNR, on which the latest benchmarks are still based, a more meaningful metric is required. We present the results of perception tests that investigated the visibility of spatiotemporal noise as it occurs in digital video. Incorporating these results into a perceptual metric could enable a reliable denoising evaluation with respect to the human perception of visual quality.
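The contrast between AWGN and signal-dependent real camera noise can be illustrated with a small sketch. The snippet below is not the chapter's model; it is a minimal, commonly used heteroscedastic (Poisson-Gaussian-style) approximation in which the noise variance grows with the signal level, shown alongside the PSNR metric the abstract refers to. The parameter values `a` and `b` are illustrative assumptions.

```python
import numpy as np


def add_awgn(img, sigma=0.05, seed=0):
    """AWGN: noise variance is constant, independent of the signal."""
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, sigma, img.shape)


def add_signal_dependent_noise(img, a=0.01, b=1e-4, seed=0):
    """Heteroscedastic sketch: noise variance a*img + b rises with intensity,
    loosely mimicking the Poisson (shot) + Gaussian (read) noise of a sensor.
    The values of a and b are illustrative, not measured."""
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, 1.0, img.shape) * np.sqrt(a * img + b)


def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB (the metric criticized above)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak**2 / mse)


# Synthetic gradient image in [0, 1]: dark at the top, bright at the bottom.
img = np.linspace(0.0, 1.0, 256 * 256).reshape(256, 256)
noisy_awgn = add_awgn(img)
noisy_sd = add_signal_dependent_noise(img)
```

For the signal-dependent model, the residual noise in the bright rows has visibly higher variance than in the dark rows, whereas the AWGN residual is uniform everywhere; a single global PSNR number hides exactly this spatial difference, which is one reason purely numeric metrics can disagree with perceived quality.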
