Abstract

The well-known method of median filtering is used both within a wide range of application frameworks and as a standalone filter. Small-window median filters can strongly attenuate salt-and-pepper or additive Gaussian noise while minimizing edge blurring. Large-window filters are also used, for example to estimate the background of an image. In principle, window size should no longer be an issue, since a constant-time algorithm and several implementations, including GPU-based ones, have been proposed in recent years. Unfortunately, none of these constant-time implementations manages to fully exploit the capabilities of modern GPUs, and the throughput of large-window median filters therefore remains far below the peak throughput that recent GPU models allow. This paper aims at showing that a separable approximation of the 2D median filter often yields better denoising results than the full 2D filter. Statistical and theoretical analyses are conducted to demonstrate and explain this fact, which had so far gone unnoticed. It is confirmed by experiments on a dataset of 10,000 images corrupted by different levels of salt-and-pepper noise. Separable and full 2D median filter algorithms are compared using several metrics, notably PSNR and MSSIM. In addition, a GPU implementation of 2D separable median filters is proposed. This implementation outputs up to 125 billion pixels per second on a recent Volta V100, significantly outperforming existing implementations such as Nvidia's NPP library or Green's code, and is to date the fastest median filtering solution.
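To make the contrast concrete, the separable approximation replaces the single k × k median with two successive 1D medians, one along rows and one along columns. The sketch below illustrates the two variants in Python with SciPy; it is not the paper's CUDA implementation, and the window size, image size, and noise level are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import median_filter

def full_2d_median(img, k):
    # Full 2D median: one pass over a k x k window.
    return median_filter(img, size=(k, k))

def separable_median(img, k):
    # Separable approximation: a 1D median along rows,
    # followed by a 1D median along columns.
    rows = median_filter(img, size=(1, k))
    return median_filter(rows, size=(k, 1))

# Toy usage: an image corrupted by salt-and-pepper noise (illustrative only).
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(256, 256))
mask = rng.uniform(size=img.shape)
noisy = img.copy()
noisy[mask < 0.05] = 0.0   # pepper
noisy[mask > 0.95] = 1.0   # salt

denoised_full = full_2d_median(noisy, 5)
denoised_sep = separable_median(noisy, 5)
```

The separable variant sorts two windows of k elements instead of one window of k² elements, which is also what makes it attractive for a high-throughput GPU implementation.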
