Abstract

The paper proposes a frame-wise filtering method based on Convolutional Neural Networks (CNNs) for enhancing the quality of decoded HEVC video. A deep neural network architecture is introduced for post-filtering entire intra-coded videos, and a training scheme employing frame-size patches is used to train the network. The proposed method filters the luminance channel separately from the pair of chrominance channels. A novel patch-generation paradigm is proposed in which, for each color channel, a corresponding mode map is derived from the HEVC intra-prediction mode indices and the block segmentation. The proposed CNN-based filter serves as an alternative to the traditional HEVC built-in in-loop filtering module for intra-coded frames. Experimental results on standard test sequences show that the proposed method outperforms the HEVC standard with an average BD-rate saving of 11.1% and an average BD-PSNR improvement of 0.602 dB. Compared with state-of-the-art machine-learning-based methods, the average relative improvement in ΔPSNR is around 105% at QP = 42 and around 85% at QP = 32.
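
To make the described pipeline concrete, the sketch below shows one plausible way a frame-wise CNN post-filter for the luminance channel could be set up, with the decoded luma frame and its intra-prediction mode map stacked as input channels. This is a minimal illustrative assumption, not the authors' actual architecture; the class name, layer count, and channel width are hypothetical, and only the overall idea (frame-size inputs, mode maps as side information, luma filtered separately from chroma) follows the abstract.

```python
# Minimal, hypothetical sketch of a frame-wise CNN post-filter for decoded
# HEVC luminance frames. All names and hyper-parameters are illustrative
# assumptions, not the architecture proposed in the paper.
import torch
import torch.nn as nn

class LumaPostFilter(nn.Module):
    def __init__(self, channels: int = 64, num_layers: int = 8):
        super().__init__()
        # Input: decoded luma frame (1 channel) + intra-mode map (1 channel).
        layers = [nn.Conv2d(2, channels, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        # Output: a residual correction for the luma channel.
        layers += [nn.Conv2d(channels, 1, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, decoded_luma: torch.Tensor, mode_map: torch.Tensor) -> torch.Tensor:
        # Residual learning: the network predicts a correction that is added
        # back to the decoded frame.
        x = torch.cat([decoded_luma, mode_map], dim=1)
        return decoded_luma + self.body(x)

if __name__ == "__main__":
    # One frame-size "patch" (batch of 1) at 1080p, values normalized to [0, 1].
    decoded = torch.rand(1, 1, 1080, 1920)  # decoded luma frame
    modes = torch.rand(1, 1, 1080, 1920)    # per-pixel intra-prediction mode map
    filtered = LumaPostFilter()(decoded, modes)
    print(filtered.shape)  # torch.Size([1, 1, 1080, 1920])
```

A chroma counterpart would follow the same pattern with the two chrominance channels (and their mode maps) processed jointly, separate from the luma network, as the abstract indicates.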
