Abstract

In cinema it is standard practice to improve the appearance of images by adding noise that simulates film grain. This is computationally very costly, so it is only done in post-production and not on set. It is also limiting, because artists cannot truly experiment with the noise or introduce novel looks. Furthermore, video compression requires a higher bit rate when the source material has film grain or any other type of high-frequency texture. In this work, we introduce a method for adding texture to digital cinema that aims to solve these problems. The proposed algorithm is based on a model of retinal noise, which gives images processed by our method a natural appearance. This “retinal grain” serves a double purpose. The first is aesthetic: its parameters allow the resulting texture appearance to vary widely, making it an artistic tool for cinematographers. Results are validated through psychophysical experiments in which observers, including cinema professionals, prefer our method over film grain synthesis methods from academia and industry. The second purpose of the retinal noise emulation is to improve the quality of compressed video by masking compression artifacts, which makes it possible to lower the encoding bit rate while preserving image quality, or to improve image quality at a fixed bit rate. The effectiveness of our approach for improving coding efficiency, with average bit rate savings of 22.5%, has been validated through psychophysical experiments using professional cinema content shot in 4K and color-graded, where the amount of retinal noise was selected by a motion picture specialist based solely on aesthetic preference.

Highlights

  • After the digital cinema revolution, many directors and cinematographers are becoming increasingly frustrated by some artistic limitations that the digital medium imposes

  • We performed psychophysical experiments using color-graded professional cinema content shot in 4K, where the amount of retinal noise was selected by a motion picture specialist based solely on aesthetic preference

  • Given that the applications discussed in this paper are all based on the perceived appearance of images and videos, using a deep neural network (DNN) for these tasks would first require that the DNN represent aesthetic preference; while there are some recent works in this regard, e.g. [14], [15], they have been shown to be unsuitable for the professional media production scenarios [16], [17] targeted by the method introduced in this paper

Summary

INTRODUCTION

After the digital cinema revolution, many directors and cinematographers have become increasingly frustrated by some of the artistic limitations that the digital medium imposes. Results are validated through psychophysical experiments in which observers, including cinema professionals, prefer our method over film grain emulation alternatives from academia and industry. Another contribution of our work is to show that retinal noise emulation can be used to improve the quality of compressed video by masking compression artifacts. The extra data required at reception to introduce the retinal noise is negligible, as it consists only of the values of the user parameters (up to 5 floating point numbers per frame). This is completely novel: in the literature, as mentioned above, the grain is roughly estimated via a denoising process (which is an open problem), parametric models of film grain provide only coarse approximations, and those works have limited application because they are intended only for films that already have grain, whereas our approach can be used with any kind of content. Given that the applications discussed in this paper are all based on the perceived (aesthetic) appearance of images and videos, using a DNN for these tasks would first require that the DNN represent aesthetic preference; while there are some recent works in this regard, e.g. [14], [15], they have been shown to be unsuitable for the professional media production scenarios [16], [17] targeted by the method introduced in this paper.
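The summary above does not give the actual synthesis equations, so purely as an illustration of the decoder-side idea (a handful of transmitted floats drive noise synthesis on the received frame), here is a minimal sketch. The function name, the parameter names (`amplitude`, `grain_size`), and the box-filter correlation model are all hypothetical stand-ins, not the paper's retinal noise model:

```python
import numpy as np

def add_synthetic_grain(frame, amplitude=0.04, grain_size=1.5, seed=0):
    """Add luminance-only pseudo-grain to a decoded frame.

    frame: float array in [0, 1], shape (H, W) or (H, W, 3).
    amplitude, grain_size: stand-ins for the per-frame side
    information (a few floats) a codec could transmit.
    """
    rng = np.random.default_rng(seed)
    h, w = frame.shape[:2]
    noise = rng.standard_normal((h, w))
    # Crude spatial correlation: separable box-blur of white noise,
    # so a larger grain_size yields a coarser texture.
    k = max(1, int(round(grain_size)))
    kernel = np.ones(k) / k
    noise = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, noise)
    noise = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, noise)
    if frame.ndim == 3:
        noise = noise[..., None]  # same grain on all channels
    return np.clip(frame + amplitude * noise, 0.0, 1.0)
```

Because the noise is generated at the receiver from these few parameters, none of the high-frequency texture itself has to survive compression, which is what enables the bit-rate savings described above.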

SOME VISION FACTS AND MODELS
THE ALGORITHM
USER PARAMETERS
RETINAL NOISE EMULATION FOR IMPROVING COMPRESSED VIDEO QUALITY
TEST MATERIAL
Findings
CONCLUSIONS AND FUTURE WORK