Abstract

Almost all existing 3D visual discomfort prediction models are based, at least in part, on features extracted from computed disparity maps. These include such estimated quantities as the maximum disparity, the disparity range, the disparity energy, and other measures of the disparity distribution. A common first step when implementing a 3D visual discomfort model is some form of disparity calculation, so the accuracy of prediction depends largely on the accuracy of the disparity result. Unfortunately, most algorithms that compute disparity maps are computationally expensive and are not guaranteed to deliver sufficiently accurate or perceptually relevant disparity data. This raises the question of whether it is possible to build a 3D discomfort prediction model without explicit disparity calculation. Toward this end, we have developed a new feature map, called the percentage of un-linked pixels (PUP), that is descriptive of the presence of disparity and that can be used to accurately predict experienced 3D visual discomfort without actually calculating disparity values. Instead, PUP features are extracted by predicting the percentage of un-linked pixels in corresponding retinal patches of stereoscopic image pairs. The un-linked pixels are determined by feature classification on orientation and luminance distributions. Computing PUP maps is much faster than traditional disparity computation, and experimental results demonstrate that the predictive power attained using the PUP map is highly competitive with that of prior models that rely on computed disparity maps.
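To make the idea concrete, the following is a minimal, hypothetical sketch of how a PUP-like map might be computed: co-located patches of the left and right views are compared, each pixel is classified as "un-linked" when its local luminance or orientation statistics differ between the two views, and the per-patch fraction of such pixels is recorded. The patch size, the thresholds, and the gradient-based orientation proxy are all illustrative assumptions, not the paper's actual classification scheme.

```python
import numpy as np

def pup_map(left, right, patch=8, tau=0.2):
    """Toy PUP-style feature map (illustrative only).

    left, right : 2D float arrays (luminance images of a stereo pair)
    patch       : side length of the square patches (assumed value)
    tau         : mismatch threshold for luminance/orientation (assumed value)
    Returns a (H//patch, W//patch) map of per-patch un-linked pixel fractions.
    """
    H, W = left.shape
    pup = np.zeros((H // patch, W // patch))
    # Simple orientation proxy: gradient direction from finite differences.
    gy_l, gx_l = np.gradient(left)
    gy_r, gx_r = np.gradient(right)
    ori_l = np.arctan2(gy_l, gx_l)
    ori_r = np.arctan2(gy_r, gx_r)
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            sl = (slice(i, i + patch), slice(j, j + patch))
            # A pixel counts as "un-linked" when either its luminance or
            # its orientation differs between views by more than tau.
            lum_diff = np.abs(left[sl] - right[sl]) > tau
            ang = ori_l[sl] - ori_r[sl]
            ori_diff = np.abs(np.angle(np.exp(1j * ang))) > tau * np.pi
            pup[i // patch, j // patch] = (lum_diff | ori_diff).mean()
    return pup
```

With identical views the map is all zeros (no un-linked pixels, hence no disparity signal), while a horizontal shift between the views, which mimics disparity, raises the un-linked fractions. A single pass over patch-wise local statistics like this avoids the correspondence search that makes full disparity estimation expensive.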
