Abstract

The visual cortex is able to extract disparity information through the use of binocular cells. This process is reflected by the Disparity Energy Model, which describes the role and functioning of simple and complex binocular neuron populations, and how they extract disparity. This model uses explicit cell parameters, such as spatial frequencies, orientations, binocular phases and receptive-field positions, to mathematically determine preferred cell disparities. However, the brain cannot access such explicit cell parameters; it must rely on cell responses. In this article, we implemented a trained binocular neuronal population which encodes disparity information implicitly, allowing the population to learn how to decode disparities, similar to how our visual system could have developed this ability during evolution. At the same time, responses of monocular simple and complex cells can also encode line and edge information, which is useful for refining disparities at object borders. The brain should then be able, starting from a low-level disparity draft, to integrate all information, including colour and viewpoint perspective, in order to propagate better estimates to higher cortical areas.
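To make the model in the abstract more concrete, the sketch below computes the response of one binocular complex cell in the classic disparity-energy formulation: a quadrature pair of Gabor receptive fields per eye, a phase shift between the eyes setting the preferred disparity, and squared-and-summed simple-cell responses. It is a minimal 1-D illustration under assumed parameter values (sigma, frequency, the NumPy setup), not the implementation used in the article.

```python
import numpy as np

def gabor_1d(x, sigma, freq, phase):
    """1-D Gabor receptive-field profile."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def binocular_energy(left, right, sigma=4.0, freq=0.1, dphi=0.0):
    """Complex-cell energy for one preferred (phase) disparity dphi.

    left, right: 1-D luminance profiles centred on the receptive field.
    """
    x = np.arange(len(left)) - len(left) // 2
    energy = 0.0
    for base_phase in (0.0, np.pi / 2):            # quadrature pair: even and odd
        rf_left = gabor_1d(x, sigma, freq, base_phase)
        rf_right = gabor_1d(x, sigma, freq, base_phase + dphi)
        s = rf_left @ left + rf_right @ right      # binocular simple-cell response
        energy += s**2                             # squared and summed -> complex cell
    return energy

# Toy usage: a random pattern shifted by 3 pixels between the eyes should give
# the largest energy for the cell whose phase disparity matches that shift.
rng = np.random.default_rng(0)
pattern = rng.standard_normal(65)
left, right = pattern, np.roll(pattern, 3)
energies = {d: binocular_energy(left, right, dphi=-2 * np.pi * 0.1 * d)
            for d in range(-5, 6)}
print(max(energies, key=energies.get))             # expected to peak near d = 3
```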

Highlights

  • Disparity plays an important role in our perception of the environment, giving us precious information for survival

  • Results for this method were first published in Martins et al. [7], where we tested the Luminance Disparity Energy Model (L-DEM) on various reference stereograms from the Middlebury stereo evaluation set

  • This was expected, because both L-DEM and LCV-DEM struggle at border transitions; this is why the Line and Edge Disparity Model (LEDM) is used to refine the LCV-DEM. Although the results improve without yet being outstanding, the error in regions near depth discontinuities decreases by more than a factor of two in the Venus case (a sketch of this kind of evaluation follows this list)
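The factor-of-two claim above refers to an error measure computed only in regions near depth discontinuities. The sketch below shows one plausible way to obtain such a Middlebury-style bad-pixel rate; the discontinuity mask, the dilation radius and the 1-disparity error threshold are assumptions for illustration, not the evaluation protocol actually used.

```python
import numpy as np

def bad_pixel_rate(estimated, ground_truth, mask=None, threshold=1.0):
    """Fraction of pixels whose absolute disparity error exceeds `threshold`."""
    bad = np.abs(estimated - ground_truth) > threshold
    return bad[mask].mean() if mask is not None else bad.mean()

def discontinuity_mask(ground_truth, jump=2.0, radius=2):
    """Pixels within `radius` (4-neighbourhood) of a disparity jump larger than `jump`."""
    gy, gx = np.gradient(ground_truth.astype(float))
    mask = np.hypot(gx, gy) > jump
    for _ in range(radius):                        # crude binary dilation, no SciPy needed
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        mask = grown
    return mask

# Hypothetical usage (the loaders are placeholders, not real functions):
#   gt   = load_ground_truth("venus")            # ground-truth disparity map
#   est  = run_lcvb_dem("venus")                 # model output
#   near = discontinuity_mask(gt)
#   print(bad_pixel_rate(est, gt), bad_pixel_rate(est, gt, mask=near))
```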

Introduction

Disparity plays an important role in our perception of the environment, giving us precious information for survival. Examples of previous approaches are [9], combining geometric information and local edge features, [10], using multiscale lines and edges to retrieve a disparity wireframe model of the scene (the Line and Edge Disparity Model, LEDM, which is further explored in this paper in §5.1), and du Buf et al. [11], employing the phase differences of simple cell responses to the left and right views. Although the latter model is often applied to real-world problems, it has been shown to be very imprecise in terms of the localisation of depth transitions.

This second population implements a template-matching process similar to those of [16] and Read [6]. This initial DEM model (the disparity gist) is integrated with colour and different viewpoints (§4), and with object-border information retrieved from the multi-scale line and edge disparity model (LEDM) [10] and low-level processes from object salience research [17] (§5).

Our main contributions in this paper are: (a) improving previous DEM results on real-world images; (b) the integration of the DEM with luminance, colour information and viewpoint perspective correction; (c) the integration of two disparity models, DEM and LEDM, to improve the object-boundary precision of the DEM; (d) the integration of different layers of disparity cell maps, with each layer improving on the results of the previous one; (e) the quantitative evaluation of results with real-world scenes, showing that the model can compete with state-of-the-art computer vision algorithms.
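As one way to picture the template-matching read-out mentioned above, the sketch below decodes disparity by comparing the response vector of a disparity-encoding population against stored response templates and returning the disparity of the best match. The normalised-correlation similarity, the synthetic Gaussian tuning curves and all parameter values are illustrative assumptions, not the trained population described in this paper.

```python
import numpy as np

def decode_disparity(responses, templates, disparities):
    """Return the disparity whose stored template best matches `responses`.

    responses:   population response vector at one image position, shape (n_cells,).
    templates:   one row per training disparity, shape (n_disparities, n_cells).
    disparities: disparity value associated with each template row.
    """
    r = responses / (np.linalg.norm(responses) + 1e-12)
    t = templates / (np.linalg.norm(templates, axis=1, keepdims=True) + 1e-12)
    return disparities[np.argmax(t @ r)]           # best normalised correlation

# Toy usage with synthetic data: templates are Gaussian tuning curves, and a
# noisy test response generated for disparity 2 decodes back to 2.
rng = np.random.default_rng(1)
disparities = np.arange(-5, 6)
cells_pref = np.arange(-5, 6)                      # preferred disparity of each cell
templates = np.exp(-0.5 * (disparities[:, None] - cells_pref[None, :])**2)
test = np.exp(-0.5 * (2 - cells_pref)**2) + 0.05 * rng.standard_normal(cells_pref.size)
print(decode_disparity(test, templates, disparities))   # expected output: 2
```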

Disparity-sensitive cells
Luminance Disparity-Energy Model
Disparity encoding population
Disparity decoding population
Experimental results
Boundary enhanced LCVB-DEM
Line and Edge Disparity Model
Line and Edge region enhancement
Object Boundary enhancement
LCVB-DEM Experimental Results
Results
Discussion