Abstract
Textural segmentation plays an important role in the figure-ground discrimination process. Evidence from neuroscience and psychophysics suggests that the segregation of texture patterns composed of oriented line segments is strongly influenced by the orientation contrast between the patterns (Nothdurft, Vision Res. 31, 1073-1078, 1991). In contrast to models available in the literature, this paper presents a neural network architecture for textural segmentation that can adaptively delimit the boundaries of uniformly textured regions. Cells with adaptive receptive fields encode uniformly textured regions by diffusively interpolating estimates of feature orientation across the image. Orientation contrast boundaries are then detected at the gradients between the interpolated regions to produce the final segmentation. Consistent with neurophysiological data from simple and complex cells sensitive to static and moving textural patterns (Hammond and MacKay, Exp. Brain Res. 30, 275-296, 1977), the present model suggests that preattentive textural segregation can be performed by early visual processes localized to areas V1, V2, and perhaps V3 or V5. The computational results further support the idea that textural segmentation should not be thought of as a 'static' process, but rather as one performed by a system employing cells with context-dependent response characteristics (Gilbert and Wiesel, Vision Res. 30, 1689-1701, 1990), modeled here as adaptive receptive fields.
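To make the two computational stages described above concrete, the following is a minimal sketch and not the authors' implementation: sparse local orientation estimates are diffusively interpolated across the image (here with a simple discrete heat-equation update on a double-angle vector representation, so that 0° and 180° are treated as the same orientation), and orientation contrast boundaries are then read off where the gradient of the interpolated field is large. The function name, parameter values, and the NumPy-based discretization are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def segment_by_orientation_contrast(theta, mask, n_iter=200, rate=0.2, thresh=0.5):
    """Illustrative sketch (not the paper's model).

    theta : 2-D array of local feature orientations in radians, in [0, pi)
    mask  : boolean array, True where an orientation estimate exists
    Returns the interpolated orientation field and a boolean boundary map.
    """
    # Double-angle (axial) representation: an orientation theta becomes the
    # unit vector (cos 2*theta, sin 2*theta), which diffuses without the
    # 0/180-degree wrap-around problem.
    u = np.where(mask, np.cos(2 * theta), 0.0)
    v = np.where(mask, np.sin(2 * theta), 0.0)

    for _ in range(n_iter):
        for f in (u, v):
            # 4-neighbour Laplacian implements the diffusive interpolation
            # (np.roll gives periodic image borders; fine for a sketch).
            lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                   + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
            f += rate * lap
        # Clamp measured locations back to their original estimates so that
        # diffusion only fills in the gaps between them.
        u[mask] = np.cos(2 * theta[mask])
        v[mask] = np.sin(2 * theta[mask])

    # Recover the interpolated orientation field in [0, pi).
    theta_smooth = 0.5 * np.arctan2(v, u) % np.pi

    # Orientation contrast: gradient magnitude of the doubled-angle field,
    # which is large only where neighbouring regions differ in orientation.
    gy_u, gx_u = np.gradient(u)
    gy_v, gx_v = np.gradient(v)
    contrast = np.sqrt(gx_u**2 + gy_u**2 + gx_v**2 + gy_v**2)
    boundaries = contrast > thresh
    return theta_smooth, boundaries


if __name__ == "__main__":
    # Toy texture: left half has ~20-degree line segments, right half ~110 degrees,
    # with orientation estimates available at only ~15% of the pixels.
    rng = np.random.default_rng(0)
    theta = np.zeros((64, 64))
    theta[:, :32] = np.deg2rad(20)
    theta[:, 32:] = np.deg2rad(110)
    mask = rng.random((64, 64)) < 0.15
    theta_smooth, boundaries = segment_by_orientation_contrast(theta, mask)
    print("boundary pixels found:", int(boundaries.sum()))
```

In this toy example the thresholded gradient responds along the vertical seam between the two differently oriented texture fields, which is the behaviour the abstract attributes to orientation contrast boundaries; the diffusion step stands in, very loosely, for the adaptive receptive fields that spread orientation estimates over uniformly textured regions.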