Abstract

Material categorization from natural texture images proceeds quickly and accurately, supporting a range of visual and motor behaviors. In real-world settings, mechanisms for material categorization must function effectively on input from foveal vision, where image representation is high fidelity, and on input from peripheral vision, where it is comparatively impoverished. What features support successful material categorization in the visual periphery, given the known losses of acuity and contrast sensitivity and the other lossy transforms that degrade the fidelity of peripheral image representations? In general, the visual features that support material categorization remain largely unknown, but recent work suggests that observers’ abilities in a number of tasks that depend on peripheral vision can be accounted for by assuming that the visual system has access only to summary statistics (texture-like descriptors) of image structure. We therefore hypothesized that a model of peripheral vision based on the Portilla-Simoncelli texture synthesis algorithm might account for material categorization abilities in the visual periphery. Using natural texture images and synthetic images made from those stimuli, we compared performance across material categories to determine whether observers’ performance with natural inputs could be predicted by their performance with synthetic images that reflect the constraints of a texture code.
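
To make the idea of a summary-statistic (texture) code concrete, the sketch below computes a drastically simplified set of texture descriptors: marginal statistics of the image and of a few oriented filter responses, plus cross-correlations between those responses. This is a minimal illustration under stated assumptions, not the Portilla-Simoncelli algorithm itself; the function names (marginal_stats, oriented_responses, texture_summary) are hypothetical, and the oriented Gaussian-derivative filters stand in for the multi-scale steerable pyramid that the full model uses.

```python
import numpy as np
from scipy import ndimage

def marginal_stats(x):
    """Marginal statistics of an array: mean, variance, skew, kurtosis."""
    x = x.ravel().astype(float)
    mu, var = x.mean(), x.var()
    z = (x - mu) / np.sqrt(var + 1e-12)
    return np.array([mu, var, (z**3).mean(), (z**4).mean()])

def oriented_responses(img, n_orientations=4, sigma=2.0):
    """Crude stand-in for a steerable pyramid: directional derivatives
    of a Gaussian-smoothed image at several orientations."""
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # d/dy
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # d/dx
    thetas = np.pi * np.arange(n_orientations) / n_orientations
    return [np.cos(t) * gx + np.sin(t) * gy for t in thetas]

def texture_summary(img):
    """Concatenate marginal stats of the image and of each oriented band,
    plus pairwise correlations between bands (the 'summary statistics')."""
    bands = oriented_responses(img)
    feats = [marginal_stats(img)]
    feats += [marginal_stats(b) for b in bands]
    flat = np.stack([b.ravel() for b in bands])
    corr = np.corrcoef(flat)                # orientation cross-correlations
    iu = np.triu_indices_from(corr, k=1)    # keep upper triangle only
    feats.append(corr[iu])
    return np.concatenate(feats)

# Example: compare summaries of two random texture patches.
rng = np.random.default_rng(0)
a = ndimage.gaussian_filter(rng.standard_normal((128, 128)), 1.0)
b = ndimage.gaussian_filter(rng.standard_normal((128, 128)), 3.0)
print(np.linalg.norm(texture_summary(a) - texture_summary(b)))
```

Two images whose summaries match under a code of this kind are, by construction, indistinguishable to an observer who has access only to the statistics, which is the logic behind comparing performance on natural stimuli with performance on their synthesized counterparts.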

