Abstract

Texture synthesis models have become a popular tool for studying the representations supporting texture processing in human vision. In particular, the summary statistics implemented in the Portilla–Simoncelli (P–S) model support high-quality synthesis of natural textures, account for performance in crowding and search tasks, and may account for the response properties of V2 neurons. We investigated whether these summary statistics are also sufficient to support texture discrimination in a task that required illumination invariance. Our observers performed a match-to-sample task using natural textures photographed with either diffuse overhead lighting or lighting from the side. Following a briefly presented sample texture, participants identified which of two test images depicted the same texture. In the illumination change condition, illumination differed between the sample and the matching test image; in the no change condition, sample textures and matching test images were identical. Critically, we also generated synthetic versions of these images using the P–S model and tested participants with these as well. If the statistics in the P–S model are sufficient for invariant texture perception, performance with synthetic images should not differ from performance in the original task. Instead, we found a significant cost of applying texture synthesis in both lighting conditions. We also observed this effect when power spectra were matched across images (Experiment 2) and when sample and test images were drawn from unique locations in the parent textures to minimize the contribution of image-based processing (Experiment 3). Invariant texture processing thus depends upon measurements not implemented in the P–S algorithm.
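
For readers unfamiliar with the manipulation in Experiment 2, the sketch below illustrates one standard way to match power spectra across a set of grayscale images: replace each image's Fourier amplitude spectrum with the set-average amplitude spectrum while preserving its phase. This is an illustrative reconstruction, not the authors' code; the function name and the use of the set average are assumptions.

    # Minimal power-spectrum-matching sketch (assumed approach, not the
    # authors' code). Each image keeps its own phase spectrum but receives
    # the amplitude spectrum averaged over the whole image set, so all
    # outputs share an identical power spectrum.
    import numpy as np

    def match_power_spectra(images):
        spectra = [np.fft.fft2(img) for img in images]
        # Average amplitude (square root of power) across the set.
        mean_amplitude = np.mean([np.abs(s) for s in spectra], axis=0)
        matched = []
        for s in spectra:
            # Recombine the shared amplitude with this image's phase.
            hybrid = mean_amplitude * np.exp(1j * np.angle(s))
            matched.append(np.real(np.fft.ifft2(hybrid)))
        return matched

    # Example: two arbitrary images end up with identical power spectra.
    rng = np.random.default_rng(0)
    a, b = rng.random((128, 128)), rng.random((128, 128))
    a_m, b_m = match_power_spectra([a, b])
    assert np.allclose(np.abs(np.fft.fft2(a_m)), np.abs(np.fft.fft2(b_m)))

Because the amplitude and phase spectra of a real-valued image are Hermitian-symmetric, the recombined spectrum inverts to a (numerically) real image, so taking the real part discards only floating-point residue.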

Highlights

  • Natural visual stimuli can vary substantially in appearance as a function of illumination conditions, the observer's distance to the stimulus, viewpoint or pose relative to the observer, and planar rotation

  • The fact that we observed a significant cost of synthetic appearance when illumination invariance was required, and a larger cost than when it was not, suggests that this set of texture descriptors lacks information that is useful for matching textures across a lighting change

  • The cost was smaller when illumination invariance was not required, suggesting that our synthetic textures were of sufficiently high quality to support texture matching under some conditions

Introduction

Natural visual stimuli can vary substantially in appearance as a function of illumination conditions, the observer's distance to the stimulus, viewpoint or pose relative to the observer, and planar rotation. Observers are typically able to cope with appearance variation reasonably well, achieving useful (if limited) levels of perceptual constancy with complex stimuli like familiar faces (Burton et al., 1999), real and nonce objects (Bülthoff & Edelman, 1992), and scenes (Xiao et al., 2010). Variation in viewpoint also appears to affect texture judgments, such as perceived roughness (Ho, Maloney, & Landy, 2007), which may suggest that whatever degree of perceptual constancy the visual system achieves for textures is constrained by some set of features (what Ho et al. refer to as pseudocues) that do not provide perfect information for invariant recognition.
