Abstract

It has been suggested that numerosity is an elementary quality of perception, similar to colour. If so (and despite considerable investigation), its mechanism remains unknown. Here, we show that observers require on average a massive difference of approximately 40% to detect a change in the number of objects that vary irrelevantly in blur, contrast and spatial separation, and that some naive observers require even more than this. We suggest that relative numerosity is a type of texture discrimination and that a simple model computing the contrast energy at fine spatial scales in the image can perform at least as well as human observers. Like some human observers, this mechanism finds it harder to discriminate relative numerosity in two patterns with different degrees of blur, but it still outpaces the human. We propose energy discrimination as a benchmark model against which more complex models and new data can be tested.

Highlights

  • If the dots in figure 1 were fruits on a tree, there would be obvious advantages to a foraging animal in perceiving at a glance which tree had the most fruits

  • We suggest that debates and Gedankenexperimente on this issue are pointless in the absence of a computable model of relative numerosity discrimination against which data can be tested

  • We can conclude that whatever mechanism the real observer uses for relative numerosity, it performs no better and no worse than one that randomly selected 50% of the dots and counted them accurately. As this number in the present case is 32, we can decisively rule out the ‘subitizing’ explanation of relative numerosity accuracy. These experiments were not designed to rule out the existence of a mechanism for discrete numerosity discrimination, nor could any finite set of experiments prove a negative
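The partial-sampling benchmark in the last highlight can be made concrete with a small Monte Carlo simulation. The sketch below is our own illustration, not the authors' code: a hypothetical observer samples each dot independently with probability 0.5, counts the sampled dots exactly, and chooses the pattern with the larger count (guessing on ties). Function names and parameter values are assumptions for illustration.

```python
import random

def sampled_count(n, p=0.5, rng=random):
    """Count obtained when each of n dots is noticed with probability p."""
    return sum(rng.random() < p for _ in range(n))

def percent_correct(n_ref, n_test, trials=20_000, seed=1):
    """Proportion of trials in which the sampling observer correctly
    identifies the more numerous pattern (n_test > n_ref assumed)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        a = sampled_count(n_ref, rng=rng)
        b = sampled_count(n_test, rng=rng)
        if b > a:
            correct += 1
        elif b == a:
            correct += rng.random() < 0.5  # unbiased guess on ties
    return correct / trials

# With identical patterns the observer is at chance; a large numerosity
# difference is discriminated nearly perfectly.
print(percent_correct(64, 64))   # near 0.5
print(percent_correct(64, 96))   # well above chance
```

Sweeping `n_test` upward from 64 until `percent_correct` crosses a criterion (e.g. 75%) would give this hypothetical observer's Weber fraction, which can then be compared against the human value.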



Introduction

If the dots in figure 1 were fruits on a tree, there would be obvious advantages to a foraging animal in perceiving at a glance which tree had the most fruits. The amount of contour can be estimated from the combined output of ‘edge detectors’ that respond to local changes in luminance. To make these detectors sensitive to the difference between one object and two occupying the same area, and insensitive to their spacing, we want the detectors to be as small as possible. In physiological terms, this means using small ‘receptive fields’; in Fourier-optical terms, it means measuring the energy at high spatial frequencies. We measure the energy in our images at high spatial frequencies and use this as a proxy for numerosity. We expect this model to make mistakes if we vary object attributes such as their size, density and spatial-frequency content. Because numerosity just-noticeable differences (JNDs) tend to follow Weber’s law of proportionality, we expressed discrimination ability as the Weber fraction (JND × 100/64).
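The energy-based proxy described above can be sketched in a few lines. The following is a minimal illustration under our own assumptions (dot size, image size and frequency cutoff are arbitrary choices, and the functions are hypothetical names, not the authors' implementation): render a dot pattern, take its 2-D Fourier transform, and sum the squared amplitudes above a radial frequency cutoff. A pattern with more dots generally carries more high-spatial-frequency energy.

```python
import numpy as np

def dot_image(n_dots, size=128, radius=2, seed=0):
    """Render n_dots filled discs at random positions on a blank image."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for cx, cy in rng.integers(radius, size - radius, size=(n_dots, 2)):
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] = 1.0
    return img

def highpass_energy(img, cutoff=0.25):
    """Sum of squared Fourier amplitudes above `cutoff` cycles/pixel."""
    f = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return float(np.sum(np.abs(f[r > cutoff]) ** 2))

# The model judges the pattern with the larger high-frequency energy
# to be the more numerous one.
e_few = highpass_energy(dot_image(32, seed=0))
e_many = highpass_energy(dot_image(128, seed=1))
print(e_many > e_few)
```

Because the energy depends on dot size and blur as well as count, such a model should err in predictable ways when those attributes vary, which is the behaviour the text anticipates.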

