Abstract
Recent computational models of cognition have made good progress in accounting for the visual processes needed to encode external stimuli. However, these models typically incorporate simplified models of visual processing that assume a constant encoding time for all visual objects and do not distinguish between eye movements and shifts of attention. This paper presents a domain-independent computational model, EMMA, that provides a more rigorous account of eye movements and visual encoding and their interaction with a cognitive processor. The visual-encoding component of the model describes the effects of frequency and foveal eccentricity when encoding visual objects as internal representations. The eye-movement component describes the temporal and spatial characteristics of eye movements as they arise from shifts of visual attention. When integrated with a cognitive model, EMMA generates quantitative predictions concerning when and where the eyes move, thus serving to relate higher-level cognitive processes and attention shifts with lower-level eye-movement behavior. The paper evaluates EMMA in three illustrative domains — equation solving, reading, and visual search — and demonstrates how the model accounts for aspects of behavior that simpler models of cognitive and visual processing fail to explain.