Understanding how early scene viewing is guided can reveal fundamental brain mechanisms for quickly making sense of our surroundings. Viewing is often initiated from the left side. Across two experiments, we focused on search initiation for lateralised targets within real-world scenes, investigating the role of the cerebral hemispheres in guiding the first saccade. We aimed to disentangle the hemispheric contribution from the effects of reading habits, and to distinguish between an overall dominance of the right hemisphere for visuospatial processing and finer hemispheric specialisation depending on the type of target template representation (from pictorial versus verbal cues), spatial scale (global versus local), and timescale (short versus longer). We replicated the tendency to initiate search leftward in both experiments. However, we found no evidence supporting a significant impact of left-to-right reading habits, whether as a purely motor or an attentional bias to the left. A general visuospatial dominance of the right hemisphere could not account for the results either. In Experiment 1, we found a greater probability of directing the first saccade toward targets in the left visual field, but only after a verbal target cue, with no lateral differences after a pictorial cue. This suggested a contribution of the right hemisphere's specialisation in perceptually simulating words' referents. Lengthening the inter-stimulus interval between the cue and the scene (from 100 to 900 ms) reduced first saccade gain in the left visual field, suggesting a decreased ability of the right hemisphere to use the target template to guide gaze close to the target object, a process that primarily depends on local information. Experiment 2, using visual versus auditory verbal cues, replicated and extended the findings for both first saccade direction and gain.
Overall, our study shows that the multidetermined functional specialisation of the cerebral hemispheres is a key driver of early scene search and must be incorporated into theories and models to advance understanding of the mechanisms that guide viewing behaviour.