Motion parallax allows 7-8-month-old infants to distinguish pictures from their referents

Abstract

Earlier research has shown that seven-month-old infants prefer to look at real objects over pictures of those objects. Which visual cues determine that preference? Motivated by research on adult observers highlighting the significance of motion parallax over other depth cues in contributing to a sense of presence and place, we tested the hypothesis that motion parallax alone is sufficient to cause preferential looking to real objects in infants. We presented pairs of displays of toys in different formats: (a) the real three-dimensional toy; (b) a realistic image of that toy presented on screen; (c) the same image, but with added depth from motion parallax. Infants preferred (a) over (b) (57% vs. 43%, p < .01) and (c) over (b) (52% vs. 48%, p < .05), but showed no significant preference between (a) and (c) (51% vs. 49%, n.s.). This supports the hypothesis that motion parallax alone can induce a looking preference comparable to that observed for real objects.

Similar Papers
  • Conference Article
  • Cited by 3
  • 10.1109/vr.2019.8798140
A Method to Introduce & Evaluate Motion Parallax with Stereo for Medical AR/MR
  • Mar 1, 2019
  • Megha Kalia + 3 more

Incorrect depth perception and the lack of good evaluation systems are major barriers to clinical translation of augmented and mixed reality (AR/MR). Thus, a systematic study of depth cues is necessary. In the current paper we present a method to introduce the quantitative depth cue motion parallax (MP) in surgical scenes and study its effect on depth perception when combined with binocular disparity. In addition, we present an innovative virtual-tool method to evaluate depth. To introduce MP, we reconstructed the tissue surface using a structure-from-motion technique; stereo triangulation was then used to obtain an accurate absolute scale for the reconstructed surface. A simulated tumor was rendered beneath the reconstructed point cloud by rendering a hole for 'X-ray'-like vision. MP was introduced by rotating the entire scene from side to side with a tumor-surface point as pivot for maximum impact. Finally, for evaluation, we used a virtual surgical tool rendered using real-time forward-kinematics data from the da Vinci surgical API. In total, 12 subjects participated in a within-subjects experiment design to study four cases: Stereo + MP (S+MP), Mono + MP (M+MP), Stereo + No MP (S+N-MP), and Mono + No MP (M+N-MP). Subjects significantly overestimated judged percentage of true distance in M+MP when compared with the M+N-MP (p = 0.000, N = 120) and S+N-MP (p = 0.001, N = 120) cases. Furthermore, the observed variable error was smaller in the S+MP and S+N-MP cases than in the M+MP and M+N-MP cases. The use of motion parallax in console interfaces for surgical robotics thus showed overestimation of judged distance. To our knowledge, however, this is the first work studying the effect of motion parallax and stereo in the surgical context; its further study is therefore warranted.
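The side-to-side scene rotation about a surface-point pivot described above can be sketched as a rotation of the reconstructed 3D points about a vertical axis through the pivot. This is a minimal illustration with hypothetical names, not the authors' implementation:

```python
import math

def rotate_about_pivot(points, pivot, angle_deg):
    """Rotate 3D points about a vertical (y) axis through `pivot`.

    A minimal sketch of the side-to-side scene rotation used to
    induce motion parallax; names and conventions are illustrative.
    """
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    out = []
    for (x, y, z) in points:
        # Offset from the pivot in the horizontal (xz) plane.
        dx, dz = x - pivot[0], z - pivot[2]
        out.append((pivot[0] + c * dx + s * dz,
                    y,
                    pivot[2] - s * dx + c * dz))
    return out

# A point at the pivot stays fixed; points farther from the pivot sweep
# larger arcs, which is what produces the parallax signal.
```

Rotating about a point on the tumor surface keeps the region of interest visually stable while the surrounding surface moves, maximizing the relative-motion (parallax) cue at the depth that matters.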

  • Research Article
  • Cited by 1
  • 10.3169/itej1978.46.1179
The Effect of Display Delay on Motion Vision and a High-Speed Image Generation and Display Method
  • Jan 1, 1992
  • The Journal of the Institute of Television Engineers of Japan
  • Takanori Sato + 2 more

The goal of our research is realistic teleconferencing. Generating images with motion parallax is a vital technology for improving human communication. To generate realistic images with computer graphics (CG), the number of vertices of an object should be increased. However, to generate images with motion parallax, the images must be generated quickly. Thus, there is a tradeoff between generating highly realistic images and generating them quickly. In this paper, we evaluate the effect of the delay according to two parameters: the response time for generating images in response to head movement and the frequency of generating them. We also present a method to generate realistic CG images at high speed. The method selects an image from a database of detailed models, each prepared for a certain distance between the viewpoint and the object.

  • Research Article
  • Cited by 2
  • 10.1242/jeb.236547
Motion parallax via head movements modulates visuo-motor control in pigeons.
  • Feb 1, 2021
  • Journal of Experimental Biology
  • Yuya Hataji + 2 more

Although it has been proposed that birds acquire visual depth cues through dynamic head movements, behavioral evidence on how birds use motion parallax depth cues caused by self-motion is lacking. This study investigated whether self-generated motion parallax modulates pecking motor control and visual size perception in pigeons (Columba livia). We trained pigeons to peck a target on a touch monitor and to classify it as small or large. To manipulate the motion parallax of the target, we changed the target position on the monitor according to the bird's head position in real time using a custom-built head tracker with two cameras. Pecking motor control was affected by the manipulation of motion parallax: when the motion parallax signified a target position farther than the monitor surface, the head position just before pecking the target was near the monitor surface, and vice versa. By contrast, motion parallax did not affect how the pigeons classified target sizes, implying that motion parallax might not contribute to size constancy in pigeons. These results indicate that motion parallax via head movements modulates pecking motor control in pigeons, suggesting that head movements of pigeons serve the visual function of accessing motion parallax depth cues.

  • Research Article
  • Cited by 72
  • 10.1037/0096-1523.21.3.679
Comparing depth from motion with depth from binocular disparity.
  • Jan 1, 1995
  • Journal of Experimental Psychology: Human Perception and Performance
  • Frank H Durgin + 3 more

The accuracy of depth judgments that are based on binocular disparity or structure from motion (motion parallax and object rotation) was studied in 3 experiments. In Experiment 1, depth judgments were recorded for computer simulations of cones specified by binocular disparity, motion parallax, or stereokinesis. In Experiment 2, judgments were recorded for real cones in a structured environment, with depth information from binocular disparity, motion parallax, or object rotation about the y-axis. In both of these experiments, judgments from binocular disparity information were quite accurate, but judgments on the basis of geometrically equivalent or more robust motion information reflected poor recovery of quantitative depth information. A 3rd experiment demonstrated stereoscopic depth constancy for distances of 1 to 3 m using real objects in a well-illuminated, structured viewing environment in which monocular depth cues (e.g., shading) were minimized.

  • Conference Article
  • Cited by 2
  • 10.1145/2814940.2814954
Spatial Communication and Recognition in Human-agent Interaction using Motion-parallax-based 3DCG Virtual Agent
  • Oct 21, 2015
  • Naoto Yoshida + 1 more

In this paper, we propose spatial communication between a virtual agent and a user through a common space spanning both the virtual world and real space. For this purpose, we propose the virtual agent system SCoViA, which renders a synchronized synthesis of the agent's appearance corresponding to the user's position relative to the monitor, synchronized with the user's motion parallax, in order to realize human-agent communication in the real world. In this system, a real-time three-dimensional computer-generated (3DCG) agent is drawn from the changing viewpoint of the user in virtual space, corresponding to the position of the user's head as detected by face tracking. We conducted two verifications and discuss the spatial communication between a virtual agent and a user. First, we verified the effect of synchronized redrawing of the virtual agent on the accurate recognition of a particular object in the real world. Next, we verified the approachability of the agent through its reaction to the user's eye contact from a diagonal angle. The results of the evaluations showed that the virtual agent's eye contact affected approachability regardless of the user's viewpoint, and that our proposed system using motion parallax could significantly improve the accuracy of the agent's gazing position with respect to each real object. Finally, we discuss the possibility of real-world human-agent interaction using the positional relationships among the agent, real objects, and the user.

  • Research Article
  • 10.1167/jov.26.1.11
Cue combination for depth perception in macular degeneration: Motion parallax augments disparity.
  • Jan 5, 2026
  • Journal of vision
  • Jade Guénot + 1 more

In macular degeneration (MD), depth perception from binocular disparity is impacted in regions with vision loss in either eye, but monocular cues like motion parallax remain available. This study investigates whether combining motion parallax with disparity improves depth perception and compensates for the loss of depth due to central field loss (CFL). Eleven MD participants and 19 controls viewed a horizontal sine-wave corrugation in depth, defined by disparity and/or motion parallax, judging which half-cycle appeared farther away in depth. We measured thresholds for each cue alone and for the two cues combined. In MD participants, cue integration benefits depended on scotoma characteristics. Disparity performance correlated strongly with the size of the stereoblind zone, while motion parallax thresholds showed no significant relation, suggesting preservation despite CFL. MD participants with extensive stereoblind zones showed elevated thresholds for both single cues compared to controls but demonstrated optimal integration when disparity was added to motion parallax. Those with small stereoblind zones achieved control-like thresholds and exhibited optimal or better than predicted integration. However, asymmetric patterns emerged with suboptimal performance when motion parallax was added to threshold disparity. Controls with simulated scotomas maintained stable integration, contrasting with variable patterns in MD. Our results show that individuals with CFL retain significant capacity for depth cue integration, contingent upon residual binocular disparity. Thus, motion parallax emerges as a valuable compensatory cue to improve depth perception in individuals with MD.

  • Research Article
  • Cited by 38
  • 10.1523/jneurosci.0251-13.2013
Joint Representation of Depth from Motion Parallax and Binocular Disparity Cues in Macaque Area MT
  • Aug 28, 2013
  • Journal of Neuroscience
  • J W Nadler + 5 more

Perception of depth is based on a variety of cues, with binocular disparity and motion parallax generally providing more precise depth information than pictorial cues. Much is known about how neurons in visual cortex represent depth from binocular disparity or motion parallax, but little is known about the joint neural representation of these depth cues. We recently described neurons in the middle temporal (MT) area that signal depth sign (near vs far) from motion parallax; here, we examine whether and how these neurons also signal depth from binocular disparity. We find that most MT neurons in rhesus monkeys (Macaca mulatta) are selective for depth sign based on both disparity and motion parallax cues. However, the depth-sign preferences (near or far) are not always aligned: 56% of MT neurons have matched depth-sign preferences ("congruent" cells) whereas the remaining 44% of neurons prefer near depth from motion parallax and far depth from disparity, or vice versa ("opposite" cells). For congruent cells, depth-sign selectivity increases when disparity cues are added to motion parallax, but this enhancement does not occur for opposite cells. This suggests that congruent cells might contribute to perceptual integration of depth cues. We also found that neurons are clustered in MT according to their depth tuning based on motion parallax, similar to the known clustering of MT neurons for binocular disparity. Together, these findings suggest that area MT is involved in constructing a representation of 3D scene structure that takes advantage of multiple depth cues available to mobile observers.

  • Research Article
  • Cited by 113
  • 10.1038/nature06814
A neural representation of depth from motion parallax in macaque visual cortex.
  • Mar 16, 2008
  • Nature
  • Jacob W Nadler + 2 more

Perception of depth is a fundamental challenge for the visual system, particularly for observers moving through their environment. The brain makes use of multiple visual cues to reconstruct the three-dimensional structure of a scene. One potent cue, motion parallax, frequently arises during translation of the observer because the images of objects at different distances move across the retina with different velocities. Human psychophysical studies have demonstrated that motion parallax can be a powerful depth cue, and motion parallax seems to be heavily exploited by animal species that lack highly developed binocular vision. However, little is known about the neural mechanisms that underlie this capacity. Here we show, by using a virtual-reality system to translate macaque monkeys (Macaca mulatta) while they viewed motion parallax displays that simulated objects at different depths, that many neurons in the middle temporal area (area MT) signal the sign of depth (near versus far) from motion parallax in the absence of other depth cues. To achieve this, neurons must combine visual motion with extra-retinal (non-visual) signals related to the animal's movement. Our findings suggest a new neural substrate for depth perception and demonstrate a robust interaction of visual and non-visual cues in area MT. Combined with previous studies that implicate area MT in depth perception based on binocular disparities, our results suggest that area MT contains a more general representation of three-dimensional space that makes use of multiple cues.
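The geometry behind this cue can be summarized with the standard small-angle approximation (an illustrative sketch, not taken from the paper): during lateral translation with fixation maintained, a point's retinal angular velocity is roughly proportional to the difference between its inverse distance and the inverse fixation distance, so near and far points move in opposite directions.

```python
def parallax_angular_velocity(v, d, f):
    """Approximate retinal angular velocity (rad/s) of a point at
    distance d (m) while the observer translates laterally at speed
    v (m/s) and maintains fixation at distance f (m).

    Small-angle approximation: omega ~= v * (1/d - 1/f).
    Positive for near points (d < f), negative for far ones; the sign
    is the 'depth sign' (near vs far) carried by motion parallax.
    """
    return v * (1.0 / d - 1.0 / f)

near = parallax_angular_velocity(v=0.1, d=0.5, f=1.0)  # nearer than fixation
far = parallax_angular_velocity(v=0.1, d=2.0, f=1.0)   # farther than fixation
# near and far have opposite signs: opposite retinal motion directions.
```

Because image speed alone is ambiguous, recovering the depth sign requires combining retinal motion with an extra-retinal estimate of the observer's own translation, which is the interaction the study reports in area MT.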

  • Research Article
  • Cited by 23
  • 10.1016/s0042-6989(02)00117-7
Behavioral assessment of motion parallax and stereopsis as depth cues in rhesus monkeys
  • Jun 13, 2002
  • Vision Research
  • An Cao + 1 more


  • Dissertation
  • Cited by 1
  • 10.15126/thesis.00002181
The use of enhanced depth information in telepresence
  • Jan 1, 2003
  • Neil Stringer

This thesis explored the potential performance benefits of enhancing depth cues in telepresence interfaces. A series of experiments addressed the role of binocular disparity and motion parallax in teleoperators’ performance. Experiments 1 and 2 demonstrated that the effects of enhancing binocular disparity and motion parallax depend upon the information demands of a given task. Enhanced depth cues helped observers to make more precise judgements in simple tasks that rely on judgement of depth differences or relative distance (alignment and depth matching tasks). However, systematic biases in performance were identified in metric tasks that rely on recovery of Euclidean geometry (shape judgements). Experiments 3 and 4 showed that teleoperators can quickly train to use enhanced depth information to perform metric tasks accurately, thus extending the range of tasks over which enhanced information can be used. Experiment 5 examined whether observers acquire transferable information about depth when learning to make depth judgements using binocular disparity or motion parallax. Participants showed no transfer of learning when training with altered binocular disparities and testing using motion parallax, or vice versa, suggesting that the learning demonstrated in Experiments 3 and 4 is cue-specific. Experiment 6 examined the use of depth cues for performing a task more typical of those performed under telepresence. The benefits of binocular and motion parallax cues, used in isolation or simultaneously, and the effects of enhanced motion parallax, were examined in a simulated “telesurgery” task, where other useful cues such as familiar size and perspective were already available. Observers’ performance vastly improved when binocular disparity was added as a cue; motion parallax, however, failed to improve performance, even when observers were encouraged to use it as a cue. 
These findings strongly suggest that telepresence performance may benefit from enhancing the information relevant to the specific task the system is intended for; contrary to the traditional approach in the design of telepresence, exact replication of the remote environment may not be crucial.

  • Research Article
  • Cited by 70
  • 10.1016/j.neuron.2009.07.029
MT Neurons Combine Visual Motion with a Smooth Eye Movement Signal to Code Depth-Sign from Motion Parallax
  • Aug 1, 2009
  • Neuron
  • Jacob W Nadler + 3 more


  • Research Article
  • Cited by 19
  • 10.1016/s0141-9382(01)00067-1
Can movement parallax compensate lacking stereopsis in spatial explorative search tasks?
  • Nov 1, 2001
  • Displays
  • Urs Naepflin + 1 more


  • Conference Article
  • Cited by 45
  • 10.1117/12.207547
Absolute motion parallax weakly determines visual scale in real and virtual environments
  • Apr 20, 1995
  • Andrew C Beall + 3 more

The determinants of visual scale (size and distance) under monocular viewing are still largely unknown. The problem becomes readily apparent when one moves about within a virtual environment. It might be thought that the absolute motion parallax of stationary objects (in both real and virtual environments), under the assumption of stationarity, would immediately determine their apparent size and distance for an observer who is walking about. We sought to assess the effectiveness of observer-produced motion parallax in scaling apparent size and distance within near space. We had subjects judge the apparent size and distance of real and virtual objects under closely matched conditions. Real and virtual targets were four spheres seen in darkness at eye level. The targets ranged in diameter from 3.7 cm to 14.8 cm and were viewed monocularly from different distances, with a subset of the size/distance combinations resulting in projectively equivalent stimuli at the viewing origin. Subjects moved laterally ±1 m to produce large amounts of motion parallax. When angular size was held constant and motion parallax acted as a differential cue to target size and distance, judged size varied by factors of 1.67 and 1.18 for the real and virtual environments, respectively, well short of the fourfold change in distal size. Similarly, distance judgments varied by factors of only 1.74 and 1.07, respectively. We conclude that absolute motion parallax only weakly determines the visual scale of nearby objects varying over a fourfold range in size.
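The projective equivalence the study exploits follows from the size-distance relation s = 2 d tan(θ/2): objects of different physical size subtend the same visual angle when distance is scaled in proportion. A minimal sketch (function name and values are illustrative, not from the paper):

```python
import math

def physical_size(angular_size_deg, distance):
    """Distal size of an object subtending a given visual angle at a
    given distance: s = 2 * d * tan(theta / 2).

    Illustrates how size/distance pairs can be projectively
    equivalent (identical retinal angle) while distal size varies,
    e.g. over the study's fourfold range.
    """
    half_angle = math.radians(angular_size_deg) / 2.0
    return 2.0 * distance * math.tan(half_angle)

# Same visual angle at distances in a 1:4 ratio implies physical sizes
# in a 1:4 ratio -- only motion parallax can disambiguate the pair.
```

This is why the fourfold size range is the benchmark in the abstract: if parallax fully determined scale, judged size should also vary fourfold, rather than by the observed factors of 1.67 and 1.18.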

  • Conference Article
  • Cited by 1
  • 10.1117/12.2006663
Development of super-multiview head-up display and evaluation of motion parallax smoothness
  • Mar 12, 2013
  • Hiroyuki Nishio + 1 more

A super multi-view head-up display (SMV-HUD) was developed. The smooth motion parallax provided by the SMV technique enables precise superposition of three-dimensional (3D) images on real objects. The developed SMV-HUD was used to explore display conditions that provide smooth motion parallax. It had three configurations that display 3D images in short-, medium-, and long-distance ranges, corresponding to the supposed usage of PC monitors, TVs, and public viewing, respectively. A subjective evaluation was performed by changing the depth of the 3D images and the interval of the viewing points. The interval of viewing points was changed by displaying identical parallax images to succeeding viewing points. We found that the ratio of the image shift between adjacent parallax images to the pixel pitch of the 3D images dominated the perception of unnatural motion parallax. When the ratio was smaller than 0.2, the discontinuity was not perceived. When the ratio was larger than 1, the discontinuity was always perceived and the 3D resolution decreased by a factor of two at the transition points between viewing points. When the crosstalk between viewing points was relatively large, the discontinuity was not perceived even when the ratio was one or two, although the resolution decreased by a factor of two or three throughout the viewing region.
