The Role of Visual, Proprioceptive, and Vestibular Information in Distance Estimation in Peripersonal Space

Abstract

Context and relevance. The accuracy of object distance estimation in distant space is affected by the integration of visual, proprioceptive, and vestibular information. Objective: to examine the contribution of visual, proprioceptive, and vestibular information to estimating the egocentric distance of an object in peripersonal space. Hypothesis. Reliance on the integration of visual and proprioceptive information will predominantly determine the accuracy of estimating the distance of objects in peripersonal space. Methods and materials. Twenty-two participants estimated the egocentric distance of a stimulus positioned at 20, 40, and 60 cm. Three tasks were used: the guidance task (GT; visual information only), the verbal assessment task (VAT; visual information and higher cognitive processes), and the motor reproduction task (MRT; visual and proprioceptive information). In half of the experimental conditions, the subjects were rotated around their vertical axis, which deprived them of reliable vestibular information. Results. Subjects estimated the stimulus distance most accurately when they integrated visual and proprioceptive information (MRT). When relying only on visual information (GT), they overestimated the stimulus distance, while when relying on a combination of visual information and higher cognitive processes (VAT), they consistently underestimated it. Deprivation of vestibular information reduced the differences in estimation errors between the three tasks. Conclusions. Accurate estimation of egocentric distance relies on observers integrating all of the sensory information available to them.
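The over- and underestimation pattern in the Results maps directly onto a constant (signed) error computation. Below is a minimal sketch with hypothetical response data; the task labels follow the abstract, but every number is illustrative, and this is not the authors' data or analysis code:

```python
# Constant (signed) error per task: > 0 means overestimation (as reported
# for GT), < 0 means underestimation (as reported for VAT).
import statistics

target_cm = 40.0  # one of the three stimulus distances (20, 40, 60 cm)

# Hypothetical judged/reproduced distances in cm, illustrative only.
responses = {
    "GT":  [44.1, 45.3, 43.0],   # guidance task: visual information only
    "VAT": [35.2, 36.8, 34.5],   # verbal assessment: vision + cognition
    "MRT": [39.6, 40.4, 39.9],   # motor reproduction: vision + proprioception
}

for task, values in responses.items():
    ce = statistics.mean(v - target_cm for v in values)
    print(f"{task}: constant error = {ce:+.1f} cm")
```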

Similar Papers
  • Research Article
  • Cited by 76
  • 10.1016/s0926-6410(96)00044-4
Egocentric perception through interaction among many sensory systems
  • Dec 1, 1996
  • Cognitive Brain Research
  • Masao Ohmi

  • Research Article
  • Cited by 269
  • 10.1007/bf00227302
How humans combine simultaneous proprioceptive and visual position information.
  • Sep 1, 1996
  • Experimental Brain Research
  • Robert J Van Beers + 2 more

To enable us to study how humans combine simultaneously present visual and proprioceptive position information, we had subjects perform a matching task. Seated at a table, they placed their left hand under the table concealing it from their gaze. They then had to match the proprioceptively perceived position of the left hand using only proprioceptive, only visual or both proprioceptive and visual information. We analysed the variance of the indicated positions in the various conditions. We compared the results with the predictions of a model in which simultaneously present visual and proprioceptive position information about the same object is integrated in the most effective way. The results are in disagreement with the model: the variance of the condition with both visual and proprioceptive information is smaller than expected from the variances of the other conditions. This means that the available information was integrated in a highly effective way. Furthermore, the results suggest that additional information was used. This information might have been visual information about body parts other than the fingertip or it might have been visual information about the environment.
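The abstract does not spell out the model's prediction; the standard minimum-variance (maximum-likelihood) formulation of optimal cue combination, assumed here, gives the benchmark the variances were compared against:

```latex
% Minimum-variance (maximum-likelihood) prediction for the combined estimate:
\sigma^2_{VP} = \frac{\sigma^2_{V}\,\sigma^2_{P}}{\sigma^2_{V} + \sigma^2_{P}}
             \le \min\left(\sigma^2_{V},\, \sigma^2_{P}\right)
```

Since this is already the lowest variance achievable from the two cues alone, an observed bimodal variance below it implies that additional information (e.g., vision of other body parts or of the environment, as the authors suggest) entered the estimate.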

  • Research Article
  • Cited by 692
  • 10.1152/jn.1999.81.3.1355
Integration of proprioceptive and visual position-information: An experimentally supported model.
  • Mar 1, 1999
  • Journal of Neurophysiology
  • Robert J Van Beers + 2 more

To localize one's hand, i.e., to find out its position with respect to the body, humans may use proprioceptive information or visual information or both. It is still not known how the CNS combines simultaneous proprioceptive and visual information. In this study, we investigate in what position in a horizontal plane a hand is localized on the basis of simultaneous proprioceptive and visual information and compare this to the positions in which it is localized on the basis of proprioception only and vision only. Seated at a table, subjects matched target positions on the table top with their unseen left hand under the table. The experiment consisted of three series. In each of these series, the target positions were presented in three conditions: by vision only, by proprioception only, or by both vision and proprioception. In one of the three series, the visual information was veridical. In the other two, it was modified by prisms that displaced the visual field to the left and to the right, respectively. The results show that the mean of the positions indicated in the condition with both vision and proprioception generally lies off the straight line through the means of the other two conditions. In most cases the mean lies on the side predicted by a model describing the integration of multisensory information. According to this model, the visual information and the proprioceptive information are weighted with direction-dependent weights, the weights being related to the direction-dependent precision of the information in such a way that the available information is used very efficiently. Because the proposed model also can explain the unexpectedly small sizes of the variable errors in the localization of a seen hand that were reported earlier, there is strong evidence to support this model. The results imply that the CNS has knowledge about the direction-dependent precision of the proprioceptive and visual information.
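The "direction-dependent weights ... related to the direction-dependent precision" correspond to precision-weighted Gaussian fusion in the plane. A minimal sketch of that assumed formalization (illustrative covariances, not the authors' code):

```python
# Precision-weighted fusion of 2-D position estimates with anisotropic noise:
# each cue dominates along the directions in which it is more precise.
import numpy as np

def fuse(mu_v, cov_v, mu_p, cov_p):
    prec_v, prec_p = np.linalg.inv(cov_v), np.linalg.inv(cov_p)
    cov = np.linalg.inv(prec_v + prec_p)          # combined covariance
    mu = cov @ (prec_v @ mu_v + prec_p @ mu_p)    # direction-dependent weights
    return mu, cov

# Illustrative: vision precise in azimuth (x), proprioception precise in depth (y).
mu, cov = fuse(np.array([0.0, 0.0]), np.diag([0.2, 2.0]),
               np.array([1.0, 1.0]), np.diag([2.0, 0.2]))
print(mu)  # ~[0.09, 0.91]: x stays near vision's 0.0, y near proprioception's 1.0
```

Because the weights differ by direction, the fused mean generally lies off the straight line through the two unimodal means, which is exactly the signature the study tests for.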

  • Research Article
  • Cited by 2
  • 10.3389/fnhum.2021.702519
The Relative Contributions of Visual and Proprioceptive Inputs on Hand Localization in Early Childhood.
  • Oct 7, 2021
  • Frontiers in Human Neuroscience
  • Natasha Ratcliffe + 4 more

Forming an accurate representation of the body relies on the integration of information from multiple sensory inputs. Both vision and proprioception are important for body localization. Whilst adults have been shown to integrate these sources in an optimal fashion, few studies have investigated how children integrate visual and proprioceptive information when localizing the body. The current study used a mediated reality device called MIRAGE to explore how the brain weighs visual and proprioceptive information in a hand localization task across early childhood. Sixty-four children aged 4–11 years estimated the position of their index finger after viewing congruent or incongruent visuo-proprioceptive information regarding hand position. A developmental trajectory analysis was carried out to explore the effect of age on condition. An age effect was only found in the incongruent condition, which resulted in greater mislocalization of the hand toward the visual representation as age increased. Estimates by younger children were closer to the true location of the hand compared to those by older children, indicating less weighting of visual information. Regression analyses showed localization errors in the incongruent seen condition could not be explained by proprioceptive accuracy or by general attention or social differences. This suggests that the way in which visual and proprioceptive information are integrated optimizes throughout development, with the bias toward visual information increasing with age.

  • Research Article
  • Cited by 10
  • 10.1167/jov.20.10.15
Roles of visual and non-visual information in the perception of scene-relative object motion during walking
  • Oct 14, 2020
  • Journal of Vision
  • Mingyang Xie + 3 more

Perceiving object motion during self-movement is an essential ability of humans. Previous studies have reported that the visual system can use both visual information (such as optic flow) and non-visual information (such as vestibular, somatosensory, and proprioceptive information) to identify and globally subtract the retinal motion component due to self-movement to recover scene-relative object motion. In this study, we used a motion-nulling method to directly measure and quantify the contribution of visual and non-visual information to the perception of scene-relative object motion during walking. We found that about 50% of the retinal motion component of the probe due to translational self-movement was removed with non-visual information alone and about 80% with visual information alone. With combined visual and non-visual information, the self-movement component was removed almost completely. Although non-visual information played an important role in the removal of self-movement-induced retinal motion, it was associated with decreased precision of probe motion estimates. We conclude that neither non-visual nor visual information alone is sufficient for the accurate perception of scene-relative object motion during walking, which instead requires the integration of both sources of information.
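The motion-nulling logic can be summarized with a single compensation gain g: the visual system subtracts a fraction g of the self-movement component from the probe's retinal motion. A sketch under that assumed formalization (not the authors' code; the percentages echo the abstract):

```python
# Perceived object motion after partial subtraction of the self-movement
# component; the nulling point (perceived motion == 0) recovers the gain g.
def perceived_object_motion(retinal_motion, self_motion_component, g):
    return retinal_motion - g * self_motion_component

self_motion = 10.0  # deg/s, illustrative magnitude
for label, g in [("non-visual only (~50%)", 0.5),
                 ("visual only (~80%)", 0.8),
                 ("combined (~100%)", 1.0)]:
    null_point = g * self_motion
    print(f"{label}: probe appears scene-stationary at {null_point:.1f} deg/s")
```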

  • Research Article
  • Cited by 508
  • 10.1016/j.cub.2008.04.036
Young Children Do Not Integrate Visual and Haptic Form Information
  • May 1, 2008
  • Current Biology
  • Monica Gori + 3 more

  • Research Article
  • Cited by 76
  • 10.1007/s00221-011-2743-7
Proprioceptive integration and body representation: insights into dancers’ expertise
  • Jun 4, 2011
  • Experimental Brain Research
  • Corinne Jola + 2 more

The experience of the body as a single coherent whole is based on multiple local sensory signals, integrated across different sensory modalities. We investigated how local information is integrated to form a single body representation and also compared the contribution of proprioceptive and visual information both in expert dancers and non-dancer controls. A number of previous studies have focused on individual differences in proprioceptive acuity at single joints and reported inconsistent findings. We used the established endpoint position matching task to measure absolute and directional errors in matching the position of one hand with the other. The matching performance was tested in three different conditions, which involved different information about the target position: only proprioceptive information from a 'target' hand which could be either the left or the right, only visual information, or both proprioceptive and visual information. Differences in matching errors between these sensory conditions suggested that dancers show better integration of local proprioceptive signals than non-dancers. The dancers also relied more on proprioception when both proprioceptive and visual information about hand position were present.

  • Research Article
  • Cited by 2
  • 10.1142/s0219635213500301
A biologically inspired neural model for visual and proprioceptive integration including sensory training
  • Dec 1, 2013
  • Journal of Integrative Neuroscience
  • Maryam Saidi + 3 more

Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian inference for a single cause (source) and on causal Bayesian inference for two causes (for two senses, such as the visual and auditory systems). In this paper, a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which allows it to mimic the multisensory integration performed by neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioceptive training and eight subjects with visual training. The results show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) mean visual and proprioceptive errors both decrease with training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to attend to two senses. The experimental results in this paper agree with the results of the neural model simulation.
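For reference, the single-cause Gaussian form of the Bayesian inference the network is validated against looks like this (an assumed textbook formulation; the recurrent population-coding model itself is not reproduced here):

```python
# Reliability-weighted (Bayesian, single common cause) fusion of two cues.
def bayes_fuse(mu_v, var_v, mu_p, var_p):
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)  # reliability weight for vision
    mu = w_v * mu_v + (1 - w_v) * mu_p           # posterior mean
    var = 1 / (1 / var_v + 1 / var_p)            # posterior variance
    return mu, var

# Illustrative hand-position estimates (cm): vision is more reliable here,
# so the fused estimate lands closer to the visual one.
print(bayes_fuse(mu_v=10.0, var_v=1.0, mu_p=12.0, var_p=4.0))  # (10.4, 0.8)
```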

  • Research Article
  • 10.3389/fnins.2023.1274949
Humans gradually integrate sudden gain or loss of visual information into spatial orientation perception.
  • Jan 8, 2024
  • Frontiers in Neuroscience
  • Jamie Voros + 3 more

Vestibular and visual information is used in determining spatial orientation. Existing computational models of orientation perception focus on the integration of visual and vestibular orientation information when both are available. Differences in spatial orientation perception with and without visual information (i.e., in the dark) are well known, and computational models capture them. For example, during Earth-vertical yaw rotation at constant angular velocity without visual information, humans perceive their rate of rotation to decay; during the same sustained rotation with visual information, they can continue to perceive self-rotation more accurately. Prior to this study, there was no literature on human motion perception in which visual information suddenly became available or unavailable during self-motion. Via a well-verified psychophysical task, we obtained perceptual reports of self-rotation during various profiles of Earth-vertical yaw rotation. The task involved transitions in the availability of visual information (and control conditions with visual information available throughout the motion or unavailable throughout). We found that when visual orientation information suddenly became available, subjects gradually integrated the new visual information over ~10 seconds. In the opposite scenario (visual information suddenly removed), past visual information continued to influence subjects' perception of self-rotation for ~30 seconds. We present a novel computational model of orientation perception that is consistent with these experimental results. The gradual integration of a sudden loss or gain of visual information is achieved via low-pass filtering in the visual angular-velocity sensory conflict pathway. In conclusion, humans gradually integrate a sudden gain or loss of visual information into their existing perception of self-motion.
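The stated mechanism, low-pass filtering in the visual angular-velocity pathway, can be isolated in a few lines. A sketch with an illustrative time constant chosen to echo the ~10 s integration reported for a sudden gain of vision (not the authors' full model):

```python
# First-order low-pass filter: a sudden step in the availability of visual
# information is blended into the percept gradually rather than instantly.
def lowpass_step(prev, target, dt, tau):
    """One Euler step of dy/dt = (target - y) / tau."""
    return prev + dt * (target - prev) / tau

dt, tau = 0.1, 10.0   # seconds; tau is an assumed, illustrative value
visual_weight = 0.0   # lights off before t = 0
for _ in range(int(30 / dt)):                      # lights come on at t = 0
    visual_weight = lowpass_step(visual_weight, 1.0, dt, tau)
print(f"weight on vision after 30 s: {visual_weight:.2f}")  # ~0.95
```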

  • Research Article
  • Cited by 9
  • 10.1098/rsbl.2016.0196
Sensory feedback and coordinating asymmetrical landing in toads.
  • Jun 1, 2016
  • Biology Letters
  • S M Cox + 1 more

Coordinated landing requires anticipating the timing and magnitude of impact, which in turn requires sensory input. To better understand how cane toads, well known for coordinated landing, prioritize visual versus vestibular feedback during hopping, we recorded forelimb joint angle patterns and electromyographic data from five animals hopping under two conditions that were designed to force animals to land with one forelimb well before the other. In one condition, landing asymmetry was due to mid-air rolling, created by an unstable takeoff surface. In this condition, visual, vestibular and proprioceptive information could be used to predict asymmetric landing. In the other, animals took off normally, but landed asymmetrically because of a sloped landing surface. In this condition, sensory feedback provided conflicting information, and only visual feedback could appropriately predict the asymmetrical landing. During the roll treatment, when all sensory feedback could be used to predict an asymmetrical landing, pre-landing forelimb muscle activity and movement began earlier in the limb that landed first. However, no such asymmetries in forelimb preparation were apparent during hops onto sloped landings when only visual information could be used to predict landing asymmetry. These data suggest that toads prioritize vestibular or proprioceptive information over visual feedback to coordinate landing.

  • Research Article
  • Cited by 40
  • 10.1016/s0966-6362(02)00005-x
Adaptation of vibration-induced postural sway in individuals with Parkinson's disease.
  • Jan 22, 2002
  • Gait & Posture
  • Ann L Smiley-Oyen + 3 more

  • Research Article
  • Cited by 105
  • 10.1007/s00221-005-2389-4
Visual bias of unseen hand position with a mirror: spatial and temporal factors
  • Jul 20, 2005
  • Experimental Brain Research
  • Nicholas P Holmes + 1 more

Two experiments examined the integration of visual and proprioceptive information concerning the location of an unseen hand, using a mirror positioned along the midsagittal plane. In experiment 1, participants tapped the fingers of both hands in synchrony, while viewing the mirror-reflection of their left hand. After 6 s, participants made reaching movements to a target with their unseen right hand behind the mirror. Reaches were accurate when visually and proprioceptively specified hand positions were congruent prior to the reach, but significantly biased by vision when the visual location conflicted with the real location. This effect was independent of the target location and depended strongly upon the relative position of the mirror-reflected hand. In experiment 2, participants made reaching movements following 4, 8, or 12 s active visuomotor or passive visual exposure to the mirror, or following passive exposure without the mirror. Reaching was biased more by the visual location following active visuomotor compared to passive visual exposure, and this bias increased with the duration of visual exposure. These results suggest that the felt position of the hand depends upon an integrated, weighted sum of visual and proprioceptive information. Visual information is weighted more strongly under active visuomotor than passive visual exposure, and with increasing exposure duration to the mirror reflected hand.
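The "integrated, weighted sum" in the conclusion is easy to state explicitly; the abstract gives no equation, so the standard form below is an assumption, with the visual weight w_V increasing under active visuomotor exposure and with exposure duration:

```latex
% Felt hand position as a weighted sum of the two cues (0 <= w_V <= 1);
% w_V grows with active exposure and with duration of exposure to the mirror.
\hat{x}_{\mathrm{hand}} = w_V\, x_{\mathrm{vision}} + (1 - w_V)\, x_{\mathrm{proprioception}}
```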

  • Research Article
  • 10.1068/v970322
Fusion of Visual and Proprioceptive Information about Hand Position Prior to Movement
  • Aug 1, 1997
  • Perception
  • M Desmurget + 2 more

The problem of whether movement accuracy is better in the full open-loop condition (FOL; hand never visible) than in the static closed-loop condition (SCL; hand visible only prior to movement onset) remains widely debated. To investigate this controversial question, we studied conditions in which the visual information available to the subject prior to movement onset was strictly controlled. The results of our investigation showed that the accuracy improvement observed when human subjects were allowed to see their hand in the peripheral visual field prior to movement: (1) concerned only the variable errors; (2) did not depend on the simultaneous vision of the hand and target (hand and target viewed simultaneously vs. sequentially); (3) remained significant when pointing to proprioceptive targets; and (4) was not suppressed when the visual information was temporally (visual presentation for less than 300 ms) or spatially (vision of only the index fingertip) restricted. In addition, dissociating vision and proprioception with wedge prisms showed that a weighted hand position was used to program the hand trajectory. Considered together, these results suggest that: (i) knowledge of the initial upper-limb configuration or position is necessary to plan goal-directed movements accurately; (ii) static proprioceptive receptors are partially ineffective in providing an accurate estimate of limb posture and/or hand location relative to the body; and (iii) visual and proprioceptive information is not used in an exclusive way, but is combined to furnish an accurate representation of the state of the effector prior to movement.

  • Research Article
  • Cited by 4
  • 10.7717/peerj.11301
Assessing kinesthetic proprioceptive function of the upper limb: a novel dynamic movement reproduction task using a robotic arm
  • May 3, 2021
  • PeerJ
  • Kristof Vandael + 2 more

Background. Proprioception refers to the perception of the motion and position of the body or body segments in space. A wide range of proprioceptive tests exists, although tests that dynamically evaluate sensorimotor integration during upper-limb movement are scarce. We introduce a novel task to evaluate kinesthetic proprioceptive function during complex upper-limb movements using a robotic device. We aimed to evaluate the test–retest reliability of this newly developed Dynamic Movement Reproduction (DMR) task. Furthermore, we assessed the reliability of the commonly used Joint Reposition (JR) task of the elbow, evaluated the association between the two tasks, and explored the influence of visual information (viewing the arm movement or not) on performance in both tasks. Methods. During the DMR task, participants actively reproduced movement patterns while holding a handle attached to the robotic arm, with the device encoding the actual position throughout the movement. In the JR task, participants actively reproduced forearm positions, with the final arm position evaluated using an angle measurement tool. The differences between the target movement pattern/position and the reproduced movement pattern/position served as measures of accuracy. In study 1 (N = 23), pain-free participants performed both tasks in two test sessions, 24 h apart, both with and without visual information available (i.e., vision occluded using a blindfold). In study 2 (N = 64), an independent sample of pain-free participants performed the same tasks in a single session to replicate the findings regarding the association between the two tasks and the influence of visual information. Results. DMR task accuracy showed good-to-excellent test–retest reliability, while JR task reliability was poor: measurements did not remain sufficiently stable across testing days. The DMR and JR tasks were only weakly associated. Adding visual information (i.e., watching the arm movement) had different effects on performance in the two tasks: it increased JR accuracy but decreased DMR accuracy, though only when the DMR task started with visual information available (i.e., an order effect). Discussion. The DMR task's highly standardized protocol (largely automated), precise measurement, and involvement of the entire upper-limb kinetic chain (shoulder, elbow, and wrist joints) make it a promising tool. Moreover, the weak association between the JR and DMR tasks indicates that they likely capture distinct aspects of proprioceptive function: while the former mainly captures position sense, the latter appears to capture the sensorimotor integration processes underlying kinesthesia, largely independently of position sense. Finally, our results show that the integration of visual and proprioceptive information is not straightforward: additional visual information about arm movement does not necessarily make active movement reproduction more accurate; on the contrary, when the movement is complex, vision appears to make it worse.
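The DMR accuracy measure is described only qualitatively (the difference between target and reproduced movement patterns); a root-mean-square distance over time-aligned samples is one plausible formalization, assumed here rather than taken from the paper:

```python
# RMSE between a target trajectory and its reproduction (hypothetical
# formalization of the DMR accuracy score; not the authors' scoring code).
import math

def reproduction_error(target_xy, reproduced_xy):
    """Root-mean-square Euclidean distance over time-aligned samples."""
    assert len(target_xy) == len(reproduced_xy)
    sq = [(tx - rx) ** 2 + (ty - ry) ** 2
          for (tx, ty), (rx, ry) in zip(target_xy, reproduced_xy)]
    return math.sqrt(sum(sq) / len(sq))

target = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5)]       # encoded by the robot arm
reproduced = [(0.1, -0.1), (1.2, 0.4), (1.8, 1.7)]  # participant's reproduction
print(f"RMSE = {reproduction_error(target, reproduced):.3f}")  # ~0.224
```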

  • Research Article
  • Cited by 177
  • 10.1016/s0306-4522(01)00099-9
The interaction of visual and proprioceptive inputs in pointing to actual and remembered targets in Parkinson's disease.
  • Jul 1, 2001
  • Neuroscience
  • S.V Adamovich + 4 more
