Abstract

Our ability to perceive motion on the skin is key to manipulating dynamic objects in the environment. Previous studies show that the brain derives tactile motion representations by integrating the local motion cues (e.g., speed, intensity, direction) of an object impinging on the skin, a mechanism known as the Full Vector Average model [1]. However, these studies all tested the hand in a single, fixed posture, whereas object perception and manipulation with the hand (i.e., haptics) are highly dynamic, goal-directed functions. It is therefore important to establish whether tactile motion perception is transformed by hand position, and whether these transformations depend on the reference frame in which the motion judgement is made. Here, we asked human participants to discriminate motion stimuli on the index finger in two reference frames (hand-centric vs. sternum-centric), with the hand placed in different positions. We found that human observers can systematically represent tactile motion under explicitly instructed reference frames. We further showed that these tactile motion judgements can be accurately decoded using a Bayesian generative model.
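The abstract only names the Full Vector Average model and the Bayesian decoding approach; the Python sketch below is an illustration of the kind of computation each implies, not the authors' implementation. The function names (full_vector_average, bayes_decode), the von Mises likelihood, the flat prior over directions, and the concentration value kappa are all illustrative assumptions.

```python
import numpy as np

def full_vector_average(directions_deg, speeds, weights=None):
    """Weighted vector average of local motion cues (illustrative sketch).

    directions_deg : local motion directions, in degrees
    speeds         : local speeds (vector magnitudes)
    weights        : optional per-cue weights (e.g., contact intensity)
    Returns the direction (degrees) and mean magnitude of the average vector.
    """
    d = np.deg2rad(np.asarray(directions_deg, float))
    w = np.ones_like(d) if weights is None else np.asarray(weights, float)
    v = np.asarray(speeds, float) * w
    # Sum the local motion vectors, then read off the resultant direction.
    vx, vy = np.sum(v * np.cos(d)), np.sum(v * np.sin(d))
    return np.rad2deg(np.arctan2(vy, vx)) % 360.0, np.hypot(vx, vy) / len(d)

def bayes_decode(observed_deg, kappa=4.0, grid_deg=np.arange(0, 360)):
    """Posterior over global motion direction given noisy local samples,
    assuming a von Mises likelihood (concentration kappa) and a flat prior."""
    obs = np.deg2rad(np.asarray(observed_deg, float))
    grid = np.deg2rad(np.asarray(grid_deg, float))
    # Log-likelihood of each candidate direction, up to an additive constant.
    loglik = kappa * np.cos(obs[:, None] - grid[None, :]).sum(axis=0)
    post = np.exp(loglik - loglik.max())   # subtract max for numerical stability
    return grid_deg, post / post.sum()

# Example: three local cues pulling the percept toward roughly 45 degrees.
direction, magnitude = full_vector_average([30, 45, 60], speeds=[1.0, 2.0, 1.0])
grid, posterior = bayes_decode([30, 45, 60])
print(direction, grid[np.argmax(posterior)])
```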
