Abstract
Recent interest in dynamic sound localization models has created a need to better understand the head movements made by humans. Previous studies have shown that static head positions and small oscillations of the head obey Donders' law: for each facing direction there is one unique three-dimensional orientation. It is unclear whether this same constraint applies to audiovisual localization, where head movement is unrestricted and subjects may rotate their heads depending on the available auditory information. In an auditory-guided visual search task, human subjects were instructed to localize an audiovisual target within a field of visual distractors in the frontal hemisphere. During this task, head and torso movements were monitored with a motion capture system. Head rotations were found to follow Donders' law during search tasks. Individual differences were present in the amount of roll that subjects deployed, though there was no statistically significant improvement in model performance when including these individual differences in a gimbal model. The roll component of head rotation could therefore be predicted with a truncated Fick gimbal, which consists of a pitch axis nested within a yaw axis. This led to a reduction from three to two degrees of freedom when modeling head movement during localization tasks.

NEW & NOTEWORTHY
Understanding how humans utilize head movements during sound localization is crucial for the advancement of auditory perception models and improvement of practical applications like hearing aids and virtual reality systems. By analyzing head motion data from an auditory-guided visual search task, we concluded that findings from earlier studies on head movement can be generalized to audiovisual localization and, from this, proposed a simple model for head rotation that reduced the number of degrees of freedom.
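To illustrate the truncated Fick gimbal described above, the following is a minimal sketch (not taken from the paper; the function name and angle conventions are assumptions) of how a pitch axis nested within a yaw axis determines the full three-dimensional head orientation from only two angles, with no independent roll degree of freedom.

```python
import numpy as np

def truncated_fick_orientation(yaw_deg, pitch_deg):
    """Head-to-space rotation matrix for a truncated Fick gimbal:
    a pitch axis nested within a yaw axis, with roll fixed at zero.
    Conventions (assumed for illustration): z is the space-fixed
    vertical axis, y is the interaural axis, angles in degrees."""
    y, p = np.radians([yaw_deg, pitch_deg])
    # Yaw about the space-fixed vertical (z) axis
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0,        0.0,       1.0]])
    # Pitch about the yaw-rotated interaural (y) axis
    Ry = np.array([[ np.cos(p), 0.0, np.sin(p)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(p), 0.0, np.cos(p)]])
    # Nesting order: pitch is applied within the yawed frame
    return Rz @ Ry

# Example: the complete 3-D orientation follows from just two angles,
# so head movement is modeled with two rather than three degrees of freedom.
R = truncated_fick_orientation(yaw_deg=30.0, pitch_deg=-20.0)
print(np.round(R, 3))
```

Because the orientation is fully determined by the facing direction (yaw and pitch), this parameterization satisfies Donders' law by construction: any roll measured in another coordinate system is a fixed function of those two angles rather than a free parameter.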