Abstract

Auditory and visual localization of a real sound source under both natural and artificial listening conditions was examined. Five subjects performed a visual search task that included aurally aided conditions, in which white noise was emitted at 72 dB(A) from a single loudspeaker located at the subjects' horizon. The paradigm also included conditions in which a fully enclosed helmet occluded the pinnae, creating artificial listening conditions that transform normal head-related transfer functions (HRTFs). Reaction time and accuracy were measured simultaneously in each condition to assess the extent to which the artificial listening conditions transformed normal HRTFs. A significant difference was found between the natural and artificial conditions [F(8,32) = 3.59; p = 0.005]: both accuracy and reaction time increased between the natural and transformed HRTF conditions. (The experiment began with six subjects, but due to scheduling difficulties the sixth subject's data were not available when this abstract was written; those data will be included in the analysis for the presentation.)
