Abstract
In an effort to simplify human resource management and reduce costs, control towers are increasingly designed to operate not at the airport itself but remotely. This concept, known as the Remote Control Tower, offers a "digital" working context because the view of the runways is broadcast remotely via cameras installed at the physical airport. This gives researchers and engineers the opportunity to develop novel interaction techniques. However, this technology relies heavily on the sense of sight, which already carries most of the operator's information and interaction load and is becoming overloaded. In this paper, we focus on the design and testing of new forms of interaction that rely on the human senses of hearing and touch. More precisely, our study aims to quantify the contribution of a multimodal interaction technique, based on spatial sound and vibrotactile feedback, to improving aircraft localization. Applied to the Remote Tower environment, the final purpose is to enhance Air Traffic Controllers' perception and increase safety. Three interaction modalities were compared by involving 22 Air Traffic Controllers in a simulated environment. The experimental task consisted of locating aircraft at different airspace positions using the senses of hearing and touch, under two visibility conditions. In the first modality (spatial sound only), all sound sources (i.e., aircraft) had the same amplification factor. In the second modality (called Audio Focus), the amplification factor of the sound sources located along the sagittal axis of the participant's head was increased, while the intensity of the sound sources located outside this axis was decreased. In the last modality, Audio Focus was coupled with vibrotactile feedback to additionally indicate the vertical positions of aircraft. Behavioral results (accuracy and response time measurements) and subjective results (questionnaires) showed significantly higher performance in poor visibility when using the Audio Focus interaction. In particular, interactive spatial sound gave participants notably higher localization accuracy in degraded visibility than spatial sound alone, and accuracy improved further when coupled with vibrotactile feedback. Meanwhile, response times were significantly longer with the Audio Focus modality (with or without vibrotactile feedback), while remaining acceptably short. This study can be seen as the first step in the development of a novel interaction technique that uses sound as a means of localization when the sense of sight alone is not enough.
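The gain modulation behind Audio Focus can be illustrated with a minimal sketch. The abstract does not specify the exact gain law, so the focus width, boost, and attenuation values below, as well as the linear falloff inside the focus cone, are hypothetical choices for illustration only:

```python
import math

def audio_focus_gain(source_azimuth_deg: float,
                     head_yaw_deg: float,
                     focus_width_deg: float = 30.0,
                     boost: float = 2.0,
                     attenuation: float = 0.5) -> float:
    """Hypothetical Audio Focus gain law (illustrative, not the paper's).

    Sources within `focus_width_deg` of the head's sagittal
    (straight-ahead) axis are amplified; sources outside it are
    attenuated.
    """
    # Angular deviation between the source and the head direction,
    # wrapped to [-180, 180) degrees.
    deviation = (source_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(deviation) <= focus_width_deg:
        # Linearly interpolate from full boost (on-axis) down to
        # unity gain at the edge of the focus cone.
        t = abs(deviation) / focus_width_deg  # 0 on-axis, 1 at the edge
        return boost + (1.0 - boost) * t
    return attenuation  # outside the focus cone: reduce intensity


if __name__ == "__main__":
    # Head facing 0 degrees; simulated aircraft at 5, 25 and 90 degrees azimuth.
    for azimuth in (5.0, 25.0, 90.0):
        gain = audio_focus_gain(azimuth, head_yaw_deg=0.0)
        print(f"aircraft at {azimuth:5.1f} deg -> gain {gain:.2f}")
```

Under this assumed law, an aircraft nearly on-axis is rendered close to twice as loud, while off-axis traffic is halved, which matches the behavior the abstract describes: steering the head toward a source makes it stand out from the soundscape.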