Abstract

There is little guidance for designers on how to map information requirements to tactile displays. In this paper, we propose new directions for mapping information requirements to tactile displays, based on semantic mapping techniques used in auditory and visual displays. We discuss these techniques in relation to the design of a multimodal ground control station (GCS) for unmanned aerial vehicles (UAVs) to improve the visually dominated GCS interface. We hope that this approach will encourage the design of better, safer, and more intuitive UAV GCS interfaces and reduce the frequency of mishaps related to human error.
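
To illustrate the kind of semantic mapping the abstract refers to, the sketch below shows one possible way to encode UAV alert urgency in vibrotactile parameters, analogous to how auditory and visual displays encode urgency. The alert categories, parameter names, and value ranges are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a semantic mapping from UAV ground-control-station
# alerts to tactile display parameters. All categories, parameter names, and
# value ranges below are assumptions for illustration only.

from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    ADVISORY = 1
    CAUTION = 2
    WARNING = 3


@dataclass
class TactileCue:
    """Parameters a simple vibrotactile actuator array might expose."""
    intensity: float   # normalized drive amplitude, 0.0-1.0
    frequency_hz: int  # vibration frequency in Hz
    pulses: int        # number of pulses in the pattern
    body_site: str     # actuator location on the operator

# Semantic mapping: more urgent alerts get stronger, faster, more insistent
# cues, mirroring urgency coding in auditory and visual displays.
SEVERITY_TO_CUE = {
    Severity.ADVISORY: TactileCue(0.3, 80, 1, "wrist"),
    Severity.CAUTION:  TactileCue(0.6, 150, 2, "forearm"),
    Severity.WARNING:  TactileCue(1.0, 250, 4, "torso"),
}


def cue_for_alert(severity: Severity) -> TactileCue:
    """Look up the tactile rendering for a given alert severity."""
    return SEVERITY_TO_CUE[severity]


if __name__ == "__main__":
    for sev in Severity:
        print(sev.name, cue_for_alert(sev))
```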
