Abstract

Survey knowledge of spatial environments can be successfully conveyed by visual maps. For visually impaired people, tactile maps have been proposed as a substitute, but tactile maps are hard to read and to understand. This paper proposes how these cognitive disadvantages can be compensated for by Verbally Annotated Tactile (VAT) maps. VAT maps combine two representational components: a verbal annotation system as a propositional component and a tactile map as a spatial component. It is argued that users will benefit from the cross-modal interaction of the two. A pilot study shows that tactile You-Are-Here maps, which implement only the spatial component, are not optimal. I argue that some of the observed problems can be compensated for by incorporating verbal annotations. Finally, research questions on cross-modal interaction in VAT maps are formulated that address the challenges to be overcome in order to benefit from the propositional and spatial representations induced by such maps.

Keywords: verbal annotation, tactile map, representation, navigation, representational modality, multimodality
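As a purely illustrative aside (not part of the paper), the sketch below shows one way the two representational components of a VAT map could be modelled in data: tactile features carry spatial coordinates, and each may be linked to a verbal annotation that can be read out on demand. All names here (`VATMap`, `TactileFeature`, `VerbalAnnotation`, `describe`) are hypothetical and only meant to make the propositional/spatial split concrete.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class VerbalAnnotation:
    """Propositional component: a short verbal description tied to a map feature."""
    text: str


@dataclass
class TactileFeature:
    """Spatial component: a raised symbol located in map coordinates."""
    name: str
    x: float
    y: float
    annotation: Optional[VerbalAnnotation] = None


@dataclass
class VATMap:
    """A Verbally Annotated Tactile map: tactile features plus their annotations."""
    features: List[TactileFeature] = field(default_factory=list)

    def describe(self, feature_name: str) -> str:
        """Return the verbal annotation for a feature, falling back to its name."""
        for feature in self.features:
            if feature.name == feature_name:
                return feature.annotation.text if feature.annotation else feature.name
        raise KeyError(feature_name)


# Usage: a You-Are-Here marker whose spatial position is paired with a verbal route hint.
vat = VATMap(features=[
    TactileFeature("you-are-here", 0.5, 0.2,
                   VerbalAnnotation("You are at the main entrance, facing north.")),
    TactileFeature("elevator", 0.8, 0.6,
                   VerbalAnnotation("The elevator is ahead and to your right.")),
])
print(vat.describe("you-are-here"))
```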
