Abstract

This work investigates whether large-scale indoor layouts can be learned and navigated non-visually, using verbal descriptions of layout geometry that are updated contingent on a participant's location in the building. In previous research, verbal information has been used to facilitate route following, not to support free exploration and wayfinding. Our results with blindfolded sighted participants demonstrate that accurate learning and wayfinding are possible using verbal descriptions, and that describing only local geometric detail is sufficient. In addition, no differences in learning or navigation performance were observed between the verbal study and a control study using visual input. Verbal learning was also compared with the performance of a random walk model, demonstrating that human search behavior is not based on chance decision-making. However, the model performed more like human participants after a constraint was added that biased it against reversing direction.
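The constrained random walk can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the authors' implementation: the corridor graph, the function name `biased_random_walk`, and the `reversal_weight` parameter are all hypothetical. At each junction the walker picks a neighboring node at random, with the node it just came from down-weighted to discourage reversing direction.

```python
import random

def biased_random_walk(adjacency, start, goal,
                       reversal_weight=0.1, max_steps=1000, seed=None):
    """Walk a corridor graph from start toward goal.

    At each junction the next node is drawn at random from the
    current node's neighbors, but the node just visited is
    down-weighted by `reversal_weight` (a hypothetical parameter),
    biasing the walker against reversing direction.
    """
    rng = random.Random(seed)
    path = [start]
    prev, current = None, start
    for _ in range(max_steps):
        if current == goal:
            return path
        neighbors = adjacency[current]
        # Uniform choice, except the previous node is penalized.
        weights = [reversal_weight if n == prev else 1.0 for n in neighbors]
        nxt = rng.choices(neighbors, weights=weights, k=1)[0]
        prev, current = current, nxt
        path.append(current)
    return path  # goal not reached within max_steps

# Toy corridor network: nodes are junctions, edges are hallways.
layout = {
    "A": ["B"],
    "B": ["A", "C", "D"],
    "C": ["B"],
    "D": ["B", "E"],
    "E": ["D"],
}
print(biased_random_walk(layout, start="A", goal="E", seed=42))
```

Setting `reversal_weight=1.0` recovers an unconstrained random walk, so the two model variants compared in the study can be reproduced, in spirit, by varying this single parameter.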
