Abstract

Research on spatial cognition and navigation in the visually impaired suggests that vision may be a primary sensory modality that enables humans to align the egocentric (self-to-object) and allocentric (object-to-object) frames of reference in space. In the absence of vision, the frames align best in the haptic space. In the locomotor space, as the haptic space translates with the body, the lack of vision causes the frames to misalign, which negatively affects action reliability. In this paper, we argue that robots can function as interfaces to the haptic and locomotor spaces in supermarkets. In the locomotor space, the robot eliminates the need for frame alignment and, in or near the haptic space, it cues the shopper to the salient features of the environment sufficient for product retrieval. We present a trichotomous ontology of supermarket spaces induced by the presence of a robotic shopping assistant and analyze the results of robot-assisted shopping experiments conducted with ten visually impaired participants in a real supermarket.
