Abstract

Assistive robot technology is advancing rapidly, and with continued technological progress robot companions are becoming increasingly human-like. Personal assistive robot applications have expanded to a wide range of domains, including medical robots and supportive companion robots for disabled or elderly people. An assistive robot must be able to navigate in an unknown environment. To acquire such competence, the robot should be able to create spatial cognitive maps and virtual maps. Using these constructed maps together with the actual spatial map obtained from sensory inputs such as a laser scanner, the robot should be able to identify objects without perceiving any visual information. Therefore, this paper proposes a method that uses the spatial cognitive map to create a virtual visualization of a previously unknown environment based on spatial data conveyed through interactive conversation with the user, and links that information with the actual spatial map obtained from the laser scanner to identify the positions of objects in a domestic environment. A Conceptual Map Creator (CMC) and a Virtual Spatial Map Link Creating (VISMALC) module are introduced to combine the cognitive virtual maps with the actual spatial map and identify objects without using any visual information. The capabilities of the robot have been demonstrated and validated through experimental results.
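To make the linking idea concrete, the minimal sketch below shows one way a dialogue-derived conceptual map (object names plus user-stated spatial relations to known landmarks) could be matched against a laser-scan occupancy grid to estimate object positions. All names here (ConceptualMap, link_objects, the relation vocabulary, the thresholds) are hypothetical illustrations and not the paper's CMC/VISMALC implementation.

```python
# Hypothetical sketch: link a conversation-derived conceptual map with a
# laser-scan occupancy grid to guess object positions (illustrative only,
# not the paper's actual CMC / VISMALC modules).
from dataclasses import dataclass, field

import numpy as np


@dataclass
class ConceptualMap:
    """Objects and pairwise spatial relations stated by the user."""
    anchors: dict = field(default_factory=dict)    # known landmarks: name -> (x, y) in metres
    relations: list = field(default_factory=list)  # (object, relation, anchor), e.g. ("table", "near", "window")


def link_objects(cmap: ConceptualMap, occupancy: np.ndarray, resolution: float = 0.05):
    """Assign each mentioned object to the occupied grid cell that best satisfies
    its stated relation to a known anchor (relation handling is deliberately simple)."""
    occupied = np.argwhere(occupancy > 0.5) * resolution  # (row, col) indices -> metres
    positions = {}
    for obj, relation, anchor in cmap.relations:
        if anchor not in cmap.anchors or occupied.size == 0:
            continue
        ax, ay = cmap.anchors[anchor]
        dists = np.hypot(occupied[:, 1] - ax, occupied[:, 0] - ay)
        if relation == "near":
            idx = int(np.argmin(dists))   # closest occupied cell to the anchor
        else:                             # e.g. "far_from"
            idx = int(np.argmax(dists))
        positions[obj] = (float(occupied[idx, 1]), float(occupied[idx, 0]))
    return positions


# Example: a toy 4 m x 4 m occupancy grid with one occupied blob and the
# single user statement "the table is near the window".
grid = np.zeros((80, 80))
grid[20:24, 60:64] = 1.0
cmap = ConceptualMap(anchors={"window": (3.0, 1.0)},
                     relations=[("table", "near", "window")])
print(link_objects(cmap, grid))   # -> {'table': (3.0, 1.0)}
```

In practice the linking step would use richer spatial relations and uncertainty handling, but the sketch captures the core idea of grounding conversationally described objects in the metric map built from the laser scanner.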
