Abstract

We are developing a ‘gopher’ wheelchair robot that can serve as an aid for disabled individuals. The robot uses a shared control architecture in which the robot and the human user share responsibility for a retrieve-and-replace task. The interactive interface between the robot and the user is based on stereo video images. In addition, the stereo cameras serve as the primary sensor for detecting and tracking the targets that guide the robot's low-level servoing. The user is responsible for selecting objects or targets in the environment and then instructing the robot how to move relative to these targets. This paper first describes the hardware and the control interface of this human-robot system. The description then focuses on the system's video algorithms for tracking and evaluating targets. The system builds a binary shape model for each target selected by the user. It also forms a color mapping used to highlight the target in the image. This mapping is applied to subsequent images to create a binary image that can be quickly matched against the target's shape model. We have tested this tracking algorithm on videotaped image sequences and on several runs with our wheelchair mobile robot. Our initial results show that the algorithm is reasonably robust for the various types of edge and corner targets needed for navigation.
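
To make the tracking idea concrete, the sketch below illustrates one plausible reading of the pipeline described above: a color mapping built from the user-selected target patch, used to binarize later frames, followed by a match of the target's binary shape model against the binarized image. This is a minimal, assumption-laden illustration, not the authors' implementation; the quantized-RGB lookup table, the 16-bin quantization, the brute-force search, and all function names are placeholders.

    # Minimal sketch of the color-mapping + binary shape-matching idea.
    # All names, bin counts, and the brute-force search are assumptions.
    import numpy as np

    def build_color_map(patch, bins=16):
        """Mark the quantized RGB colors that occur in the user-selected target patch."""
        q = (patch // (256 // bins)).reshape(-1, 3)          # quantize each channel
        hist = np.zeros((bins, bins, bins), dtype=np.int64)
        np.add.at(hist, (q[:, 0], q[:, 1], q[:, 2]), 1)      # count target colors
        return hist > 0                                       # boolean "target color" table

    def binarize(image, color_map, bins=16):
        """Label pixels whose quantized color appeared in the target patch."""
        q = image // (256 // bins)
        return color_map[q[..., 0], q[..., 1], q[..., 2]]     # boolean H x W mask

    def match_shape(binary, shape_model):
        """Slide the binary shape model over the binarized image; return best offset and score."""
        h, w = shape_model.shape
        H, W = binary.shape
        best, best_pos = -1, (0, 0)
        for y in range(H - h + 1):                            # brute-force search window
            for x in range(W - w + 1):
                score = np.sum(binary[y:y + h, x:x + w] == shape_model)
                if score > best:
                    best, best_pos = score, (y, x)
        return best_pos, best / (h * w)

In use, build_color_map would be called once on the patch the user selects, and binarize followed by match_shape would run on each subsequent frame, with the normalized agreement score serving as a rough measure of how well the target is still being tracked.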
