Abstract

Visual navigation can handle complicated problems such as kidnapping, shadowing, and slipping. A low-cost video camera is particularly suitable for mobile home robots from the perspective of human-robot interaction, and it does not require disparity map computation. An efficient vision-based simultaneous localization and map building (SLAM) method is presented for home robots using a forward-facing monocular camera. This paper also presents a novel framework for the scale-invariant feature transform (SIFT), in which the difference-of-Gaussian (DoG)-based keypoint detection is replaced by a difference-of-wavelet (DoW) transform. The modified SIFT enables real-time operation and embedded implementations for home robot products. Two different types of home robots, a cleaning robot and a service robot, serve as test platforms for the proposed vision-based navigation. The experimental results show that the robots provide acceptable navigation performance in unstructured environments in real time.
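The abstract does not give implementation details, but the core idea of swapping the DoG pyramid of standard SIFT for a difference of wavelet approximations can be sketched. The following is a minimal illustration assuming Haar low-pass approximations implemented as box filters of doubling width, with simple local-extrema detection standing in for SIFT's keypoint selection; the helper names (haar_approximations, dow_responses, detect_keypoints), the number of levels, and the threshold are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a difference-of-wavelet (DoW) scale space, assuming Haar
# approximations computed with box filters of width 2, 4, 8, ... as a cheap
# stand-in for the Gaussian pyramid of standard SIFT. Names and parameters
# are illustrative, not from the paper.
import cv2
import numpy as np
from scipy.ndimage import maximum_filter


def haar_approximations(gray: np.ndarray, num_levels: int = 4) -> list[np.ndarray]:
    """Smooth the image with box filters of doubling width (Haar low-pass)."""
    gray = gray.astype(np.float32)
    approx = []
    for level in range(1, num_levels + 1):
        k = 2 ** level  # Haar approximation at scale 2^level
        approx.append(cv2.boxFilter(gray, ddepth=-1, ksize=(k, k)))
    return approx


def dow_responses(approx: list[np.ndarray]) -> list[np.ndarray]:
    """Difference of adjacent wavelet approximations, analogous to DoG layers."""
    return [b - a for a, b in zip(approx[:-1], approx[1:])]


def detect_keypoints(dow: list[np.ndarray], thresh: float = 4.0) -> list[tuple[int, int, int]]:
    """Keep pixels that are 3x3 local maxima of |DoW| and exceed a threshold."""
    keypoints = []
    for level, layer in enumerate(dow):
        mag = np.abs(layer)
        local_max = maximum_filter(mag, size=3)
        ys, xs = np.where((mag == local_max) & (mag > thresh))
        keypoints.extend((int(x), int(y), level) for x, y in zip(xs, ys))
    return keypoints


if __name__ == "__main__":
    # Synthetic test image: a bright square on a dark background.
    img = np.zeros((128, 128), dtype=np.uint8)
    img[48:80, 48:80] = 255
    kps = detect_keypoints(dow_responses(haar_approximations(img)))
    print(f"detected {len(kps)} candidate keypoints")
```

In a full pipeline such as the one the abstract describes, SIFT-style orientation assignment and descriptors would still be attached at the detected locations; the box-filter approximations above only replace the comparatively expensive Gaussian smoothing stage, which is what makes the approach attractive for real-time or embedded use.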
