Abstract

Technologies for estimating self-position and orientation are important for both humans and robots: they allow robots to perform tasks such as carrying objects and allow people to reach their destinations. Although self-position estimation technologies based on GPS and laser rangefinders have been developed, few methods can be used by both humans and robots. We therefore developed a method that estimates three-dimensional position and orientation using visual markers and an inertial measurement unit (IMU). Self-position can be measured with high accuracy using a visual marker and a monocular camera, but such measurements are discrete and sparse. An IMU, in contrast, measures acceleration continuously, but the acceleration data must be double-integrated to obtain position, so the position error grows over time. By combining the visual-marker and IMU information, the position error accumulated from the acceleration sensor can be corrected and the movement path of the object estimated. In demonstration experiments, the proposed method accurately estimated the three-dimensional movement distance when a person walked about 13 m, with an average error of about 40.3 mm.
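The abstract's core idea is a drift-correction loop: dead-reckon position by double-integrating IMU acceleration, then snap the estimate to the accurate but sparse visual-marker poses whenever one is observed. The sketch below illustrates that idea under simplifying assumptions; the function name, data layout, and the velocity re-estimation between fixes are illustrative choices, not the paper's implementation (which also estimates orientation). It assumes world-frame, gravity-compensated accelerations and uses plain Euler integration for clarity.

```python
import numpy as np

def fuse_imu_with_markers(accel, dt, marker_fixes):
    """Estimate a position track by double-integrating IMU acceleration,
    correcting drift whenever a visual-marker pose is available.

    accel        : (N, 3) world-frame accelerations in m/s^2
                   (gravity assumed already removed via the IMU orientation)
    dt           : IMU sample period in seconds
    marker_fixes : {sample_index: (3,) position in m} -- sparse, accurate
    Returns an (N, 3) array of estimated positions.
    """
    n = accel.shape[0]
    pos = np.zeros((n, 3))
    vel = np.zeros(3)
    p = np.zeros(3)
    last_idx, last_fix = None, None
    for k in range(n):
        if k in marker_fixes:
            fix = np.asarray(marker_fixes[k], dtype=float)
            if last_idx is not None:
                # Re-estimate velocity from the displacement between marker
                # fixes, discarding the drift accumulated by integration.
                vel = (fix - last_fix) / ((k - last_idx) * dt)
            p = fix  # the marker pose overrides the drifting estimate
            last_idx, last_fix = k, fix
        # Double integration: acceleration -> velocity -> position.
        vel = vel + accel[k] * dt
        p = p + vel * dt
        pos[k] = p
    return pos
```

The key design point, as in the abstract, is that the integration error grows quadratically between corrections, so even sparse marker observations keep the long-run trajectory bounded in error.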
