Abstract
Optical tracking provides relatively high accuracy over a large workspace but requires line-of-sight between the camera and the markers, which may be difficult to maintain in actual applications. In contrast, inertial sensing does not require line-of-sight but is subject to drift, which may cause large cumulative errors, especially in position measurement. To handle cases where some or all of the markers are occluded, this paper proposes an inertial and optical sensor fusion approach in which the bias of the inertial sensors is estimated when the optical tracker provides full six degree-of-freedom (6-DOF) pose information. As long as the position of at least one marker can be tracked by the optical system, the 3-DOF position can be combined with the orientation estimated from the inertial measurements to recover the full 6-DOF pose. When all the markers are occluded, position tracking relies on the inertial sensors, which are bias-corrected by the optical tracking system. Experiments are performed with an augmented reality head-mounted display (ARHMD) that integrates an optical tracking system (OTS) and an inertial measurement unit (IMU). Experimental results show that under partial occlusion, the root mean square errors (RMSE) of orientation and position are 0.04° and 0.134 mm, and under total occlusion lasting 1 s, the orientation and position RMSE are 0.022° and 0.22 mm, respectively. Thus, the proposed sensor fusion approach can provide reliable 6-DOF pose estimates under long-term partial occlusion and short-term total occlusion.
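The mode switching described above (full optical pose when enough markers are tracked, optical position plus inertial orientation under partial occlusion, bias-corrected inertial dead reckoning under total occlusion) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the quaternion convention, the `fuse_pose` interface, and the three-marker threshold for a full optical pose are assumptions.

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """Propagate quaternion q = [w, x, y, z] with a bias-corrected gyro rate (rad/s)."""
    wx, wy, wz = omega
    # Quaternion kinematics: q_dot = 0.5 * Omega(omega) * q
    Omega = np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)  # renormalize to counter integration drift

def fuse_pose(visible_markers, ots_pose, q_imu, p_imu):
    """Select the pose source by marker visibility (hypothetical interface).

    ots_pose is assumed to be a (quaternion, position) tuple from the optical tracker.
    """
    if visible_markers >= 3:
        # Full 6-DOF from the optical tracker; this is also when the IMU bias is estimated.
        return ots_pose
    elif visible_markers >= 1:
        # Partial occlusion: optical 3-DOF position + inertially estimated orientation.
        return (q_imu, ots_pose[1])
    else:
        # Total occlusion: dead-reckon with the bias-corrected IMU.
        return (q_imu, p_imu)
```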
Highlights
Accurate tracking of human pose is important in augmented reality applications such as entertainment, military training, and medical navigation
In [22] we proposed a sensor fusion approach in which the 9-axis measurements from the inertial measurement unit (IMU) are bias-corrected by an optical tracking system (OTS) and used to track orientation
This paper presents a sensor fusion approach that combines an OTS with an IMU
Summary
Accurate tracking of human pose is important in augmented reality applications such as entertainment, military training, and medical navigation. The high accuracy of an optical tracking system (OTS) can help remove the bias of the inertial sensors, while inertial tracking can improve robustness by capturing orientation or position when some of the markers are not visible. In [21], the authors propose fusing OTS and IMU measurements to estimate position and orientation during brief occlusions of the tracking markers, but only the accelerometer bias is estimated in the EKF. In [22] we proposed a sensor fusion approach in which the 9-axis measurements from the IMU are bias-corrected by an OTS and used to track orientation. An EKF is implemented to estimate orientation and position, with the bias-corrected inertial sensor data driving the system state and the OTS providing part of the measurement when at least one marker is visible. This paper is organized as follows: Section 2 describes the hybrid tracking system, the error model used to correct the inertial sensor data, and the sensor fusion algorithm applied to track the orientation and position of the target; Section 3 presents the experimental results and discussion; and Section 4 concludes the paper.
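As a rough illustration of the EKF structure described above, the following single-axis sketch uses the state [position, velocity, accelerometer bias], drives the prediction with the accelerometer reading, and treats the optical position as the measurement when a marker is visible. The state layout, noise values, and function names are assumptions; the paper's actual filter estimates the full 6-DOF pose from 9-axis IMU data.

```python
import numpy as np

def predict(x, P, a_meas, dt, q_acc=1e-2, q_bias=1e-6):
    """Time update: propagate [position, velocity, accel_bias] with the accelerometer."""
    # The bias column of F subtracts the estimated bias from the measured acceleration.
    F = np.array([[1.0, dt, -0.5 * dt**2],
                  [0.0, 1.0, -dt],
                  [0.0, 0.0, 1.0]])
    u = np.array([0.5 * dt**2, dt, 0.0]) * a_meas   # accelerometer as control input
    Q = np.diag([0.0, q_acc * dt, q_bias * dt])      # process noise (tuning assumption)
    return F @ x + u, F @ P @ F.T + Q

def update(x, P, z_pos, r_pos=1e-4):
    """Measurement update: the optical tracker observes position only."""
    H = np.array([[1.0, 0.0, 0.0]])
    S = H @ P @ H.T + r_pos                          # innovation covariance
    K = P @ H.T / S                                  # Kalman gain
    x = x + (K * (z_pos - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

When all markers are occluded, only `predict` runs and the position drifts with the remaining bias error; once a marker reappears, `update` pulls the state back toward the optical measurement.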