Abstract

This paper presents a novel wearable navigation system for blind and visually impaired users in unknown dynamic environments. Feature-based visual navigation and 3D reconstruction typically work well in static environments; here, a dynamic environment is considered, in which a moving object is viewed by a moving monocular camera equipped with inertial sensors. A novel feature-based method operating on a video sequence is proposed that estimates not only the camera's own motion but also the 3D motion of the moving object, so that the distance between the camera and the moving object can be inferred. First, the video sequence is segmented into static and dynamic regions using two geometric constraints: an AGOF-aided homography recovery constraint and an epipolar geometry constraint; this segmentation is the first key contribution of this paper. The motion region associated with each moving object can then be treated as a static object viewed by a "virtual camera", while features extracted from the static background are used to estimate the motion of the "real camera". The second key contribution addresses the scale ambiguity inherent in monocular camera tracking: the scale is first made globally consistent using a closed-form solution based on a 1-point algorithm and then recovered in metric units with the help of inertial measurements. The third key contribution is that, once the motions of the real and virtual cameras have been obtained, the 3D motion of the moving object can be derived from them, because the virtual camera's motion is the composition of the real camera's motion and the moving object's motion. As a result, blind users can avoid collisions with moving objects. Finally, we demonstrate the robustness and effectiveness of the proposed method through a series of experimental results.
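To make the third contribution concrete, the following is a minimal sketch of how an object pose could be recovered by composing the two estimated motions, assuming poses are represented as 4x4 homogeneous SE(3) transforms and NumPy is available; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def invert_se3(T):
    """Invert a 4x4 homogeneous transform with rotation R and translation t."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

def object_pose_in_world(T_world_realcam, T_object_virtualcam):
    """
    Recover the moving object's pose in the world frame (illustrative).

    T_world_realcam     : real camera pose in the world frame, estimated from
                          static-background features (metric scale from the IMU).
    T_object_virtualcam : "virtual camera" pose expressed in the object's own
                          frame, estimated from features on the moving object
                          as if that object were static.

    Since the virtual camera's motion composes the real camera's motion with
    the object's motion, T_world_realcam = T_world_object @ T_object_virtualcam,
    which gives T_world_object = T_world_realcam @ inv(T_object_virtualcam).
    """
    return T_world_realcam @ invert_se3(T_object_virtualcam)
```

Under these assumptions, applying the same relation at successive frames and chaining the resulting poses would yield the object's frame-to-frame 3D motion relative to the camera.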
