Abstract

Recent advances in sensing and display technologies are drastically transforming our living environments. In this paper, a new technique is introduced to accurately reconstruct indoor environments in three dimensions using a mobile platform. The system incorporates a scanning array of four ultrasonic sensors, an HD web camera, and an inertial measurement unit (IMU). The whole platform is mountable on mobile facilities, such as a wheelchair. The proposed mapping approach takes advantage of the precision of the 3D point clouds produced by the ultrasonic sensor system, despite their sparsity, to help build a more definite 3D scene. Using a robust iterative algorithm, it combines the 3D point clouds generated by structure from motion with those generated by the ultrasonic sensors and the IMU, deriving a much more precise point cloud from the ultrasonic depth measurements. Because the ultrasonic point clouds capture features of objects in the targeted scene, they support feature extraction across consecutive point clouds to ensure accurate alignment. The ranges measured by the ultrasonic sensors contribute to the depth correction of the generated 3D scenes. Experiments revealed that the system generates 3D maps of the environment that are not only dense but also precise. The results show that the designed 3D modeling platform can support assistive living through self-navigation, obstacle alerts, and other driving-assistance tasks.
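
The abstract does not name the robust iterative algorithm. A common choice for this kind of alignment is iterative closest point (ICP); the sketch below is a minimal illustration assuming the Open3D library (an assumption, not confirmed by the paper). It aligns the up-to-scale structure-from-motion cloud to the sparse but metric ultrasonic/IMU cloud, letting the ultrasonic ranges fix the absolute depth. The names `sfm_points` and `ultrasonic_points` and the 0.5 m correspondence threshold are illustrative placeholders.

```python
import numpy as np
import open3d as o3d  # assumed library; the paper does not name its tooling

def align_sfm_to_ultrasonic(sfm_points, ultrasonic_points, max_dist=0.5):
    """Align the (up-to-scale) SfM cloud to the metric ultrasonic/IMU cloud.

    Point-to-point ICP with scale estimation: the metric ultrasonic ranges
    fix the absolute depth of the reconstructed scene. A sketch, not the
    authors' implementation.
    """
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(np.asarray(sfm_points, dtype=float))
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(np.asarray(ultrasonic_points, dtype=float))

    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint(
            with_scaling=True))  # scale recovered from the metric ranges
    return source.transform(result.transformation)
```

In practice, the IMU pose estimate would likely seed the initial transformation instead of the identity, which helps ICP converge on sparse targets.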

Highlights

  • Using a robust iterative algorithm, the system combines the 3D point clouds generated by structure from motion with those generated by the ultrasonic sensors and the inertial measurement unit (IMU), deriving a much more precise point cloud from the ultrasonic depth measurements

  • The fusion is twofold: first, the data from the ultrasonic sensor array are fused to compute the distance to any object in the scene accurately after removing measurement noise and drift; second, the ultrasonic depth measurements are fused with the 3D point clouds generated from multiple calibrated 2D images captured by the mounted web camera, using existing methods

  • This article proposes a method for the 3D reconstruction of indoor environments using sensor fusion

Introduction

This work fuses multiple ultrasonic sensors and a camera to produce a better 3D reconstruction. The main idea is to build a platform that combines the advantages of low-cost, complementary sensors to achieve acceptable 3D reconstruction of indoor environments, with quality sufficient for the navigation and driving assistance of mobile platforms such as a wheelchair. The fusion is twofold: first, the data from the ultrasonic sensor array are fused to compute the distance to any object (or obstacle) in the scene accurately after removing measurement noise and drift; second, the ultrasonic depth measurements are fused with the 3D point clouds generated from multiple calibrated 2D images captured by the mounted web camera, using existing methods. A sketch of the first step follows below. The result is a precise, real-time 3D reconstruction of the indoor environment for mobile navigation and driving assistance.
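
The first fusion step (accurate distances from the four-sensor array after removing noise and drift) could look like the following minimal sketch. The function name `fuse_ultrasonic`, the sliding-window median, the linear drift model, and all parameter values are illustrative assumptions rather than the authors' design.

```python
import numpy as np

def fuse_ultrasonic(readings, window=5, drift_per_s=0.001, dt=0.05):
    """Fuse raw range readings (shape [T, 4], in metres) from the sensor array.

    1. Median-filter each sensor over a sliding window to reject spikes.
    2. Subtract a linear drift term (an assumed slow sensor-drift model).
    3. Take the minimum across sensors as the range to the nearest obstacle.
    """
    readings = np.asarray(readings, dtype=float)
    T = readings.shape[0]
    filtered = np.empty_like(readings)
    for t in range(T):
        lo = max(0, t - window + 1)
        filtered[t] = np.median(readings[lo:t + 1], axis=0)  # spike rejection
    drift = drift_per_s * dt * np.arange(T)[:, None]  # illustrative drift model
    return (filtered - drift).min(axis=1)  # nearest obstacle per time step
```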
