Abstract

Nowadays, unmanned ground vehicles (UGVs) are widely used in many applications. UGVs are equipped with sensors such as multi-channel laser sensors, two-dimensional (2D) cameras, and Global Positioning System receivers combined with inertial measurement units (GPS–IMU). The multi-channel laser sensors and 2D cameras collect information regarding the environment surrounding the vehicle, while the GPS–IMU system determines the position, acceleration, and velocity of the vehicle. This paper proposes a fast and effective method for modeling nonground scenes using multiple types of sensor data captured through a remote-controlled robot. The multi-channel laser sensor returns a point cloud in each frame. We separated the point clouds into ground and nonground areas before modeling the three-dimensional (3D) scenes. The ground part was used to create a dynamic triangular mesh based on the height map and vehicle position. Modeling the nonground parts in dynamic environments that include moving objects is more challenging than modeling the ground parts. In the first step, we applied our object segmentation algorithm to divide the nonground points into separate objects. Next, an object tracking algorithm was implemented to detect dynamic objects. Subsequently, nonground objects other than large dynamic ones, such as cars, were separated into two groups: surface objects and non-surface objects. We employed colored particles to model the non-surface objects, and used two dynamic projection panels to generate 3D meshes for the surface and large dynamic objects. In addition, we applied two processes to optimize the modeling result. First, we removed any traces of the moving objects and collected the points belonging to the dynamic objects in previous frames; these points were then merged with the nonground points in the current frame. We also applied sliding-window and near-point projection techniques to fill the holes in the meshes. Finally, we applied texture mapping using 2D images captured by three cameras installed on the front of the robot. The experimental results demonstrate that our nonground modeling method can produce photorealistic, real-time 3D scenes around a remote-controlled robot.
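The abstract summarizes the pipeline but does not give the rule used to separate each frame into ground and nonground points. As a purely illustrative sketch of one common approach (a per-cell height test over a 2D grid on the xy-plane), the function split_ground below, the 0.5 m cell size, and the 0.2 m height threshold are assumptions for demonstration and are not taken from the paper.

    # Hypothetical grid-based ground/nonground split for one LiDAR frame.
    # Assumes `points` is an (N, 3) NumPy array of x, y, z coordinates in
    # meters; cell size and threshold are illustrative, not the paper's.
    import numpy as np

    def split_ground(points, cell=0.5, height_thresh=0.2):
        """Return boolean (ground, nonground) masks for one frame."""
        # Bin every point into a cell of a 2D grid on the xy-plane.
        ij = np.floor(points[:, :2] / cell).astype(np.int64)
        # Group points sharing a cell: cell_ids[k] is point k's cell index.
        _, cell_ids = np.unique(ij, axis=0, return_inverse=True)
        # The lowest z in each cell approximates the local ground height.
        floor_z = np.full(cell_ids.max() + 1, np.inf)
        np.minimum.at(floor_z, cell_ids, points[:, 2])
        # Points close to their cell's floor are labeled ground.
        ground = points[:, 2] - floor_z[cell_ids] < height_thresh
        return ground, ~ground

    # Usage on a synthetic frame: a flat plane plus a box-shaped obstacle.
    rng = np.random.default_rng(0)
    plane = np.column_stack([rng.uniform(-10, 10, size=(2000, 2)),
                             np.zeros((2000, 1))])
    box = np.column_stack([rng.uniform(1, 2, size=(200, 2)),
                           rng.uniform(0.5, 1.5, size=(200, 1))])
    ground, nonground = split_ground(np.vstack([plane, box]))
    print(ground.sum(), nonground.sum())  # plane points land in `ground`

The per-cell minimum is a cheap stand-in for the height map mentioned in the abstract; the nonground mask it produces would feed the segmentation and tracking steps described there.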



Introduction

Many multimedia applications using multi-sensors have been proposed. These applications are widely used in many areas, such as artificial intelligence in cloud environments [1,2], the Internet of Things (IoT) [3], virtual reality [4], and unmanned ground vehicles (UGVs) [5,6,7,8,9,10,11,12]. UGVs are divided into two types: autonomous and remote-controlled. In both cases, sensors are installed to collect information regarding the environment surrounding the vehicle. For a remote-controlled robot [9,10,11,12], we usually employ multi-channel laser sensors, two-dimensional (2D) cameras, and Global Positioning System receivers combined with inertial measurement units (GPS–IMU).


