Abstract

Consistent 3D imaging of a robot's surroundings is extremely useful for navigation, and considerable research effort has been devoted to this challenge. In principle, three main methods are used to acquire 3D information with vision-based systems: structure-from-motion (SfM) and stereo vision (SV) systems, laser range scanners (LRSs), and time-of-flight (TOF) cameras.

SfM and SV systems usually rely on establishing correspondences between two images taken simultaneously (Faugeras, 1993) or taken by one camera at different times and places (Oliensis, 2000). Stereo cameras impose physical restrictions on the robot because the two cameras must be separated by a baseline. Furthermore, stereo cameras depend on matching texture between the two camera images for range estimation, which produces a rather sparse and unevenly distributed data set. Because of this correspondence (allocation) problem, dynamic tracking of objects is not an easy task (Hussmann & Liepert, 2009). SfM techniques must deal with correspondence as well, and additionally with uncertainty about the position from which each image was taken and with dynamic changes in the scene between the two exposures.

LRSs deliver a single scanning line of accurate distance measurements and are often used for navigation tasks (Nuechter et al., 2003). They measure distances on a coarse grid across the sensor's field of view, so they also provide sparse data sets. The major disadvantages of LRS systems are their mechanical scanning components and the fact that they do not deliver 3D range data in a single capture; in dynamic scenes the range data must be corrected for motion that occurs during the scan. LRSs also deliver no intensity or colour information about the objects. Some researchers have therefore mounted both an LRS and a camera on the same robot and integrated the data to obtain both image and range information (Ho & Jarvis, 2008).

TOF cameras (Blanc et al., 2004; Schwarte et al., 1997) combine the advantages of active range sensors and camera-based approaches: they provide, in real time, a 2D intensity image together with measured (rather than estimated) distance values for every pixel. No data integration is needed, since range and intensity are measured at each pixel. Unlike SV systems, TOF cameras can deal with prominent parts of rooms, such as walls, floors, and ceilings, even when they are untextured. In addition to the 3D point cloud, contour and flow detection in the image plane yields motion information that can be used for applications such as car or person tracking (Hussmann et al., 2008). Unlike an LRS, a TOF camera captures all range data at the same instant, so there is no time lag between different object sample points. In conclusion, TOF cameras combine the strengths of the competing approaches while avoiding their main drawbacks, which makes them well suited to robot navigation.
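As a brief illustration of the depth-from-disparity relationship that underlies the SV systems discussed above, the following sketch converts a disparity map to metric depth via Z = f · B / d, where f is the focal length in pixels, B the camera baseline, and d the disparity. The focal length, baseline, and sample disparities are hypothetical values chosen only for the example; the zero-disparity entries stand in for the unmatched pixels that make stereo data sparse.

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float = 700.0,      # assumed focal length [px]
                         baseline_m: float = 0.12) -> np.ndarray:  # assumed baseline [m]
    """Convert a disparity map to metric depth: Z = f * B / d."""
    depth = np.full_like(disparity_px, np.inf, dtype=float)
    valid = disparity_px > 0          # pixels where texture matching failed stay at inf
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Sparse, unevenly distributed matches, as noted above.
disparities = np.array([[0.0, 42.0],
                        [7.0,  0.0]])
print(depth_from_disparity(disparities))   # matched pixels -> 2.0 m and 12.0 m
```

The inverse relationship between disparity and depth also explains why stereo range accuracy degrades quadratically with distance, which is one motivation for the active sensors discussed next.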
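Similarly, for a continuous-wave TOF camera of the kind described above, the per-pixel distance follows from the measured phase shift between the emitted and received modulated light: d = c · Δφ / (4π · f_mod). The sketch below is a minimal illustration of this standard formula; the 20 MHz modulation frequency and the sample phase values are assumptions, not parameters of any particular camera.

```python
import numpy as np

C = 299_792_458.0   # speed of light [m/s]

def tof_distance(phase_rad: np.ndarray,
                 f_mod_hz: float = 20e6) -> np.ndarray:  # assumed modulation frequency
    """Per-pixel distance from the measured phase shift: d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * np.pi * f_mod_hz)

phases = np.array([0.5, 1.0, np.pi])     # radians, one illustrative value per pixel
print(tof_distance(phases))              # ~[0.60, 1.19, 3.75] m
print("unambiguous range:", C / (2 * 20e6), "m")   # ~7.5 m before phase wrap-around
```

Because every pixel reports its own phase measurement from the same exposure, the entire range image is captured at one instant, which is the property contrasted with LRS scanning above.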
