Abstract

In recent decades, consumer-grade RGB-D (red, green, blue-depth) cameras have gained popularity for several applications in agricultural environments. Notably, these cameras can be used for spatial mapping, which can serve robot localization and navigation. Mapping the environment for targeted robotic applications in agricultural fields is a particularly challenging task, owing to the high spatial and temporal variability, possible unfavorable light conditions, and the unpredictable nature of these environments. The aim of the present study was to investigate the use of RGB-D cameras and an unmanned ground vehicle (UGV) for autonomously mapping the environment of commercial orchards, as well as providing information about tree height and canopy volume. The results from the ground-based mapping system were compared with three-dimensional (3D) orthomosaics acquired by an unmanned aerial vehicle (UAV). Overall, both sensing methods led to similar height measurements, while tree volume was calculated more accurately from the RGB-D cameras, as the 3D point cloud captured by the ground system was far more detailed. Finally, fusing the two datasets provided the most precise representation of the trees.
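
The height and volume metrics mentioned above can be sketched in a few lines. The following is a minimal illustration, not the study's actual pipeline: it assumes a single tree's point cloud is already segmented and stored as an (N, 3) NumPy array in metres with z pointing up, and the function names (`tree_height`, `canopy_volume`) are invented for this example. Volume is approximated here by voxel occupancy counting, one common approach among several.

```python
import numpy as np

def tree_height(points):
    """Tree height as the vertical extent of the point cloud (metres).

    points: (N, 3) array of x, y, z coordinates, z pointing up.
    """
    z = points[:, 2]
    return float(z.max() - z.min())

def canopy_volume(points, voxel_size=0.05):
    """Approximate canopy volume by counting occupied voxels.

    Each point is snapped to a grid of edge `voxel_size` (metres); the
    volume is the number of unique occupied voxels times the voxel volume.
    """
    voxels = np.unique(np.floor(points / voxel_size).astype(np.int64), axis=0)
    return voxels.shape[0] * voxel_size ** 3
```

A denser ground-based point cloud fills more of the fine voxels inside the canopy, which is consistent with the study's finding that the RGB-D system estimated volume more accurately than the coarser UAV orthomosaic.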

Highlights

  • The georeferenced point cloud was intended to be compared with a 3D point cloud produced from an unmanned aerial vehicle (UAV)

  • The georeferenced point cloud was imported into QGIS (Quantum Geographic Information System) to check the converted point cloud against a 2D georeferenced raster image of the same area

  • Reprojecting the point cloud into a real-world coordinate system makes it usable in various future agricultural simulation applications and robot tasks, such as object detection, spraying, or harvesting [50]
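
Reprojection into a real-world frame, as described in the highlights, reduces to a rigid transform per point. The sketch below assumes the sensor-to-world rotation `R` and translation `t` (e.g., the UGV's georeferenced pose) are already known from localization; the function name is illustrative and not from the paper.

```python
import numpy as np

def georeference(points, R, t):
    """Reproject sensor-frame points into a real-world (e.g. UTM) frame.

    points: (N, 3) array in the sensor/local frame.
    R: (3, 3) rotation matrix from the local frame to the world frame.
    t: (3,) world coordinates of the local frame's origin.
    Applies p_world = R @ p_local + t for every point.
    """
    return points @ R.T + t
```

Once expressed in world coordinates, the cloud can be overlaid directly on a georeferenced raster (e.g., in QGIS) or on the UAV-derived point cloud for comparison.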


Introduction

Computer vision could be described as the technology that combines image processing with computational algorithms to obtain certain information from images [1,2,3], or as vision systems utilizing laser scanners [4]. Focusing on the former case, many studies have used RGB cameras to locate and distinguish targets (e.g., fruits) from other objects by exploiting, for example, their shape, color, and texture, usually combining the images with machine learning [3,5,6]. However, RGB cameras can only capture two-dimensional (2D) information about the scene, and they are susceptible to variable light conditions and occlusions [7]. These challenges have been addressed by acquiring depth measurements of higher resolution, which have the potential to provide more detailed information about the scene. Every pixel of such an image is composed of color values and the distance between a viewpoint and the corresponding point in the scene (RGB-D values).
