Abstract

Based on computer vision technology, this paper proposes a method for identifying and locating crops so that they can be reliably grasped during automatic crop picking. The method innovatively combines the YOLOv3 algorithm, implemented under the DarkNet framework, with a point cloud image coordinate matching method. First, RGB (red, green, blue) images and depth images are acquired with a Kinect v2 depth camera. Second, the YOLOv3 algorithm identifies the various types of target crops in the RGB images and determines their feature points. Finally, the 3D coordinates of the feature points are read from the point cloud images. Compared with other methods, this approach achieves high recognition accuracy and small positioning error, which lays a good foundation for the subsequent harvesting of crops using a robotic arm. In summary, the proposed method can be considered effective.
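The final step above, recovering a 3D coordinate for a detected feature point, amounts to back-projecting a depth pixel through the camera intrinsics. A minimal sketch of that mapping is shown below; the focal lengths and principal point are placeholder values standing in for the calibration results described in the paper, not the authors' actual parameters.

```python
import numpy as np

# Assumed pinhole intrinsics for a 512x424 Kinect v2 depth frame.
# Real values would come from the RGB/IR calibration step in the paper.
FX, FY = 365.0, 365.0   # focal lengths in pixels (assumption)
CX, CY = 256.0, 212.0   # principal point (assumption)

def deproject(u, v, depth_m, fx=FX, fy=FY, cx=CX, cy=CY):
    """Map a pixel (u, v) with measured depth (metres) to a 3D point
    in camera coordinates, i.e. the value stored at that pixel in the
    organized point cloud."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])
```

In an organized point cloud this computation is precomputed for every pixel, so locating a feature point reduces to an array lookup at the detected (u, v).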

Highlights

  • In recent years, with the rapid development of artificial intelligence technology, computer vision, as an important branch of artificial intelligence, has gradually become a hot topic for researchers all over the world.

  • This paper processes the RGB images and depth images acquired by the Kinect v2 depth camera, identifies and locates the target crops in the acquired images, and stores and publishes their three-dimensional coordinates.

  • We innovatively combine the YOLOv3 algorithm based on the DarkNet framework with a point cloud image coordinate extraction method.


Introduction

With the rapid development of artificial intelligence technology, computer vision, as an important branch of artificial intelligence, has gradually become a hot topic for researchers all over the world. The integration of computer vision into robotics has greatly improved recognition, positioning, tracking and grasping, reducing cost and improving practicality. In this work, after calibration is completed, the Kinect v2 depth camera acquires accurate RGB images and depth images, and the YOLOv3 algorithm identifies the target crops in those images. The depth camera captures the crops' information, which is sent to a computer for further processing and analysis. This test follows this approach to identify and locate the target crops, laying the foundation for subsequent control of the robotic arm to perform the grasping operation.
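The detection step produces, for each target crop, a bounding box from which a feature point (here taken as the box centre, an assumption about the paper's feature-point definition) can be derived. The sketch below converts raw YOLOv3 output rows, which encode a normalized box centre, size, objectness and per-class scores, into pixel feature points; the thresholds and image size are illustrative, not the authors' settings.

```python
import numpy as np

def yolo_feature_points(detections, img_w, img_h, conf_thresh=0.5):
    """Turn raw YOLOv3 output rows
    (cx, cy, w, h, objectness, class_score_0, class_score_1, ...)
    with normalized coordinates into pixel feature points
    (u, v, class_id, confidence) for sufficiently confident detections."""
    points = []
    for det in detections:
        scores = np.asarray(det[5:])
        cls = int(np.argmax(scores))
        conf = float(det[4] * scores[cls])
        if conf >= conf_thresh:
            u = int(det[0] * img_w)   # box centre x in pixels
            v = int(det[1] * img_h)   # box centre y in pixels
            points.append((u, v, cls, conf))
    return points
```

Each (u, v) returned here would then be looked up in the registered depth image or point cloud to obtain the 3D coordinate of that crop's feature point.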

Kinect v2 Depth Camera
Calibration Theory of Kinect v2
RGB and IR Camera Calibration
Crop Identification and Feature Point Location
YOLOv3 Algorithm under the DarkNet Framework
Recognition Effect
Feature Point Definition
Feature Point Extraction Method
Experimental Process and Analysis of Results
Method of This Paper
Conclusions