Abstract
Using machine vision to identify and sort scattered regular targets is an urgent problem in automated production lines. This study proposed a three-dimensional (3D) recognition method combining monocular vision and machine learning algorithms. Based on the color characteristics of the targets, the original color image was converted into YCbCr space, and the 2D Otsu algorithm was used to perform gray-level segmentation on the Cb channel. Haar-feature training was then carried out. A comparison between the Haar-feature classifier and the Hough-transform method showed that the recognition time of the Haar-feature AdaBoost trainer reached 31.00 ms, while its false recognition rate was 3.91%. The strong classifier was formed by a weighted combination of weak classifiers, and a Hough contour transform was used to correct the normal vector between the target-plane coordinate system and the camera coordinate system. The monocular vision system ensured that the camera's field of view was not obstructed while the dots were being struck. The angles between the targets and the horizontal plane were measured and calculated from the coordinate points of the identified plane features. The test results for the Otsu and AdaBoost trainer showed that the prediction and training sets have an error of no more than 0.25 mm, and the correct recognition rate can reach 95%. This shows that the Otsu and Haar-feature AdaBoost algorithm is feasible within a certain error range and meets the engineering requirements for solving the poses of regular three-dimensional targets on automated lines.
Keywords: Otsu, Haar-feature, AdaBoost, 3D position, target pose, monocular vision, error analysis
DOI: 10.25165/j.ijabe.20201305.5013
Citation: Li Y H, Wang H J, Zhou W L, Xue Z H. Monocular vision and calculation of regular three-dimensional target pose based on Otsu and Haar-feature AdaBoost classifier. Int J Agric & Biol Eng, 2020; 13(5): 171–180.
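As an illustration of the segmentation and detection pipeline the abstract describes, the minimal sketch below uses Python with OpenCV (an assumption; the paper does not specify an implementation). It converts an image to YCrCb, applies Otsu thresholding to the Cb channel (OpenCV's built-in 1D Otsu stands in for the paper's 2D Otsu), and runs a Haar-feature cascade; the file names "target.png" and "target_cascade.xml" are hypothetical placeholders for an input frame and the AdaBoost-trained classifier.

```python
import cv2

# -- Segmentation stage: Otsu thresholding on the Cb channel --
img = cv2.imread("target.png")                   # hypothetical input; OpenCV loads BGR
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)   # OpenCV orders channels Y, Cr, Cb
cb = ycrcb[:, :, 2]                              # Cb channel

# OpenCV's 1D Otsu threshold; the paper uses a 2D Otsu variant,
# so this is only a simplified stand-in.
_, mask = cv2.threshold(cb, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# -- Detection stage: Haar-feature AdaBoost cascade --
# "target_cascade.xml" is a placeholder for a cascade trained on
# target samples (e.g., with opencv_traincascade).
detector = cv2.CascadeClassifier("target_cascade.xml")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

The segmentation mask and the cascade detections would then feed the pose-estimation step described later in the paper.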
Highlights
Visual algorithm technology includes object shape recognition, velocity sensing, distance recognition and image pattern recognition technology[1,2].
The Otsu method is a classical threshold segmentation algorithm proposed by Otsu in 1979[8,9].
This paper proposes to use Otsu to segment pipeline targets for its high segmentation accuracy and strong adaptability[10,11].
Summary
Visual algorithm technology includes object shape recognition, velocity sensing, distance recognition and image pattern recognition technology[1,2]. Among these applications, 3D recognition is the most prominent, especially on actual industrial production lines. With the development of industrial automation and the detection of targets on production and logistics transmission lines, current machine vision recognition technology often falls short[6] because of the fast running speed, the many types of targets and the large changes in distance, especially under complex background environments, diverse target colors and the complicated segmentation of small targets on the line. This paper proposes to use Otsu to segment pipeline targets because of its high segmentation accuracy and strong adaptability[10,11]. The improved Otsu threshold-search mode can quickly calculate the target segmentation threshold and theoretically improves the iterative efficiency. In the actual working scene, after the work-piece is segmented by the monocular vision system, it is necessary to solve the pose of the target plane in order to determine its normal vector relative to the camera, as sketched below.
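A rough sketch of that last step, recovering the pose and normal vector of the target plane relative to the camera, is given below, assuming Python/OpenCV and a pinhole camera model. The planar model points, detected image points and camera intrinsics are placeholders rather than values from the paper, and cv2.solvePnP is used as a generic stand-in for the paper's Hough-contour based normal-vector correction.

```python
import cv2
import numpy as np

# Hypothetical planar model points (mm) of the identified target face,
# defined in the target's own coordinate system (z = 0 on the plane).
obj_pts = np.array([[0, 0, 0], [40, 0, 0], [40, 40, 0], [0, 40, 0]], dtype=np.float32)

# Corresponding image points (px) returned by the detector; placeholders.
img_pts = np.array([[312, 255], [398, 251], [402, 340], [315, 344]], dtype=np.float32)

# Assumed camera intrinsics from a prior monocular calibration.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)

# The plane's normal in the camera frame is the rotated z-axis of the target frame.
normal_cam = R @ np.array([0.0, 0.0, 1.0])

# Angle between the target plane and the horizontal plane, assuming the camera's
# -y axis points up (the true gravity direction would come from the rig setup).
up_cam = np.array([0.0, -1.0, 0.0])
tilt = np.degrees(np.arccos(np.clip(abs(normal_cam @ up_cam), 0.0, 1.0)))
print("plane normal (camera frame):", normal_cam, " tilt angle (deg):", tilt)
```

Given the segmentation mask and detection boxes from the earlier stage, the corner points of the detected target face would be supplied as img_pts, and tvec together with the tilt angle gives the 3D position and orientation of the target.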