Abstract

The agricultural industry could greatly benefit from an intelligent system capable of supporting field workers to increase production. Such a system would need to monitor human workers, their current actions, their intentions, and possible future actions, which are the focus of this work. Herein, we propose and validate a methodology to recognize human actions during the avocado harvesting process on a Chilean farm, based on combined object-pose semantic information extracted from RGB still images. We use Faster R-CNN (Region-based Convolutional Neural Network) with an Inception V2 backbone for object detection, recognizing 17 categories that include, among others, field workers, tools, crops, and vehicles. We then use OpenPose, a convolutional 2D pose estimation method, to detect 18 human skeleton joints. Both the object and the pose features are processed, normalized, and combined into a single feature vector. We test four classifiers (support vector machine, decision trees, k-nearest neighbours, and bagged trees) on the combined object-pose feature vectors to evaluate action classification performance. We also evaluate the four classifiers after applying principal component analysis to reduce dimensionality. Accuracy and inference time are analyzed for all classifiers on 10 action categories related to the avocado harvesting process. The results show that human actions can be detected during harvesting, with average accuracy (across all action categories) ranging from 57% to 99%, depending on the classifier used. Such action recognition can support intelligent systems, such as robots, that interact with field workers to increase productivity.
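The pipeline described above (object-detection features plus pose keypoints fused into one vector, optionally reduced with PCA, then classified) can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic data; the feature layout (17 per-category object scores, 18 joints as (x, y) coordinates), the normalization, and the PCA dimensionality are assumptions, not the paper's exact preprocessing.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's features (illustrative only):
# - 17 object-detection scores per image, one per object category
# - 18 skeleton joints as (x, y) coordinates from a 2D pose estimator
n_samples, n_actions = 200, 10
object_feats = rng.random((n_samples, 17))
pose_feats = rng.random((n_samples, 18 * 2))

# Combine object and pose features into a single vector per image
X = np.hstack([object_feats, pose_feats])          # shape (200, 53)
y = rng.integers(0, n_actions, size=n_samples)     # one of 10 action labels

# Classifier with PCA-based dimensionality reduction; the SVM could be
# swapped for decision trees, k-NN, or bagged trees as in the paper
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X, y)
preds = clf.predict(X)
print(X.shape, preds.shape)
```

In practice the object features would come from the Faster R-CNN detector and the joint coordinates from OpenPose, with real action labels for training.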
