Abstract

In recent years, Convolutional Neural Networks (CNNs) have become an attractive method for recognizing and localizing plant species in unstructured agricultural environments. However, developed systems suffer from unoptimized combinations of CNN model, computer hardware, camera configuration, and travel velocity, leading to missed detections. A missed detection occurs when the camera fails to capture a plant because of slow inferencing speed or fast travel velocity. Furthermore, modularity has received little attention in Machine Vision System (MVS) development, even though a modular MVS can reduce development effort by enabling scalability and reusability. This study proposes a derived parameter called the overlapping rate (ro), the ratio of the product of the camera field of view (S) and inferencing speed (fps) to the travel velocity (v), to theoretically predict the plant detection rate (rd) of an MVS and aid in developing a CNN-based vision module. Using the performance of existing MVSs, ro was calculated for combinations of inferencing speeds (2.4 to 22 fps) and travel velocities (0.1 to 2.5 m/s) at a 0.5 m field of view. The results showed that missed detections occurred when ro was less than 1. Comparing the theoretical detection rate (rd,th) with the simulated detection rate (rd,sim) showed that rd,th had a 20% margin of error in predicting the plant detection rate at very short travel distances (<1 m), but no margin of error when the travel distance was sufficient to complete a detection pattern cycle (≥10 m). The simulation results also showed that increasing S or using multiple vision modules reduced missed detections by increasing the maximum allowable travel velocity (vmax); the number of vision modules needed was equal to the inverse of ro, rounded up. Finally, a vision module that utilized SSD MobileNetV1 with an average effective inferencing speed of 16 fps was simulated, developed, and tested. Results showed that rd,th and rd,sim predicted the actual detection rate (ractual) of the vision module with no margin of error at the tested travel velocities (0.1 to 0.3 m/s). Thus, the results of this study showed that ro can be used to predict rd and optimize the design of a CNN-based vision-equipped robot for plant detection in agricultural field operations, with no margin of error at sufficient travel distance.
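The relations stated above (ro = S·fps/v, missed detections when ro < 1, module count equal to the inverse of ro rounded up) can be sketched numerically. A minimal Python illustration, with function names and the vmax = S·fps rearrangement introduced here for demonstration only; the full rd,th derivation from the detection pattern cycle is in the paper and is not reproduced:

```python
import math

def overlapping_rate(S, fps, v):
    """Overlapping rate ro = S * fps / v (dimensionless).

    S   : camera field of view along the travel direction (m)
    fps : effective inferencing speed (frames/s)
    v   : travel velocity (m/s)
    """
    return S * fps / v

def max_travel_velocity(S, fps):
    """Largest v that keeps ro >= 1 (no missed detections): vmax = S * fps."""
    return S * fps

def modules_needed(ro):
    """Vision modules needed to avoid misses: round up the inverse of ro."""
    return math.ceil(1.0 / ro)

# Example: 0.5 m field of view, 16 fps module, 0.3 m/s travel velocity
ro = overlapping_rate(0.5, 16, 0.3)   # well above 1, so no missed detections

# Worst case from the study's tested range: 2.4 fps at 2.5 m/s gives
# ro = 0.48 < 1, so multiple modules would be required.
n = modules_needed(overlapping_rate(0.5, 2.4, 2.5))
```

For instance, at the slowest reported inferencing speed (2.4 fps) and fastest travel velocity (2.5 m/s), ro falls below 1 and three modules (ceil(1/0.48)) would be needed to avoid misses.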

Highlights

  • rd,th and rd,sim were equal; these results proved the validity of the theoretical concepts and simulation methods presented in this study. Hence, rd,th and rd,sim can be used to theoretically determine the detection rate of a vision system in capturing plant images as a function of v and fps with known S

  • Table 7 summarizes the precision and recall of the trained CNN model in detecting potted plants at different relative travel velocities of the conveyor. Results showed that the combination of an optimized SSD MobileNetV2 in TensorRT running in a Jetson Nano

  • This study presented a practical approach to quantify rd and aid in the development of a Convolutional Neural Network (CNN)-based vision module through the introduction of the dimensionless parameter ro


Summary

Introduction

The increasing cost and decreasing availability of agricultural labor [1,2] and the need for sustainable farming methods [3,4,5] led to the development of robots for agricultural field operations. A study that surveyed CNN-based weed detection and plant species classification reported precisions of 86–97% and 48–99%, respectively, but data on inferencing speeds were not reported [19]. In a study by Olsen et al. (2019) [22] on detecting different species of weeds, the real-time performance of ResNet-50 on an NVIDIA Jetson TX2 was only 5.5 fps at 95.1% precision; optimizing their TensorFlow model with TensorRT increased the inferencing speed to 18.7 fps. This brief review of developed systems showed that the inferencing speed (fps) and travel velocity (v) of a CNN-based MVS affect its detection rate (rd). This study proposes theoretical and simulation approaches for predicting combinations of fps, v, and camera configuration that prevent missed plant detections and aid in developing a modular CNN-based MVS: a reusable and scalable CNN-based vision module for plant detection based on the Robot Operating System (ROS) and the Jetson Nano platform.

Concept
Maximizing Travel Velocity
Increasing v max
Detection Algorithm
Experimental Design
Hardware and Software Development
Dataset Preparation and Training of the CNN Model
Testing and Simulation
Findings
Conclusions
