Abstract

The wide adoption of dicamba-tolerant (DT) soybean has led to numerous cases of off-target dicamba damage to non-DT soybean and other dicot crops. This study aimed to develop a method to differentiate soybean response to dicamba using unmanned-aerial-vehicle-based imagery and machine learning models. Soybean lines were visually classified into three classes of injury, i.e., tolerant, moderate, and susceptible to off-target dicamba. A quadcopter with a built-in RGB camera was used to collect images of field plots at a height of 20 m above ground level. Seven image features were extracted for each plot, including canopy coverage, contrast, entropy, green leaf index, hue, saturation, and triangular greenness index. Classification models based on artificial neural network (ANN) and random forest (RF) algorithms were developed to differentiate the three classes of response to dicamba. Significant differences among classes were observed for each feature, whereas no significant differences were observed across fields. The ANN and RF models distinguished tolerant and susceptible lines with overall accuracies of 0.74 and 0.75, respectively. The imagery-based classification models can be implemented in a breeding program to effectively differentiate phenotypic dicamba response and identify soybean lines with tolerance to off-target dicamba damage.
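The classification step described above can be illustrated with a minimal sketch. This is not the authors' code: the feature values below are synthetic placeholders, and only the overall shape (seven plot-level image features, three injury classes, a random forest classifier) follows the abstract. In practice the features would be computed from the UAV plot imagery.

```python
# Illustrative sketch (synthetic data, not the study's dataset): train a
# random forest to classify plots as tolerant / moderate / susceptible
# from seven plot-level image features, mirroring the pipeline described
# in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

FEATURES = ["canopy_coverage", "contrast", "entropy", "green_leaf_index",
            "hue", "saturation", "triangular_greenness_index"]
CLASSES = ["tolerant", "moderate", "susceptible"]

rng = np.random.default_rng(0)
n_per_class = 60
# Synthetic feature matrix: each injury class is shifted along the feature
# axes so the toy problem is learnable; real values would come from the
# RGB imagery collected at 20 m above ground level.
X = np.vstack([
    rng.normal(loc=shift, scale=1.0, size=(n_per_class, len(FEATURES)))
    for shift in (0.0, 1.5, 3.0)
])
y = np.repeat(CLASSES, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

An ANN variant of the same sketch would only swap the estimator (e.g., a small multilayer perceptron) while keeping the seven-feature input and three-class output unchanged.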
