Abstract

Precise yield predictions are useful for implementing precision agriculture technologies and improving crop management decisions. Convolutional neural networks (CNNs) have recently been used to predict crop yields in unmanned aerial vehicle (UAV)-based remote sensing studies, but weather data have not been considered in such models. The aim of this study was to explore the potential of multimodal deep learning to improve rice yield prediction accuracy using UAV multispectral images acquired at the heading stage along with weather data. The effects of CNN architecture, layer depth, and weather-data integration method on prediction accuracy were evaluated. Overall, the multimodal deep learning model integrating UAV-based multispectral imagery and weather data had the potential to produce more precise rice yield predictions. The best models were those trained with weekly weather data. A simple CNN feature extractor for the UAV-based multispectral image input might be sufficient to predict crop yields accurately. However, the spatial patterns of the predicted yield maps differed from model to model, even though the prediction accuracies were almost the same. The results indicated that not only prediction accuracy but also the robustness of within-field yield predictions should be assessed in further studies.
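
To illustrate the kind of multimodal architecture the abstract describes, the sketch below (PyTorch) combines a small CNN feature extractor for a multispectral image patch with a dense branch encoding weekly weather variables, concatenating the two feature vectors before a regression head that outputs a yield value. This is a minimal illustrative sketch, not the study's actual architecture; the number of spectral bands, patch size, weather dimensions, and layer widths are assumptions chosen for readability.

```python
# Minimal multimodal yield-regression sketch (assumed architecture, not the paper's).
import torch
import torch.nn as nn


class MultimodalYieldModel(nn.Module):
    def __init__(self, n_bands=5, n_weather_features=20 * 7):
        super().__init__()
        # Simple CNN feature extractor for the multispectral image patch.
        self.cnn = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Dense branch for flattened weekly weather variables.
        self.weather = nn.Sequential(
            nn.Linear(n_weather_features, 32),
            nn.ReLU(),
        )
        # Regression head on the concatenated image and weather features.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, image, weather):
        img_feat = self.cnn(image)        # (batch, 32)
        wx_feat = self.weather(weather)   # (batch, 32)
        fused = torch.cat([img_feat, wx_feat], dim=1)
        return self.head(fused)           # predicted yield, (batch, 1)


if __name__ == "__main__":
    model = MultimodalYieldModel()
    images = torch.randn(4, 5, 32, 32)    # 4 patches, 5 bands, 32x32 px (assumed)
    weather = torch.randn(4, 20 * 7)      # 20 weeks x 7 variables (assumed)
    print(model(images, weather).shape)   # torch.Size([4, 1])
```

Concatenating the two feature vectors before the regression head is only one possible integration method; the study compares several such strategies, so this fusion point should be treated as a placeholder rather than the reported best option.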
