Abstract

Knowledge of the number and distribution of oil palm trees during the crop cycle is vital for sustainable management and yield prediction. The accuracy of conventional image-processing approaches is limited by hand-crafted feature extraction, and overfitting occurs when the dataset is insufficient. We propose a modification of the Faster Region-based Convolutional Neural Network (FRCNN) for palm tree detection that reduces overfitting and improves detection accuracy. The enhanced FRCNN (EFRCNN) uses a feature concatenation method to improve the detection of objects of multiple sizes within the same image. Transfer learning based on a ResNet50 model is used to extract features from the input image. High-resolution drone images of oil palm trees form the dataset, covering mature, young, and mixed oil palm tree regions. We train and test the EFRCNN, the FRCNN, a CNN recently used for oil palm image detection, and two standard methods, namely the support vector machine (SVM) and template matching (TM). The results reveal an overall accuracy of ≥96.8% for the EFRCNN on the three test sets. This accuracy is higher than that of the CNN and FRCNN and substantially higher than that of SVM and TM. For large-scale plantations, the accuracy improvement is significant. This research provides a method for automatically counting oil palm trees in large-scale plantations.
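
The two ingredients named in the abstract, transfer learning from a ResNet50 backbone and channel-wise feature concatenation for multi-scale detection, can be sketched as follows. This is a minimal illustration using PyTorch/torchvision, not the authors' implementation: the two-class head and the `concat_features` helper are assumptions for the oil palm setting, and the paper's exact concatenation scheme may differ.

```python
import torch
import torch.nn.functional as F
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Transfer learning: start from a Faster R-CNN with a ResNet50 backbone
# pretrained on COCO (older torchvision versions use pretrained=True
# instead of the weights argument).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Re-head the detector for two classes: background + oil palm tree.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Hypothetical illustration of feature concatenation (not the paper's exact
# design): upsample a deeper, coarser feature map to the spatial size of a
# shallower one and concatenate along the channel axis, so the detector sees
# both fine detail (young palms) and wider context (mature crowns).
def concat_features(shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
    deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
    return torch.cat([shallow, deep_up], dim=1)
```

From here, the re-headed model would be fine-tuned on annotated drone imagery in the usual torchvision detection training loop; the pretrained backbone is what mitigates overfitting on a small dataset.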
