Abstract
Vineyard classification is an important process within viticulture-related decision-support systems. Indeed, it improves grapevine vegetation detection, enabling both the assessment of vineyard vegetative properties and the optimization of in-field management tasks. Aerial data acquired by sensors coupled to unmanned aerial vehicles (UAVs) can be used for this purpose. Flight campaigns were conducted to acquire both RGB and multispectral data from three vineyards located in Portugal and in Italy. Red, green, blue and near-infrared orthorectified mosaics resulted from the photogrammetric processing of the acquired data. They were then used to calculate RGB and multispectral vegetation indices, as well as a crop surface model (CSM). Three different supervised machine learning (ML) approaches—support vector machine (SVM), random forest (RF) and artificial neural network (ANN)—were trained to classify elements present within each vineyard into one of four classes: grapevine, shadow, soil and other vegetation. The trained models were then used to classify vineyard objects, generated from an object-based image analysis (OBIA) approach, into the four classes. Classification outcomes were compared with an automatic point-cloud classification approach and threshold-based approaches. Results showed that ANN provided a better overall classification performance, regardless of the type of features used. Features based on RGB data showed better performance than those based only on multispectral data. However, a higher performance was achieved when using features from both sensors. The methods presented in this study, which rely on data acquired from different sensors, are suitable for the vineyard classification process. Furthermore, they may also be applied in other land use classification scenarios.
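The following is a minimal, illustrative sketch (not the authors' implementation) of the classifier comparison described above, assuming scikit-learn, synthetic placeholder features in place of the real per-object vegetation index and CSM statistics, and arbitrary hyperparameters.

```python
# Hedged sketch: comparing SVM, RF and ANN classifiers on per-object features
# (e.g. RGB/multispectral vegetation indices and CSM height statistics) for the
# four classes used in the study. All data and parameters here are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

CLASSES = ["grapevine", "shadow", "soil", "other vegetation"]

# Placeholder feature matrix: one row per OBIA object; in a real workflow the
# columns would hold vegetation indices and CSM-derived height features.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = rng.integers(0, len(CLASSES), size=400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "ANN": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
    ),
}

# Train each model and report overall accuracy on the held-out objects.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: overall accuracy = {acc:.3f}")
```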