Abstract

We describe the development of tools that exploit the enormous resource of street-level imagery in Google Street View to characterize food cultivation practices along roadside transects at very high spatial resolution, as a potential complement to traditional remote sensing approaches. We report on two software tools for crop identification using a deep convolutional neural network (CNN) applied to Google Street View imagery. The first, a multi-class classifier, distinguishes seven regionally common cultivated plant species, as well as uncultivated vegetation, built environment, and water along the roads. The second, a prototype specialist detector, recognizes the presence of a single plant species, in our case banana. These two classification tools were tested along roadside transects in two areas of Thailand, a country with good Google Street View coverage. On the entire test set, the overall accuracy of the multi-class classifier was 83.3%. For several classes (banana, built, cassava, maize, rice, and sugarcane), the producer's accuracy was over 90%, meaning that the classifier rarely made omission errors. This performance on roadside transects is comparable with that of some remote-sensing classifiers, yet ours does not require any additional site visits for ground-truthing. Moreover, the overall accuracy of the classifier on the 40% of images about which it is most confident is excellent: 99.0%. For the prototype specialist detector, the area under the ROC curve was 0.9905, indicating excellent performance in detecting the presence of banana plants. While initially tested over the road network in a small area, this technique could readily be deployed on a regional or even national scale to supplement remote sensing data and yield a fine-grained analysis of food cultivation activities along roadside transects.
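To make the reported metrics concrete, the sketch below shows one way the evaluation could be computed, assuming per-image softmax probabilities from the CNN and integer ground-truth labels are already available. The function name, variable names, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.metrics import accuracy_score, recall_score

    def evaluate_classifier(probs, y_true):
        """probs: (n_images, n_classes) softmax outputs; y_true: integer class labels."""
        y_pred = probs.argmax(axis=1)

        # Overall accuracy on the whole test set (83.3% in the abstract).
        overall = accuracy_score(y_true, y_pred)

        # Producer's accuracy = per-class recall: the fraction of each true class
        # the classifier recovers, i.e. 1 minus the omission-error rate.
        producers = recall_score(y_true, y_pred, average=None)

        # Accuracy on the 40% of images the classifier is most sure about,
        # using the maximum softmax probability as the confidence score.
        confidence = probs.max(axis=1)
        most_confident = confidence >= np.quantile(confidence, 0.60)
        top40_accuracy = accuracy_score(y_true[most_confident], y_pred[most_confident])

        return overall, producers, top40_accuracy

    # For the binary banana detector, the ROC AUC (0.9905 reported above) could be
    # obtained analogously with sklearn.metrics.roc_auc_score(is_banana, banana_scores).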
