Abstract

Fully Convolutional Networks (FCN) have shown better performance than classifiers such as Random Forest (RF), Support Vector Machine (SVM), and patch-based Deep Convolutional Neural Network (DCNN) for object-based classification using orthoimagery alone in previous studies. However, to further improve deep learning performance, multi-view data should be considered for training data enrichment, which has not yet been investigated for FCN. The present study developed a novel OBIA classification approach using FCN and multi-view data extracted from a small Unmanned Aerial System (UAS) for landcover mapping. Specifically, this study proposed three methods to automatically generate multi-view training samples from orthoimage training datasets for multi-view object-based classification with FCN, and compared their performances with each other and with RF, SVM, and DCNN classifiers. The first method does not consider object surroundings, while the other two utilize object context information. We demonstrated that all three versions of FCN multi-view object-based classification outperformed their counterparts using orthoimage data only. Furthermore, the results showed that when multi-view training samples were prepared with consideration of object surroundings, FCN trained with these samples achieved much higher accuracy than FCN trained without context information. The two methods utilizing object surrounding information achieved similar accuracies, although their samples were prepared in different ways. Comparing FCN with RF, SVM, and DCNN shows that FCN generally produced better accuracy than the other classifiers, regardless of whether orthoimage or multi-view data were used.

Highlights

  • Small Unmanned Aircraft System (UAS) has become a popular remote sensing platform for providing very high-resolution images of small or medium size sites in the past decade, due to its advantages of safety, flexibility, and low cost over other airborne or space-borne platforms

  • This study proposed methods to utilize multi-view data for Object-based Image Analysis (OBIA) classification with the Fully Convolutional Networks (FCN) as the classifier to investigate whether multi-view data extraction and use can improve FCN performance

  • The study compared the performance of FCN with other classifiers, such as the Support Vector Machine (SVM), Random Forest (RF), and Deep Convolutional Neural Network (DCNN) using orthoimage and multi-view data


Introduction

Small Unmanned Aircraft System (UAS) has become a popular remote sensing platform for providing very high-resolution images of small or medium size sites in the past decade, due to its advantages of safety, flexibility, and low cost over other airborne or space-borne platforms. Object-based Image Analysis (OBIA) has been routinely employed to process UAS images for landcover mapping, given its capability of generating more appealing maps and comparable (if not higher) classification accuracy compared with pixel-based methods [3,4,5,6,7,8]. Analyzing UAS images with traditional OBIA normally starts with a bundle adjustment procedure to produce an orthoimage from all the UAS images. An image segmentation algorithm then segments the orthoimage into groups of homogeneous pixels, forming numerous meaningful objects. Feature extraction and selection, which must be conducted during traditional OBIA procedures, are challenging tasks and can limit classification performance.
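The traditional OBIA workflow described above (segment the orthoimage into objects, extract per-object features, classify the objects) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a synthetic image stands in for a UAS orthoimage, scikit-image's SLIC stands in for the segmentation step, the mean band value per object is a deliberately simple hand-crafted feature, and the training labels are hypothetical.

```python
# Minimal OBIA sketch: segmentation -> per-object features -> classification.
# Assumptions: synthetic image in place of a real orthoimage; SLIC as the
# segmentation algorithm; random labels in place of real training data.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))  # stand-in for a UAS orthoimage (H, W, bands)

# 1. Segmentation: group pixels into homogeneous objects (superpixels).
segments = slic(image, n_segments=50, compactness=10, start_label=0)
n_objects = segments.max() + 1

# 2. Feature extraction: mean band value per object (a simple spectral feature;
#    real OBIA pipelines also use texture, shape, and context features).
features = np.array([image[segments == i].mean(axis=0) for i in range(n_objects)])

# 3. Classification: train a Random Forest on (hypothetical) labelled objects,
#    then predict a class for every object.
labels = rng.integers(0, 3, size=n_objects)  # hypothetical training labels
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
object_classes = clf.predict(features)

# 4. Map per-object classes back to a pixel-wise classification map.
class_map = object_classes[segments]
print(class_map.shape)
```

The hand-crafted feature step (2) is exactly the part the study replaces with FCN, which learns features directly from the image data.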

