ABSTRACT

Accurate urban land use/cover monitoring is an essential step towards a sustainable future. As a key part of the classification process, the characteristics of reference data can significantly affect classification accuracy and the quality of produced maps. However, ideal reference data is not always readily available; users frequently have difficulty generating sufficient reference data for some classes given time, cost, data availability, expertise level, or other limitations. This study addresses this lack of sufficiently balanced reference data by presenting a modified hybrid one-class support vector data description (SVDD) model. The underlying hypothesis is that the lack of balanced reference data can be overcome through integration of partially extracted results and multi-temporal spectral information. The partially extracted results, defined as highly accurate classified pixels identified in previous algorithmic iterations, allow a gradual increase of the available training data. Furthermore, the method incorporates a voting system that integrates multi-temporal images using the SVDD algorithm. We applied this hybrid method to binary impervious-surface classification of multi-temporal Landsat Thematic Mapper imagery from Central New York with imbalanced reference data. The proposed hybrid one-class SVDD model achieved a 5–6% improvement in overall accuracy and a 0.05–0.09 improvement in kappa over the typical one-class SVDD benchmark. Although the method was tested on a single site (albeit with an unusually large reference dataset of >870,000 pixels), we feel confident in suggesting implementation of our methodology in other sites over the traditional method, because our approach automatically reverts to the traditional method when voting is inconsistent or when too few highly accurate classified pixels are available to assist future iterations. Future work could explore the quantity and temporal specificity (e.g., benefits of specific months) of the multi-temporal image selection and/or test other one-class classifiers.
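To make the described workflow concrete, the sketch below illustrates one plausible reading of the hybrid procedure in Python: a one-class classifier is fit per acquisition date, the per-date predictions are combined by voting, and confidently classified pixels are recycled as additional training data in the next iteration. This is a minimal illustration under stated assumptions, not the authors' implementation: scikit-learn's OneClassSVM (with an RBF kernel, under which it is closely related to SVDD) stands in for SVDD, and the function name, vote threshold, and loop settings are all assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Illustrative sketch only. OneClassSVM with an RBF kernel stands in for SVDD;
# all names, thresholds, and loop details below are assumptions, not the
# paper's code.

def hybrid_one_class_svdd(images, target_pixels, n_iters=3, vote_threshold=None):
    """Iteratively classify target (e.g. impervious) pixels across dates.

    images        : list of (n_pixels, n_bands) arrays, one per acquisition date
    target_pixels : boolean mask of the initial (imbalanced) reference pixels
    """
    n_dates = len(images)
    if vote_threshold is None:
        vote_threshold = n_dates  # hypothetical choice: require unanimous votes
    train_mask = target_pixels.copy()

    for _ in range(n_iters):
        # Fit one classifier per date; each casts one vote per pixel.
        votes = np.zeros(images[0].shape[0], dtype=int)
        for X in images:
            clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
            clf.fit(X[train_mask])
            votes += (clf.predict(X) == 1)

        # Pixels voted target on (nearly) every date play the role of the
        # "partially extracted results" and are fed back as training data.
        confident = votes >= vote_threshold
        if not confident.any():
            break  # no consistent votes: result reduces to the plain SVDD output
        train_mask |= confident

    return train_mask
```

In this sketch, `images` would hold the per-date Landsat TM band stacks flattened to pixels and `target_pixels` the initial impervious reference mask; the early exit when no pixel wins enough votes mirrors the abstract's fallback to the traditional one-class method.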