Abstract

This paper presents a method to overcome some of the disadvantages associated with using artificial neural networks (ANNs) as supervised classifiers. The proposed method aims to speed up network learning, improve classification accuracies and reduce the variability in classification performance caused by random weight initialization. This is achieved by transferring implicit knowledge from a previously learned source task to a new target task using the proposed algorithm, Discriminality Based Transfer (DBT). The approach is compared with conventional network training and a literal transfer method in a 13-class tropical savannah classification experiment using Landsat Thematic Mapper (TM) data. Knowledge was extracted from a network trained on the Kara experimental site in Togo and used to classify the Savanes-L'Oti area, which differs in geographical position, image acquisition date, climatological conditions and land cover. Network learning was sped up by factors of 5.2, 4.3 and 1.8 using 5-, 10- and 20-pixels-per-class training sets, respectively; larger training sets showed smaller speed improvements. After applying DBT, average classification accuracies did not differ significantly from those obtained after training randomly initialized networks, although DBT tended to perform better on smaller training sets. Differences in individual class accuracies could be explained by analysing Bhattacharyya (BH) distances calculated between all Kara and Savanes-L'Oti classes. Finally, the variability in classification performance decreased significantly for the 5-, 10- and 20-pixels-per-class training sets after applying DBT.
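As context for the class-separability analysis mentioned above, the following is a minimal sketch of the standard Bhattacharyya distance between two Gaussian class distributions, the form commonly used with per-class statistics of Landsat TM bands. It assumes class means and covariance matrices estimated from the training pixels; the exact estimation procedure used in the paper is not given here, and the function and variable names (bhattacharyya_distance, kara_stats, savanes_stats) are illustrative only.

    import numpy as np

    def bhattacharyya_distance(mu1, cov1, mu2, cov2):
        """Bhattacharyya distance between two Gaussian class distributions.

        mu1, mu2   : mean spectral vectors of a class (e.g. TM bands)
        cov1, cov2 : corresponding class covariance matrices
        """
        mu_diff = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
        cov_avg = (np.asarray(cov1, dtype=float) + np.asarray(cov2, dtype=float)) / 2.0

        # First term: separation of the class means, scaled by the average covariance.
        term_mean = 0.125 * mu_diff @ np.linalg.solve(cov_avg, mu_diff)

        # Second term: dissimilarity of the class covariances (log-determinant ratio).
        _, logdet_avg = np.linalg.slogdet(cov_avg)
        _, logdet_1 = np.linalg.slogdet(cov1)
        _, logdet_2 = np.linalg.slogdet(cov2)
        term_cov = 0.5 * (logdet_avg - 0.5 * (logdet_1 + logdet_2))

        return term_mean + term_cov

    def cross_site_distances(kara_stats, savanes_stats):
        """BH distance between every source (Kara) and target (Savanes-L'Oti) class.

        Both arguments are assumed to map class labels to (mean, covariance) pairs.
        """
        return {
            (src, tgt): bhattacharyya_distance(m1, c1, m2, c2)
            for src, (m1, c1) in kara_stats.items()
            for tgt, (m2, c2) in savanes_stats.items()
        }

Under these assumptions, a large BH distance between a source class and its target counterpart indicates low spectral similarity between the two sites for that class, which is one way the class-wise accuracy differences reported above could be interpreted.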
