Abstract
Convolutional neural networks (CNNs) can effectively detect objects in satellite images, but training an appropriate feature extractor requires a large number of labeled samples. Creating these labels demands significant concentration and increases the user's workload, because satellite images cover areas that are quite large relative to the scale of the objects. We propose a human-cooperative semi-self-training (SST) framework that reduces the user's burden when training a CNN. The SST cooperatively collects training samples from unlabeled data by alternating two phases: self-training and user intervention. The self-training phase automatically constructs the training dataset and trains the CNN, while the user intervention phase expands the number of accurately labeled samples. Notably, the SST asks users to label samples only when automatic training stagnates. We improve the self-training phase, thereby reducing the frequency of user intervention, through two modules: intelligent dataset construction and pre-training. The intelligent dataset construction module collects only effective training samples by automatically labeling candidates and evaluating them with the tentatively trained CNN, while the pre-training module facilitates training of the CNN's feature extractor by having the CNN learn handcrafted image features. Introducing the pre-training module into dataset construction further improves self-training because it enhances the quality of the automatic labeling performed by the pre-trained CNN. Experimental results demonstrate that a CNN trained with the proposed method achieved 92% of the performance of a model trained on the complete dataset while using only 2.6% of the labeled samples.
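The alternating self-training / user-intervention loop summarized above can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the callable interface (`predict`, `fit`, `validate`, `ask_user`), the confidence threshold, and the patience-based stagnation criterion are all assumptions introduced here for clarity.

```python
def sst_loop(predict, fit, validate, unlabeled, ask_user,
             conf_threshold=0.9, patience=2, max_rounds=10):
    """Hypothetical semi-self-training (SST) loop.

    predict(x)  -> (label, confidence) from the current model
    fit(pairs)  -> retrain the model on (sample, label) pairs
    validate()  -> scalar validation score of the current model
    ask_user(x) -> ground-truth label supplied by a human annotator
    """
    labeled = []                 # accumulated training set
    best, stagnant = -1.0, 0
    for _ in range(max_rounds):
        # Self-training phase: pseudo-label unlabeled samples and keep
        # only confident ones (intelligent dataset construction).
        confident = [(x, y) for x in unlabeled
                     for y, c in [predict(x)] if c >= conf_threshold]
        for x, _ in confident:
            unlabeled.remove(x)
        labeled.extend(confident)
        fit(labeled)

        # Track validation progress to detect stagnation.
        score = validate()
        if score > best:
            best, stagnant = score, 0
        else:
            stagnant += 1

        # User-intervention phase: request a manual label only when
        # automatic training has stagnated for `patience` rounds.
        if stagnant >= patience and unlabeled:
            x = unlabeled.pop()
            labeled.append((x, ask_user(x)))
            stagnant = 0
        if not unlabeled:
            break
    return labeled
```

The key property the sketch preserves is that `ask_user` is invoked only on stagnation, so human labeling effort stays proportional to how often automatic training fails to improve.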