Abstract

Advances in space-based ocean surveillance systems have improved the detection of objects in high-quality remote-sensing big data. Previous studies have mainly focused on detecting and recognizing objects with deep learning and statistical frameworks, but they have not fully explored how to make the decision-making process behind the final predictions transparent and reasonable, which is vital for civil and military applications. To bridge this gap, the present study proposes an explainable attention network for fine-grained ship image classification. The network is designed to increase attention to objects' discriminative parts and to explore the intrinsic relationships between multiple attention parts and the predicted outcomes. Several causal multi-attention maps are generated by combining a multi-head attention mechanism with a structural causal model. The convolutional filters in the last layer of the network are divided into several groups, and each group is trained to express specific semantic information under the supervision of a filter loss function. The resulting attention maps show which parts of an object the network adopts as the key factors for its final prediction. During training, the attention mechanism rapidly focuses on the salient features of objects and guides the rest of the network, improving explainability without weakening discriminative power or compromising classification accuracy. Extensive experiments on two public datasets show that the network achieves both high classification accuracy and strong explainability.
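
The abstract does not include implementation details, so the following is only a minimal sketch of the filter-grouping idea it describes: the last convolutional layer's channels are split into groups, each group produces one attention map, and a filter loss pushes every group toward a compact, distinct semantic part. The names GroupedFilterAttention and filter_loss, the group count, and the entropy and diversity terms are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupedFilterAttention(nn.Module):
    """Split the C channels of the last conv feature map into `num_groups`
    groups and turn each group into one spatial attention map (assumed design)."""

    def __init__(self, num_channels: int, num_groups: int):
        super().__init__()
        assert num_channels % num_groups == 0
        self.num_groups = num_groups

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> group channels -> average within each group
        b, c, h, w = feat.shape
        grouped = feat.view(b, self.num_groups, c // self.num_groups, h, w)
        maps = grouped.mean(dim=2)                      # (B, G, H, W)
        # Softmax over spatial positions so each map is a distribution.
        maps = F.softmax(maps.flatten(2), dim=-1).view(b, self.num_groups, h, w)
        return maps


def filter_loss(maps: torch.Tensor, div_weight: float = 0.1) -> torch.Tensor:
    """Assumed supervision for the filter groups: each attention map should be
    spatially concentrated (low entropy) and different groups should attend to
    different locations (low pairwise overlap)."""
    b, g, _, _ = maps.shape
    flat = maps.flatten(2)                              # (B, G, H*W)
    # Concentration term: spatial entropy of each group's attention map.
    entropy = -(flat * (flat + 1e-8).log()).sum(dim=-1).mean()
    # Diversity term: penalize overlap between different groups' maps.
    overlap = torch.einsum('bgn,bhn->bgh', flat, flat)  # (B, G, G)
    off_diag = overlap - torch.diag_embed(torch.diagonal(overlap, dim1=1, dim2=2))
    diversity = off_diag.sum(dim=(1, 2)).mean() / (g * (g - 1))
    return entropy + div_weight * diversity


if __name__ == "__main__":
    feat = torch.randn(2, 512, 14, 14)                  # e.g. last-layer features
    attn = GroupedFilterAttention(num_channels=512, num_groups=8)
    maps = attn(feat)
    print(maps.shape, filter_loss(maps).item())         # torch.Size([2, 8, 14, 14])
```

In this sketch the filter loss would be added to the standard classification loss during training, so the grouped attention maps become part-specific without changing the backbone or the classifier head.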
