Abstract

Person re-identification (Re-ID) aims to retrieve a person of interest across multiple non-overlapping cameras. In recent years, the construction of aerial person Re-ID datasets has attracted growing attention because visual surveillance from unmanned aerial vehicle (UAV) platforms has become highly valuable in real-world scenarios. However, pedestrian images captured by ground cameras differ greatly from those captured by UAVs, so person Re-ID methods developed for ground person images struggle when applied to aerial person images. In this paper, we propose a novel meta-transfer learning method for person Re-ID in aerial imagery; this approach trains a generalisable Re-ID model to learn discriminative feature representations for aerial person images. Specifically, a meta-learning strategy is introduced to learn a feature extractor, and a transfer learning strategy is introduced to exploit and further improve the acquired meta-knowledge. To counteract the slower convergence and reduced recognition accuracy caused by difficult categories in the dataset, we propose a learning strategy based on curriculum sampling that is harmonised with our meta-transfer learning framework. In addition, a new metric formulation of sample similarity based on the Mahalanobis distance is introduced to improve the optimisation of the model. Extensive comparative experiments are conducted on a large-scale aerial Re-ID dataset, and the results show that our method achieves a Rank-1 accuracy of 63.63% and a mean average precision (mAP) of 38.02%, demonstrating its potential for person Re-ID in aerial images. Ablation studies further validate that each component contributes to improving the performance of the model.
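The abstract does not spell out the exact similarity formulation; as an illustrative sketch, a standard Mahalanobis-distance measure between two feature embeddings $$x_i$$ and $$x_j$$, with $$M$$ a learned positive semi-definite matrix (these symbols are assumptions for illustration, not taken from the paper), is

$$d_M(x_i, x_j) = \sqrt{(x_i - x_j)^{\top} M (x_i - x_j)},$$

which reduces to the Euclidean distance when $$M$$ is the identity and otherwise lets the model reweight and correlate feature dimensions during optimisation.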
