Abstract

The growing attention to surveillance-video-based analysis has made vehicle re-identification one of the current hot areas of research. Extracting discriminative visual representations for vehicle re-identification is challenging due to the low variance among vehicles that share the same model, brand, type, and color. Recently, several methods have been proposed for vehicle re-identification that use either a feature-learning or a metric-learning approach. However, an efficient and cost-effective model is still in significant demand. In this paper, we propose multi-label-based similarity learning (MLSL) for vehicle re-identification, obtaining an efficient deep-learning-based model that derives robust vehicle representations. Overall, our model features two main parts. The first is a multi-label-based similarity learner that employs a Siamese network on three different attributes of the vehicles: vehicle ID, color, and type. The second is a regular CNN-based feature learner employed to learn feature representations from the vehicle ID attribute. The model is trained jointly on both parts. To validate the effectiveness of our model, an extensive set of experiments was conducted on three of the largest well-known datasets: VeRi-776, VehicleID, and VERI-Wild. Furthermore, the parts of the proposed model are validated by exploring the influence of each part on overall performance. The results demonstrate the superiority of our model over multiple state-of-the-art methods on all three datasets.

Highlights

  • The task of extracting robust visual representations is the cornerstone of building all effective algorithms for computer vision applications

  • Vehicle image analysis has widely attracted the attention of researchers due to the revolution in artificial intelligence techniques, particularly convolutional neural networks (CNNs)

  • The contributions of this work can be summarized as follows: 1) We introduce multi-label-based similarity learning for vehicle re-identification, which jointly learns three different similarities of vehicle pairs over the attributes vehicle ID, color, and type

Summary

INTRODUCTION

The task of extracting robust visual representations is the cornerstone of building effective algorithms for computer vision applications. Most state-of-the-art methods use a metric (similarity) learning scheme either as the foundation of their models or as their most important part. This learning scheme pushes the neural network to generate more discriminative features. The contributions of this work can be summarized as follows: 1) We introduce multi-label-based similarity learning for vehicle re-identification, which jointly learns three different similarities of vehicle pairs over the attributes vehicle ID, color, and type. Unlike methods in the literature, where the similarity label assigned to each pair of vehicle images must be either 0 or 1 based on the vehicle ID alone, our proposed model is inspired by the human visual attention mechanism: it is designed to drive the distance between vehicles with the same identity toward 0, whereas the distance between dissimilar vehicles is maximized, with each unshared attribute, i.e., ID, color, and type, contributing to it.
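The multi-label pairwise objective described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: the function names (`pair_target`, `multilabel_contrastive_loss`), the per-attribute weights, and the margin-scaled contrastive form are assumptions chosen to match the description that same-identity pairs are pulled toward distance 0 while each unshared attribute (ID, color, type) adds to the target separation of dissimilar pairs.

```python
import numpy as np

def pair_target(a, b, weights=(1.0, 1.0, 1.0)):
    """Target separation for a vehicle pair: 0 for the same identity,
    otherwise each unshared attribute (id, color, type) contributes
    its weight. Attribute dicts and weights are illustrative."""
    if a["id"] == b["id"]:
        return 0.0
    keys = ("id", "color", "type")
    return sum(w for k, w in zip(keys, weights) if a[k] != b[k])

def multilabel_contrastive_loss(fa, fb, target, margin_unit=1.0):
    """Contrastive-style loss on a pair of embeddings: same-identity
    pairs minimize squared distance; dissimilar pairs are pushed past
    a margin that grows with the number of unshared attributes."""
    d = np.linalg.norm(fa - fb)
    if target == 0.0:
        return d ** 2                      # pull matched pair together
    margin = margin_unit * target          # larger margin, more mismatches
    return max(margin - d, 0.0) ** 2       # push apart up to the margin
```

For example, a pair differing in ID, color, and type gets target 3.0 and so must be separated by three margin units, while a pair differing only in ID gets target 1.0.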

RELATED WORK
VEHICLE MULTI-LABEL-BASED SIMILARITY LEARNING
ONLINE BATCH GENERATOR
EXPERIMENTAL RESULTS
CONCLUSION