Abstract
Learning a robust and discriminative feature representation has always been challenging in vehicle re-identification due to intra-class variability and inter-class similarity. While most research contributions focus on adding more attributes or modifying existing state-of-the-art architectures to enhance learning capacity, our approach tackles the problem through multi-domain learning. Many multi-domain learning approaches blindly merge datasets and later fine-tune the model on the target dataset; these methods are computationally expensive. Our paper focuses on effectively constructing a large-scale vehicle re-identification dataset by selectively choosing only images that share similar attributes with the target dataset. Through extensive experiments, we conclude that our approach outperforms other state-of-the-art models in vehicle re-identification by large margins. We also compare our method with other multi-domain learning methods to show that ours uses less external data while still achieving superior performance, proving that not all data matters.