Abstract

Unsupervised domain-adaptive vehicle density estimation transfers knowledge learned from a labeled source domain to an unlabeled target domain, and has received extensive attention for its practicality and effectiveness. However, existing methods often rely on large amounts of expensive labeled data and ignore the substantial differences between data domains, so a network trained in one scene is difficult to apply to a target scene. To address these problems, this paper proposes an unsupervised domain-adaptive vehicle density estimation network based on joint adversarial learning (UDANet), which effectively extends the model to unlabeled target-domain data by adding two domain-adaptive components to a pre-trained backbone. Specifically, a multi-level feature discriminator is designed to align vehicle features at the image level and refine them at the pixel level through joint adversarial learning, achieving cross-domain transfer of features from the source domain to the target domain. In addition, because the shared convolutional structure incorrectly highlights irrelevant information, a feature correction module is designed to improve the network's adaptability to target-domain data by re-modeling the source and target data channel by channel. Several well-known backbones are used to demonstrate the strength of the chosen backbone and the transferability of the two domain-adaptive components. Domain adaptation experiments on four vehicle datasets show that UDANet achieves higher estimation accuracy than current state-of-the-art unsupervised methods and is well suited to vehicle density estimation in complex scenes.
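The abstract does not give implementation details, but the image-level plus pixel-level adversarial alignment it describes is commonly realized with a gradient reversal layer feeding two discriminators. The sketch below is a minimal PyTorch illustration of that general pattern, not the paper's actual architecture; all names (`GradReverse`, `ImageLevelD`, `PixelLevelD`, `adversarial_loss`) and hyperparameters are hypothetical.

```python
# Hypothetical sketch of joint adversarial alignment with a multi-level
# discriminator, in the style common to unsupervised domain adaptation.
# Class and function names are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated (scaled)
    gradient in the backward pass, so the backbone is trained to fool
    the discriminators while they learn to separate the domains."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class ImageLevelD(nn.Module):
    """One domain logit per image from globally pooled backbone features."""
    def __init__(self, in_ch):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_ch, 256), nn.ReLU(),
                                nn.Linear(256, 1))

    def forward(self, feat):
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)
        return self.fc(pooled)

class PixelLevelD(nn.Module):
    """One domain logit per spatial location, refining the alignment."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, 256, 1), nn.ReLU(),
                                  nn.Conv2d(256, 1, 1))

    def forward(self, feat):
        return self.conv(feat)

def adversarial_loss(feat_src, feat_tgt, img_d, pix_d, lam=0.1):
    """Source features labeled 0, target features labeled 1; both
    discriminators see gradient-reversed features."""
    loss = 0.0
    for feat, label in ((feat_src, 0.0), (feat_tgt, 1.0)):
        rev = grad_reverse(feat, lam)
        for d in (img_d, pix_d):
            logits = d(rev)
            target = torch.full_like(logits, label)
            loss = loss + F.binary_cross_entropy_with_logits(logits, target)
    return loss

# Usage with dummy backbone features for both domains:
src = torch.randn(2, 512, 32, 32)
tgt = torch.randn(2, 512, 32, 32)
loss = adversarial_loss(src, tgt, ImageLevelD(512), PixelLevelD(512))
loss.backward()
```

In a full training loop this adversarial term would be added to the supervised density-estimation loss on the labeled source data; the reversal coefficient `lam` trades off domain invariance against estimation accuracy.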
