Abstract

Tracking vehicles across a city using a network of multiple cameras is pivotal for enhancing urban and traffic management systems. However, this task is riddled with challenges such as wide geographical coverage, frequent view obstructions, and the diverse appearance of vehicles seen from different angles.
To address these complexities, the proposed solution, dubbed Overlapped Vehicle Detection and Tracking using Multimodal Contrastive Domain Sharing Generative Adversarial Network optimized with Efficient Multi-camera system (MCDS-GAN), leverages techniques from computer vision, image processing, machine learning, and sensor fusion. The system detects and tracks vehicles even in scenarios where multiple camera views overlap, making it applicable to domains such as traffic management, surveillance, and autonomous vehicles.
The methodology uses datasets such as Common Objects in Context (COCO) and ImageNet for training. Detection and tracking are performed by the Multimodal Contrastive Domain Sharing Generative Adversarial Network, followed by vehicle re-identification with the Topological Information Embedded Convolutional Neural Network (TIE-CNN).
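To make the data flow concrete, the sketch below shows one plausible way a multi-camera detect-track-re-identify pipeline can be wired together: per-camera detections carry appearance embeddings (such as those a TIE-CNN-style network might produce), and vehicles are linked across cameras by embedding similarity. All names here (Detection, assign_global_ids, the similarity threshold) are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of cross-camera vehicle re-identification by matching
# appearance embeddings; illustrative only, not the paper's implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    camera_id: int          # which camera produced this detection
    bbox: tuple             # (x, y, w, h) in image coordinates
    embedding: np.ndarray   # appearance feature vector used for re-identification

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def assign_global_ids(detections: list[Detection], threshold: float = 0.7) -> list[int]:
    """Greedily link detections whose embeddings are similar enough,
    giving matched vehicles the same global identity across cameras."""
    global_ids: list[int] = []
    gallery: list[tuple[int, np.ndarray]] = []   # (global_id, embedding) pairs seen so far
    next_id = 0
    for det in detections:
        best_id, best_sim = None, threshold
        for gid, emb in gallery:
            sim = cosine_similarity(det.embedding, emb)
            if sim > best_sim:
                best_id, best_sim = gid, sim
        if best_id is None:            # no match: start a new identity
            best_id = next_id
            next_id += 1
        gallery.append((best_id, det.embedding))
        global_ids.append(best_id)
    return global_ids

# Toy usage: two cameras observing what may be the same vehicle.
rng = np.random.default_rng(0)
feat = rng.normal(size=128)
dets = [Detection(0, (10, 20, 80, 40), feat),
        Detection(1, (200, 50, 90, 45), feat + 0.01 * rng.normal(size=128))]
print(assign_global_ids(dets))   # e.g. [0, 0] -> same global identity
```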
Moreover, optimization techniques are employed to ensure synchronization and efficiency within the system. MCDS-GAN is implemented in Python, and its effectiveness is rigorously evaluated using metrics such as accuracy, precision, recall, latency, response time, and scalability. Simulation results show that it achieves significantly higher accuracy than existing methods such as OC-MCT-OFOV, MT-MCT-VM-CLM, and TI-VRI.
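For reference, the detection-quality metrics named above follow their standard confusion-matrix definitions; a minimal computation (illustrative only, not the paper's evaluation protocol) is sketched below.

```python
# Standard accuracy, precision, and recall from confusion-matrix counts;
# illustrative only, not tied to the paper's evaluation code.
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    return {
        "accuracy":  (tp + tn) / total if total else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,  # fraction of detections that are correct
        "recall":    tp / (tp + fn) if (tp + fn) else 0.0,  # fraction of vehicles that are found
    }

print(classification_metrics(tp=90, fp=10, fn=5, tn=95))
```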
