Abstract
Vehicle trajectory anomaly detection plays an essential role in traffic video surveillance, autonomous driving navigation, and taxi fraud detection. Deep generative models have been shown to be promising solutions for anomaly detection, avoiding the costs involved in manual labeling. However, existing popular generative models such as Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs) are often plagued by training instability, mode collapse, and poor sample quality. To resolve this dilemma, we present DiffTAD, a novel vehicle trajectory anomaly detection framework based on the emerging diffusion models. DiffTAD formalizes anomaly detection as a noisy-to-normal process that progressively adds noise to a vehicle trajectory until the path is corrupted to pure Gaussian noise. The core idea of our framework is to devise deep neural networks to learn the reverse of the diffusion process and to detect anomalies by comparing the difference between a query trajectory and its reconstruction. DiffTAD is a parameterized Markov chain trained with variational inference, allowing the mean squared error to optimize the reweighted variational lower bound. In addition, DiffTAD integrates decoupled Transformer-based temporal and spatial encoders to model the temporal dependencies and spatial interactions among vehicles within the diffusion models. Experiments on the real-world trajectory dataset TRAFFIC demonstrate that DiffTAD achieves significant improvements over existing state-of-the-art methods, with maximum enhancements of 25.87% and 35.59% in AUC and F1, respectively. On the synthetic datasets CROSS, SynTra, and MAAD, the maximum improvements in AUC/F1 are 27.47%/38.56%, 25.38%/31.42%, and 58.22%/50.04%, respectively.
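To make the noisy-to-normal pipeline concrete, the sketch below illustrates the standard DDPM machinery the abstract invokes: closed-form forward noising of a trajectory, an iterated reverse (denoising) pass, and an anomaly score computed as the reconstruction error between a query trajectory and its denoised version. This is a minimal sketch under stated assumptions, not the paper's implementation: the `TrajectoryDenoiser` network, the noise schedule, the step count, and the partial-diffusion depth are all illustrative, and the single Transformer encoder here merely stands in for DiffTAD's decoupled temporal and spatial encoders.

```python
# Minimal sketch of diffusion-based trajectory anomaly scoring.
# All architecture and hyperparameter choices below are illustrative
# assumptions, not the DiffTAD configuration from the paper.
import torch
import torch.nn as nn

T = 100                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (standard DDPM)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # \bar{alpha}_t = prod_{s<=t} alpha_s

class TrajectoryDenoiser(nn.Module):
    """Hypothetical stand-in for DiffTAD's decoupled Transformer encoders.

    Predicts the noise eps added to a trajectory of shape (B, L, 2),
    where L is the number of trajectory points and 2 the (x, y) coordinates.
    """
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(2 + 1, d_model)  # coords + normalized step t
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)

    def forward(self, x_t, t):
        # Broadcast the (normalized) diffusion step to every trajectory point.
        t_feat = (t.float() / T).view(-1, 1, 1).expand(-1, x_t.size(1), 1)
        h = self.embed(torch.cat([x_t, t_feat], dim=-1))
        return self.head(self.encoder(h))

def forward_noise(x0, t):
    """Closed-form forward process: x_t = sqrt(a_bar_t) x0 + sqrt(1 - a_bar_t) eps."""
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps, eps

def training_loss(model, x0):
    """Simplified (reweighted) variational bound: MSE between true and predicted noise."""
    t = torch.randint(0, T, (x0.size(0),))
    x_t, eps = forward_noise(x0, t)
    return nn.functional.mse_loss(model(x_t, t), eps)

@torch.no_grad()
def anomaly_score(model, x0):
    """Diffuse a query trajectory part-way, denoise it back, and score it by
    the reconstruction error; normal trajectories should reconstruct well."""
    t_start = T // 2  # partial-diffusion depth (assumed)
    x_t, _ = forward_noise(x0, torch.full((x0.size(0),), t_start))
    for step in reversed(range(1, t_start + 1)):
        ts = torch.full((x0.size(0),), step)
        eps_hat = model(x_t, ts)
        a, a_bar = alphas[step], alpha_bars[step]
        # DDPM posterior mean; inject fresh noise except at the final step.
        x_t = (x_t - (1 - a) / (1 - a_bar).sqrt() * eps_hat) / a.sqrt()
        if step > 1:
            x_t = x_t + betas[step].sqrt() * torch.randn_like(x_t)
    return ((x_t - x0) ** 2).mean(dim=(1, 2))  # per-trajectory MSE score

# Usage: score a batch of 8 trajectories, each with 20 (x, y) points.
model = TrajectoryDenoiser()
trajectories = torch.randn(8, 20, 2)
print(anomaly_score(model, trajectories))
```

The `training_loss` function reflects the objective the abstract mentions: regressing the injected noise with mean squared error corresponds to optimizing the reweighted variational lower bound of a DDPM, and the reconstruction gap returned by `anomaly_score` is the kind of query-versus-reconstruction difference the framework uses to flag anomalies.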