Abstract
This study introduces the Abnormality Converging Scene Analysis Method (ACSAM) for detecting abnormal group behavior in crowded scenes from surveillance video or CCTV footage. Abnormal behavior recognition requires classifying activities and gestures across continuous scenes, which is computationally demanding, particularly in complex crowd scenes, and consequently reduces recognition accuracy. To address these issues, ACSAM employs a convolutional neural network (CNN) augmented with Abnormality and Crowd Behavior Training layers to detect and classify abnormal activities accurately, regardless of crowd density. The method extracts frames from the input scene and uses the CNN to perform conditional validation of abnormality factors, comparing the current value with the previous high value to maximize detection accuracy. As the abnormality factor increases, the identification rate improves with additional training iterations. The system was trained on 34 video samples and tested on 26, outperforming approaches such as DeepROD, MSI-CNN, and PT-2DCNN. Specifically, ACSAM achieved a 12.55% improvement in accuracy, a 12.97% increase in recall, and a 10.23% enhancement in convergence rate. These results suggest that ACSAM effectively overcomes the computational challenges inherent in crowd scene analysis, offering a robust solution for real-time abnormal behavior recognition in crowded environments.
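As a rough illustration of the frame-level conditional validation described above, the Python sketch below extracts frames from a video and flags those whose abnormality factor exceeds the previous high value. The `abnormality_score` function is a hypothetical placeholder, not the paper's CNN with Abnormality and Crowd Behavior Training layers; the margin parameter is likewise an assumption for illustration only.

```python
# Minimal sketch of per-frame conditional validation against a running
# maximum abnormality factor. The scoring function is a stand-in for
# ACSAM's CNN-based abnormality estimation, which is not reproduced here.
import cv2
import numpy as np


def abnormality_score(frame: np.ndarray) -> float:
    """Hypothetical placeholder: a simple gradient-energy proxy,
    NOT the paper's CNN-based abnormality factor."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(np.mean(np.abs(cv2.Laplacian(gray, cv2.CV_64F))))


def detect_abnormal_frames(video_path: str, margin: float = 0.1) -> list[int]:
    """Flag frame indices whose abnormality factor exceeds the
    previously observed high value by the given margin."""
    cap = cv2.VideoCapture(video_path)
    flagged, previous_high, index = [], 0.0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        score = abnormality_score(frame)
        # Conditional validation: compare the current value with the previous high.
        if score > previous_high * (1.0 + margin):
            flagged.append(index)
            previous_high = score
        index += 1
    cap.release()
    return flagged
```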