Abstract
Automatic Violence Detection and Classification (AVDC) with deep learning has garnered significant attention in computer vision research. This paper presents a novel approach that combines a custom Deep Convolutional Neural Network (DCNN) with a Gated Recurrent Unit (GRU) to develop a new AVDC model called BrutNet. Specifically, a time‐distributed DCNN (TD‐DCNN) is developed to generate a compact 2D representation with 512 spatial features per frame from a set of equally‐spaced frames of dimension 160×90 in short video segments. Further, to leverage the temporal information, a GRU layer is utilised, generating a condensed 1D vector that enables binary classification of violent or non‐violent content through multiple dense layers. Overfitting is addressed by incorporating dropout layers with a rate of 0.5, while the hidden and output layers employ rectified linear unit (ReLU) and sigmoid activations, respectively. The model is trained on an NVIDIA Tesla K80 GPU through Google Colab and demonstrates superior performance compared to existing models across several video datasets: hockey fights, movie fights, AVD, and RWF‐2000. Notably, the model requires only 3.416 million parameters while achieving test accuracies of 97.62%, 100%, 97.22%, and 86.43% on the respective datasets. Thus, BrutNet exhibits the potential to emerge as a highly efficient and robust AVDC model in support of greater public safety, content moderation and censorship, computer‐aided investigations, and law enforcement.
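The shape pipeline described above — per-frame TD-DCNN features condensed by a GRU into one 1D vector, then classified by dense layers — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the frame count per segment (8 here), the GRU hidden size (256), and all weights are assumed placeholders, since the abstract specifies only the 512 features per frame, the dense ReLU hidden layers, and the sigmoid output. Dropout, used during training, is omitted at inference.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_condense(frame_feats, hidden=256):
    """Run a single GRU layer (Cho et al. formulation) over per-frame
    feature vectors and return the final hidden state -- the condensed
    1D video descriptor the abstract describes."""
    T, d = frame_feats.shape
    # Randomly initialised weights: stand-ins for trained parameters.
    Wz, Wr, Wh = (rng.normal(0, 0.05, (d, hidden)) for _ in range(3))
    Uz, Ur, Uh = (rng.normal(0, 0.05, (hidden, hidden)) for _ in range(3))
    h = np.zeros(hidden)
    for x in frame_feats:
        z = sigmoid(x @ Wz + h @ Uz)              # update gate
        r = sigmoid(x @ Wr + h @ Ur)              # reset gate
        h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
        h = (1 - z) * h + z * h_tilde
    return h

# 8 equally spaced frames per segment (count assumed), each already
# reduced to 512 spatial features by the TD-DCNN.
feats = rng.normal(size=(8, 512))
video_vec = gru_condense(feats)                   # shape: (256,)

# Dense head: ReLU hidden layer, then a sigmoid unit giving the
# binary violent / non-violent probability.
W1 = rng.normal(0, 0.05, (256, 64))
W2 = rng.normal(0, 0.05, (64, 1))
hidden_act = np.maximum(0.0, video_vec @ W1)      # ReLU
prob = float(sigmoid(hidden_act @ W2)[0])         # probability in (0, 1)
```

The key design point the sketch mirrors is that the TD-DCNN applies the same 2D convolutional weights to every frame, so temporal ordering is handled entirely by the GRU rather than by 3D convolutions, which keeps the parameter count low.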