Video surveillance systems are receiving increasing attention in computer vision due to user demand for security. Such systems make it possible to observe human movement and infer the activities being performed. In this work, a Hierarchical Auto-Associative Polynomial Convolutional Neural Network (HA-PCNN) combined with Garra Rufa Fish Optimization (GRFO) is proposed for Human Activity Recognition (HAR) in a video surveillance setting. First, human activity images are obtained from video surveillance cameras that record daily human activities. The input images are then passed to a pre-processing stage based on the Switched Mode Fuzzy Median Filter (SMFM), which reduces the noise present in the images, normalizes the dataset, and improves image quality. The pre-processed images are next fed to a Fast Discrete Curvelet Transform with Wrapping (FDCT-WRP)-based feature extraction method to extract the relevant features. The extracted features are then supplied to the HA-PCNN model for human activity classification. On its own, the HA-PCNN does not incorporate any optimization method for tuning its parameters; here, GRFO is employed to optimize the HA-PCNN parameters. The proposed HA-PCNN-GRFO methodology classifies human activities such as standing, sitting, running, walking, and sleeping. Its performance is evaluated on the Python platform using metrics including accuracy, recall, precision, specificity, and F-measure. On the UCI HAR dataset, the proposed HA-PCNN-GRFO approach achieves an average accuracy of 97.3%, average recall of 96.5%, average precision of 97.2%, average specificity of 95.9%, and average F1-score of 96.8% for classifying human activities, and it outperforms conventional HAR approaches.
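The metrics listed above can all be derived from a multi-class confusion matrix over the five activity classes. The following is a minimal sketch, in Python to match the evaluation platform named in the abstract, assuming the ground-truth and predicted activity labels are available as integer arrays; the `evaluate` helper, the `ACTIVITIES` list, and the dummy labels are illustrative assumptions and not part of the original implementation.

```python
# Minimal sketch of the evaluation step: accuracy plus macro-averaged
# precision, recall, specificity, and F1-score over five activity classes.
import numpy as np
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

ACTIVITIES = ["standing", "sitting", "running", "walking", "sleeping"]

def evaluate(y_true, y_pred, n_classes=len(ACTIVITIES)):
    labels = list(range(n_classes))
    cm = confusion_matrix(y_true, y_pred, labels=labels)

    # Per-class specificity = TN / (TN + FP), then macro-averaged;
    # scikit-learn has no built-in specificity, so compute it from the matrix.
    specificities = []
    for k in labels:
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp
        fn = cm[k, :].sum() - tp
        tn = cm.sum() - tp - fp - fn
        specificities.append(tn / (tn + fp) if (tn + fp) else 0.0)

    return {
        "accuracy":    accuracy_score(y_true, y_pred),
        "precision":   precision_score(y_true, y_pred, labels=labels,
                                       average="macro", zero_division=0),
        "recall":      recall_score(y_true, y_pred, labels=labels,
                                    average="macro", zero_division=0),
        "specificity": float(np.mean(specificities)),
        "f1_score":    f1_score(y_true, y_pred, labels=labels,
                                average="macro", zero_division=0),
    }

if __name__ == "__main__":
    # Example usage with synthetic labels (roughly 5% misclassifications).
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 5, size=200)
    y_pred = y_true.copy()
    flip = rng.random(200) < 0.05
    y_pred[flip] = rng.integers(0, 5, size=int(flip.sum()))
    print(evaluate(y_true, y_pred))
```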