Abstract

Video surveillance in smart cities supports efficient city operations, safer communities, and improved municipal services. Object detection is a computer vision technique used to detect instances of semantic objects of a given class in digital videos and images. Crowd density analysis is a widely used application of object detection, but crowd density classification techniques face challenges such as inter-scene deviations, intra-scene deviations, non-uniform density, and occlusion. Convolutional neural network (CNN) models are well suited to this task. This study presents an Aquila Optimization with Transfer Learning based Crowd Density Analysis for Sustainable Smart Cities (AOTL-CDA3S) technique. The presented AOTL-CDA3S technique aims to identify different kinds of crowd densities in smart cities. To accomplish this, the proposed AOTL-CDA3S model first applies a weighted average filter (WAF) to improve the quality of the input frames. Next, the AOTL-CDA3S technique employs the Aquila Optimization (AO) algorithm with the SqueezeNet model for feature extraction. Finally, an extreme gradient boosting (XGBoost) classifier is used to classify crowd densities. The AOTL-CDA3S approach is experimentally validated on benchmark crowd datasets, and the results are examined under distinct metrics. The results demonstrate the improvements of the AOTL-CDA3S model over recent state-of-the-art methods.
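The abstract outlines a three-stage pipeline: WAF preprocessing, SqueezeNet-based feature extraction guided by AO, and XGBoost classification. The sketch below illustrates how such a pipeline could be wired together; it is not the authors' implementation. The filter weights, pooling scheme, and XGBoost hyperparameters are illustrative assumptions, and the Aquila Optimization hyperparameter search itself is left as a placeholder.

```python
# Minimal sketch of a WAF -> SqueezeNet -> XGBoost crowd-density pipeline.
# Kernel weights, hyperparameters, and helper names are assumptions for
# illustration; the AO hyperparameter search is not implemented here.
import numpy as np
import cv2
import torch
from torchvision import models, transforms
from xgboost import XGBClassifier

def weighted_average_filter(frame):
    """Smooth a frame with a normalized 3x3 weighted-average kernel (assumed weights)."""
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=np.float32)
    kernel /= kernel.sum()
    return cv2.filter2D(frame, -1, kernel)

# Pretrained SqueezeNet used as a fixed feature extractor (classifier head unused).
squeezenet = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
squeezenet.eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(frame):
    """Return a globally pooled SqueezeNet feature vector for one filtered frame."""
    with torch.no_grad():
        x = preprocess(frame).unsqueeze(0)      # (1, 3, 224, 224)
        fmap = squeezenet.features(x)           # (1, 512, H, W)
        return fmap.mean(dim=(2, 3)).numpy()    # (1, 512) pooled descriptor

# XGBoost classifier over the extracted features; in the paper the AO algorithm
# would tune hyperparameters such as these, here they are fixed placeholders.
clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)

# Example usage (frames and density_labels are assumed to be provided):
# features = np.vstack([extract_features(weighted_average_filter(f)) for f in frames])
# clf.fit(features, density_labels)
# predictions = clf.predict(features)
```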
