Abstract

Counting the number of people in a crowd has gained attention in the last decade owing to its benefit to many applications, such as crowd behavior analysis, crowd management, and video surveillance systems. Counting crowded scenes, such as stadiums, is a challenging task due to the inherent occlusions and the density of the crowd inside and outside the stadiums, and finding a pattern to monitor and count thousands of people is difficult. The introduction of Convolutional Neural Networks (CNNs) has made it possible to perform this task with acceptable performance. However, the accuracy of a CNN-based method depends on the size of the data used for training, and available datasets are sparse; in particular, no dataset in the literature targets training applications for this kind of crowd scene. This paper proposes two main contributions: a new dataset for crowd counting, and a CNN-based method for counting the number of people and generating crowd density maps. The proposed Football Supporters Crowd dataset (FSC-Set) is composed of 6000 manually annotated images of different types of scenes containing thousands of people gathering in or around stadiums, and covers more than 1.5 million individuals. The collected images are captured under varying Fields of View (FOV), illuminations, resolutions, and scales. The dataset can also be used for other applications, such as individual localization and face detection, as well as team recognition from supporter images. Further, we propose a CNN-based method named FSCNet for crowd counting that exploits context-aware attention, spatial-wise attention, and channel-wise attention modules. The proposed method is evaluated on our established FSC-Set and other existing datasets and compared to state-of-the-art methods; the obtained results show satisfactory performance on all datasets. The dataset is made publicly available and can be requested using the following link: https://sites.google.com/view/fscrowd-dataset/
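The abstract states that FSCNet both counts people and generates crowd density maps. The exact ground-truth generation procedure is not given in this excerpt, but a common recipe in the crowd-counting literature is to place a unit impulse at each annotated head position and blur it with a Gaussian kernel, so that the map integrates to the person count. The sketch below follows that convention; the function name and the fixed `sigma=4.0` bandwidth are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map_from_points(points, height, width, sigma=4.0):
    """Build a ground-truth density map from point (head) annotations.

    A unit impulse is placed at each annotated location and blurred with
    a fixed-bandwidth Gaussian, so the map integrates (approximately) to
    the person count. NOTE: the fixed kernel and sigma=4.0 are
    illustrative assumptions; many works use geometry-adaptive kernels.
    """
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in points:  # annotation coordinates in pixels
        col = min(max(int(round(x)), 0), width - 1)
        row = min(max(int(round(y)), 0), height - 1)
        density[row, col] += 1.0
    return gaussian_filter(density, sigma=sigma)

# Usage: the integral of the map recovers the count used as the
# regression target, e.g. density_map_from_points(pts, H, W).sum()
```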

Highlights

  • The analysis of data is a challenging task due to the vast growth in the amount of data across most domains, especially data related to video technologies, which typically requires large communication, computation, storage, and transmission resources [1]

  • The methods used in the comparison are: CSRNet [10], SPN [11], Attention Scaling Network (ASNet) [49], MCNN [50], SANet [51], CANNet [25], SCAR [26], MobileCount [23], Structured Knowledge Transfer (SKT) [21], and DENet [24]

  • Evaluation metrics: to measure the effectiveness of each method, including the proposed one, the Mean Absolute Error (MAE) and Mean Squared Error (MSE) metrics are used; a minimal sketch of both follows this list

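Since the highlights cite MAE and MSE as the evaluation metrics, here is a minimal sketch of both, computed over per-image predicted and ground-truth counts. Note that "MSE" in the crowd-counting literature conventionally denotes the rooted variant (RMSE); whether this paper follows that convention is an assumption, since the excerpt does not give the formulas.

```python
import numpy as np

def mae(pred_counts, gt_counts):
    """MAE = (1/N) * sum_i |pred_i - gt_i| over N test images."""
    pred = np.asarray(pred_counts, dtype=np.float64)
    gt = np.asarray(gt_counts, dtype=np.float64)
    return float(np.mean(np.abs(pred - gt)))

def mse(pred_counts, gt_counts):
    """'MSE' as usually reported in crowd counting, i.e. the root of
    the mean squared error: sqrt((1/N) * sum_i (pred_i - gt_i)^2).
    NOTE: taking the square root follows the common convention in this
    field; whether this paper does so is an assumption."""
    pred = np.asarray(pred_counts, dtype=np.float64)
    gt = np.asarray(gt_counts, dtype=np.float64)
    return float(np.sqrt(np.mean((pred - gt) ** 2)))
```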


Introduction

The analysis of data is a challenging task due to the vast growth in the amount of data across most domains, especially data related to video technologies, which typically requires large communication, computation, storage, and transmission resources [1]. Advances in video surveillance techniques have made the analysis of the stored data difficult [2]. Extracting meaningful and interesting information from videos is therefore a key task [4]. The extracted features can be used to interpret surveilled scenes, and researchers have carried out a substantial amount of work to detect the pertinent information in the visualized video, according to the purpose and the situations analyzed.

