Abstract

To address the automatic detection of pedestrian fall events and provide timely feedback in emergency situations, this paper proposes an attention-guided, real-time, and robust method for detecting pedestrian fall events in complex scenes. First, the YOLOv3 network is used to detect pedestrians in the videos. Then, an improved DeepSort algorithm performs tracking-by-detection. After tracking, the authors extract effective features from each tracked bounding box using the output of the last convolutional layer, and introduce an attention weight factor into the tracking module for the final fall event prediction. Finally, the authors use a sliding window to store feature maps and an SVM classifier to re-detect fall events. Experimental results on the CityPersons dataset, the Montreal fall dataset, and a self-built dataset indicate that the approach performs well in complex scenes: the pedestrian detection rate is 87.05%, the accuracy of fall event detection reaches 98.55%, and the delay is within 120 ms.
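The following is a minimal sketch of the detection-tracking-classification pipeline outlined above, not the authors' implementation. The detector, tracker, and feature-extraction interfaces (detect_pedestrians, update_tracks, extract_features) are hypothetical placeholders standing in for YOLOv3, the improved DeepSort, and the attention-weighted feature step; the window length is an assumed value. Only the sliding-window buffer and the SVM re-detection use real library calls (collections.deque, sklearn.svm.SVC).

```python
# Sketch of the pipeline: detection -> tracking -> attention-weighted
# features -> sliding-window buffering -> SVM re-detection of fall events.
from collections import deque

import numpy as np
from sklearn.svm import SVC

WINDOW = 30  # assumed sliding-window length in frames


def detect_pedestrians(frame):
    """Placeholder for YOLOv3 inference; returns a list of bounding boxes."""
    raise NotImplementedError


def update_tracks(boxes):
    """Placeholder for the improved DeepSort tracking-by-detection step."""
    raise NotImplementedError


def extract_features(frame, track, attention_weight=1.0):
    """Placeholder for features from the last convolutional layer,
    scaled by an attention weight factor (illustrative only)."""
    raise NotImplementedError


def run(frames, svm: SVC):
    """Run the sketched pipeline over a sequence of video frames.

    `svm` is assumed to be an already-fitted binary classifier
    (label 1 = fall event) over averaged feature vectors.
    """
    feature_window = deque(maxlen=WINDOW)  # sliding window of feature maps
    for frame in frames:
        boxes = detect_pedestrians(frame)           # step 1: detection
        tracks = update_tracks(boxes)               # step 2: tracking-by-detection
        for track in tracks:
            feat = extract_features(frame, track)   # step 3: attention-weighted features
            feature_window.append(feat)
        if len(feature_window) == WINDOW:
            # step 4: SVM re-detects fall events over the buffered window
            x = np.mean(np.stack(list(feature_window)), axis=0, keepdims=True)
            if bool(svm.predict(x)[0]):
                print("fall event detected")
```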
