Abstract

Detecting pedestrians in a crowded scene in real time is a challenging task in crowd monitoring and management. Many researchers around the world have addressed this task and achieved satisfactory results. However, automating pedestrian detection in crowds remains an open problem whose difficulty depends on the crowd density of the scene. To ensure safety and security, automating crowd detection and tracking in real time is necessary when designing a robust and secure system. Object detection and localization has helped identify the major problems in pedestrian detection and has been a major step toward automatic crowd management. In this paper, we use tiny YOLOv4. YOLO (You Only Look Once) has proved useful for detecting and localizing objects in an image with impressive inference speed. A YOLO network divides the entire image into a fixed-size grid and then detects objects within these grid cells using bounding boxes. Using transfer learning on a YOLO model pre-trained on the COCO dataset, we handle the detection of pedestrians in surveillance videos. The paper discusses the implementation and detection performance of the proposed tiny YOLOv4 model on the UCSD Pedestrian Detection dataset, with promising results.
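As a rough illustration of the grid-based detection described above, the sketch below decodes a single grid-cell prediction into a normalized bounding box using the standard YOLOv2/v3-family box parameterization (which tiny YOLOv4 also uses). The function name and anchor values are illustrative, not taken from the paper.

```python
import math

def decode_yolo_box(tx, ty, tw, th, cx, cy, pw, ph, grid_size):
    """Decode one YOLO grid-cell prediction into a normalized box.

    (tx, ty, tw, th): raw network outputs for this cell/anchor
    (cx, cy): integer grid-cell column and row
    (pw, ph): anchor (prior) width and height, normalized to [0, 1]
    grid_size: number of cells per image side (e.g. 13)
    Returns (x_center, y_center, width, height) in [0, 1] image units.
    """
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = (sigmoid(tx) + cx) / grid_size  # center offset within cell, normalized
    by = (sigmoid(ty) + cy) / grid_size
    bw = pw * math.exp(tw)               # anchor box scaled by the prediction
    bh = ph * math.exp(th)
    return bx, by, bw, bh

# Zero raw outputs place the box center in the middle of cell (6, 6)
# of a 13x13 grid, with the anchor's own width and height.
box = decode_yolo_box(0.0, 0.0, 0.0, 0.0, cx=6, cy=6,
                      pw=0.2, ph=0.3, grid_size=13)
```

The sigmoid keeps each predicted center inside its own grid cell, which is what ties a detection to a specific cell of the grid.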
