Abstract
Pedestrian detection is a pressing problem in both academia and industry. However, most existing pedestrian detection methods fail to detect small-scale pedestrians because of the low contrast and motion blur these pedestrians exhibit in images and videos. In this paper, we propose a multi-level feature fusion strategy for detecting multi-scale pedestrians, which works particularly well for small-scale pedestrians that are relatively far from the camera: by fusing features across levels, the shallow feature maps encode more semantic and global information, which benefits the detection of small-scale pedestrians. In addition, we redesign the aspect ratios of the anchors to make them more robust for the pedestrian detection task. Extensive experiments on both the Caltech and CityPersons datasets demonstrate that our method outperforms state-of-the-art pedestrian detection algorithms. Our proposed approach achieves MR<sup>−2</sup> of 0.84%, 23.91% and 62.19% under the "Near", "Medium" and "Far" settings respectively on the Caltech dataset, and also achieves a better speed-accuracy trade-off than competing methods on the CityPersons dataset, at 0.28 seconds per 1024×2048-pixel image.
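The core idea of the fusion strategy described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual implementation: it assumes a simple element-wise fusion in which a deep, coarse feature map is upsampled and added to a shallow, fine-resolution map, so the shallow level gains the semantic context needed to detect small-scale pedestrians. All function names, shapes, and the fusion weight are hypothetical.

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fuse(shallow, deep, weight=0.5):
    """Fuse a shallow (C, 2H, 2W) map with an upsampled deep (C, H, W) map.

    The weight is a hypothetical mixing factor; real detectors typically
    learn such combinations (e.g. via 1x1 convolutions) rather than fix them.
    """
    return shallow + weight * upsample2x(deep)

# Shallow map: fine spatial resolution but weak semantics.
shallow = np.random.rand(8, 64, 64)
# Deep map: coarse resolution but strong semantic information.
deep = np.random.rand(8, 32, 32)

fused = fuse(shallow, deep)
print(fused.shape)  # (8, 64, 64): fine resolution retained after fusion
```

The fused map keeps the shallow level's spatial resolution, which is what allows small, distant pedestrians to remain localizable while still benefiting from deeper semantics.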