Abstract

Improving the reliability of nighttime pedestrian detection is a crucial challenge in the design of robust autonomous systems. Not surprisingly, most pedestrian fatalities occur in low-illumination settings, emphasizing the need for new algorithmic advances. This work presents a novel pedestrian detection approach that makes a number of key modifications to the state-of-the-art YOLOv5-PANet architecture in order to improve the reliability of features extracted from nighttime images. More specifically, the proposed architecture systematically incorporates powerful shuffle attention mechanisms and a transformer module to improve the feature learning pipeline. Instead of advocating the use of other sensing modalities that are better suited for nighttime detection, our approach relies only on conventional RGB cameras and is hence broadly applicable. Our empirical studies with nighttime pedestrian detection benchmarks show that, with only a minimal increase in model complexity, our approach provides significant improvements in detection efficacy over existing solutions. Finally, we explore the impact of post-hoc network pruning on the speed-accuracy trade-off of our approach and demonstrate that it is well suited to settings with reduced memory/compute budgets.
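To make the architectural idea concrete, the sketch below shows a simplified shuffle-attention block of the kind the abstract refers to, together with an illustration of post-hoc magnitude pruning. This is a minimal sketch under our own assumptions, not the authors' released code: the class names, the group size, and the point at which such a block would be inserted into the YOLOv5-PANet neck are all illustrative.

```python
# Simplified shuffle-attention block (hypothetical sketch, not the paper's code).
# Channels are split into groups; half of each group receives channel attention,
# the other half spatial attention, and the result is recombined with a channel shuffle.
import torch
import torch.nn as nn


class ShuffleAttention(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        self.groups = groups
        c = channels // (2 * groups)  # channels per branch within a group
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # learnable scale/shift for the channel-attention branch
        self.c_weight = nn.Parameter(torch.zeros(1, c, 1, 1))
        self.c_bias = nn.Parameter(torch.ones(1, c, 1, 1))
        # group norm plus learnable scale/shift for the spatial-attention branch
        self.gn = nn.GroupNorm(c, c)
        self.s_weight = nn.Parameter(torch.zeros(1, c, 1, 1))
        self.s_bias = nn.Parameter(torch.ones(1, c, 1, 1))
        self.sigmoid = nn.Sigmoid()

    @staticmethod
    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x.view(b, groups, c // groups, h, w).transpose(1, 2)
        return x.reshape(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x.view(b * self.groups, c // self.groups, h, w)
        x_c, x_s = x.chunk(2, dim=1)  # channel branch / spatial branch
        # channel attention: global context -> per-channel gate
        x_c = x_c * self.sigmoid(self.c_weight * self.avg_pool(x_c) + self.c_bias)
        # spatial attention: normalized features -> per-location gate
        x_s = x_s * self.sigmoid(self.s_weight * self.gn(x_s) + self.s_bias)
        out = torch.cat([x_c, x_s], dim=1).view(b, c, h, w)
        return self.channel_shuffle(out, 2)
```

For the pruning experiments mentioned at the end of the abstract, one common post-hoc approach (again an assumption about the general technique, not the authors' exact procedure) is unstructured magnitude pruning with PyTorch's built-in utilities:

```python
# Minimal illustration of post-hoc L1 magnitude pruning of convolutional layers.
import torch.nn as nn
import torch.nn.utils.prune as prune


def prune_conv_layers(model: nn.Module, amount: float = 0.3) -> nn.Module:
    """Zero out the smallest-magnitude weights in every Conv2d layer."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # make the sparsity permanent
    return model
```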
