Abstract

In nighttime driving scenes, insufficient and uneven lighting, together with the scarcity of high-quality datasets, makes the miss rate of nighttime pedestrian detection (PD) much higher than that of daytime detection. Vision-based distance detection (DD) offers low cost and good interpretability, but existing methods have low precision and poor robustness, and DD is mostly performed independently of PD. In this work, a narrowband near-infrared (NIR) camera and an NIR lamp were selected to mitigate the impact of noisy visible light (VIS) and improve imaging quality; a LiDAR was embedded in the system to obtain distance information; and NIRPed, a nighttime driving-scene dataset for joint detection of pedestrians and their distances, was built. NIRPed includes 146k pedestrian annotations, three times as many as NightOwls, the largest VIS nighttime pedestrian dataset. Based on Faster R-CNN, a joint PD and DD method using monocular imaging is proposed. The proposed method achieves joint detection of pedestrians and their distances on the NIRPed dataset with a PD log-average miss rate of 6.5% and a DD mean absolute error rate of 5.5%. For comparison, the joint detection method was also implemented on other large-scale VIS pedestrian datasets. Moreover, compared with existing vision-based DD methods, the proposed method is less affected by pedestrian distance and height, exhibits higher accuracy and robustness, and satisfies the ISO requirements for intelligent transportation systems (absolute error rate < 15%).
