Abstract

Recently, advanced driver assistance systems (ADAS) have attracted wide attention for pedestrian detection using the multispectral data generated by multiple sensors. However, image-based sensors struggle to perform reliably under instabilities such as changing illumination, object shadowing, and adverse weather. Considering the above, this study proposes a deep learning (DL) framework that exploits the complementary spectral information of RGB and thermal images to mitigate confusing light sources and to extract highly discriminative multimodal features through multispectral fusion. The proposed pedestrian detection method comprises a double-stream multispectral network (DSMN) that extracts per-modality features and a Yolo-based multispectral fusion double-stream detector (MFDs-Yolo). Moreover, a self-adaptive multispectral weight adjustment method, the improved illumination-aware network (i-IAN), drives the late-fusion strategy so that the different modalities complement each other. Experimental results on the public KAIST dataset and the FLIR multispectral pedestrian detection dataset demonstrate the strong performance of the proposed method, which even outperforms state-of-the-art methods under the miss rate (MR) (IoU@0.75) evaluation protocol.
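
To make the late-fusion idea concrete, the sketch below shows one plausible reading of an illumination-aware weighting step: a small network predicts a scalar weight from the RGB frame and blends the confidence scores of the RGB and thermal detector heads. This is a minimal illustration under assumed interfaces (class and function names here are hypothetical), not the authors' i-IAN or MFDs-Yolo implementation.

```python
# Minimal sketch (assumptions, not the paper's implementation): an
# illumination-aware sub-network predicts a fusion weight from the RGB
# image, and that weight blends per-modality detection scores in a
# late-fusion step.
import torch
import torch.nn as nn


class IlluminationAwareWeight(nn.Module):
    """Predicts a weight w in (0, 1) from the RGB image.
    w close to 1 favours the RGB stream (e.g., daytime);
    1 - w is applied to the thermal stream."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, rgb):
        return self.head(self.features(rgb))  # shape (B, 1)


def fuse_scores(rgb_scores, thermal_scores, w):
    """Late fusion: weighted sum of the confidence scores produced by the
    RGB and thermal detector heads for the same candidate boxes
    (hypothetical interface)."""
    return w * rgb_scores + (1.0 - w) * thermal_scores


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 512, 640)        # visible-spectrum frame
    rgb_scores = torch.rand(1, 10)          # 10 candidate boxes, RGB head
    thermal_scores = torch.rand(1, 10)      # same boxes, thermal head
    w = IlluminationAwareWeight()(rgb)      # illumination-aware weight
    fused = fuse_scores(rgb_scores, thermal_scores, w)
    print(w.item(), fused.shape)
```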
