Abstract

Recent advances in environmental perception for autonomous vehicles have been driven by deep learning-based approaches. However, effective traffic target detection in complex environments remains a challenging task. This paper presents a novel dual-modal instance segmentation deep neural network (DM-ISDNN) that merges camera and LIDAR data, which can efficiently address the problem of target detection in complex environments through multi-sensor data fusion. Due to the sparseness of the LIDAR point cloud data, we propose a weight assignment function that assigns different weight coefficients to the different feature pyramid convolutional layers of the LIDAR sub-network. We compare and analyze early-, middle-, and late-stage fusion architectures in depth. By comprehensively considering detection accuracy and detection speed, the middle-stage fusion architecture with a weight assignment mechanism, which yields the best performance, is selected. This work has great significance for exploring the best feature fusion scheme for a multi-modal neural network. In addition, we apply a mask distribution function to improve the quality of the predicted masks. A dual-modal traffic object instance segmentation dataset is established using 7481 camera and LIDAR data pairs from the KITTI dataset, with 79,118 manually annotated instance masks. To the best of our knowledge, no existing instance annotation for the KITTI dataset matches this quality and volume. A novel dual-modal dataset, composed of 14,652 camera and LIDAR data pairs, is collected using our own developed autonomous vehicle under different environmental conditions in real driving scenarios, for which a total of 62,579 instance masks are obtained using a semi-automatic annotation method. This dataset can be used to validate the detection performance of instance segmentation networks under complex environmental conditions. Experimental results on the dual-modal KITTI benchmark demonstrate that DM-ISDNN with middle-stage data fusion and the weight assignment mechanism achieves better detection performance than single- and dual-modal networks with other data fusion strategies, which validates the robustness and effectiveness of the proposed method. Moreover, compared with state-of-the-art instance segmentation networks, our method shows much better detection performance, in terms of AP and F1 score, on the dual-modal dataset collected under complex environmental conditions, which further validates the superiority of our method.
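As a concrete illustration of the middle-stage fusion with a weight assignment mechanism described above, the following minimal PyTorch sketch fuses camera and LIDAR feature pyramids level by level, scaling each sparse LIDAR level by its own weight coefficient before merging it with the corresponding camera level. The module name `WeightedMidFusion`, the learnable scalar weights, the 1x1-convolution merge, and all channel and level sizes are illustrative assumptions, not the implementation used in DM-ISDNN.

```python
# Minimal sketch (not the authors' code): middle-stage fusion of camera and
# LIDAR feature pyramids, where each LIDAR pyramid level is scaled by its own
# weight coefficient before being fused with the corresponding camera level.
import torch
import torch.nn as nn


class WeightedMidFusion(nn.Module):
    """Fuses per-level camera/LIDAR FPN features with per-level LIDAR weights.

    Layer names, channel sizes, and the fusion operator are hypothetical; the
    paper's exact weight assignment function may differ.
    """

    def __init__(self, num_levels: int = 5, channels: int = 256):
        super().__init__()
        # One scalar weight per pyramid level for the sparse LIDAR branch.
        self.lidar_weights = nn.Parameter(torch.ones(num_levels))
        # 1x1 convolutions to merge the concatenated camera/LIDAR features.
        self.fuse = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, kernel_size=1) for _ in range(num_levels)]
        )

    def forward(self, cam_feats, lidar_feats):
        # cam_feats / lidar_feats: lists of tensors [N, C, H_i, W_i], one per level.
        fused = []
        for i, (c, l) in enumerate(zip(cam_feats, lidar_feats)):
            weighted_l = self.lidar_weights[i] * l  # per-level LIDAR weighting
            fused.append(self.fuse[i](torch.cat([c, weighted_l], dim=1)))
        return fused


if __name__ == "__main__":
    # Toy pyramid shapes, just to show the module runs end to end.
    levels = [(64, 176), (32, 88), (16, 44), (8, 22), (4, 11)]
    cam = [torch.randn(1, 256, h, w) for h, w in levels]
    lidar = [torch.randn(1, 256, h, w) for h, w in levels]
    out = WeightedMidFusion()(cam, lidar)
    print([o.shape for o in out])
```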

Highlights

  • With an increase in vehicle ownership, frequent traffic accidents, low vehicle traffic efficiency, and environmental pollution have become key factors restricting the development of the automotive industry [1]

  • We propose a novel dual-modal instance segmentation network, which is of great significance for efficiently dealing with the problem of target detection in complex environments based on multi-sensor data fusion

  • By comprehensively considering detection accuracy and detection speed, the middle-stage fusion architecture with a weight assignment mechanism is selected for feature fusion in our work

Introduction

With an increase in vehicle ownership, frequent traffic accidents, low vehicle traffic efficiency, and environmental pollution have become key factors restricting the development of the automotive industry [1]. Autonomous vehicles have been receiving attention due to their great potential for improving vehicle safety and performance, traffic efficiency, and energy efficiency [2]. An autonomous vehicle senses its surroundings with various sensors. However, robust and accurate detection, classification, and tracking of traffic targets, such as pedestrians, cyclists, vehicles, and so on, in complex environments remain a technical challenge for the sensing system of autonomous vehicles.

Figure: Examples of complex environmental conditions: (a) sunny day-time (high illumination and great visibility); (b) rainy (low illumination); (c) smoggy (bad visibility); (d) night-time (low illumination and bad visibility); and (e) the average precision of classification prediction of Mask R-CNN, RetinaMask, and YOLACT under different environmental conditions.

