Abstract

Accurate object detection at night remains difficult for autonomous driving, mainly because of poor illumination: the RGB images captured by the camera carry less information than in daylight, object outlines are blurred, and the accuracy of semantic segmentation drops, leading to missed detections. To address this, this study proposes the following approach: point cloud data collected by a lidar is introduced during data acquisition and fused with the camera's image information to form precise multimodal three-dimensional information. A dual adversarial network then preprocesses the data, and U-Net semantic segmentation segments the multimodal 3D information. Simulation experiments are used to test and evaluate the segmentation performance, and once training is complete, the method is applied to object detection during autonomous driving. Compared with general methods, this approach detects objects accurately, is independent of light intensity, and has the advantage of good universality.
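The abstract does not give the fusion procedure in detail; a common way to combine a lidar point cloud with a camera image is to project the points into the image plane and stack the resulting depth map onto the RGB channels. The sketch below illustrates this idea only; the function names, the pinhole camera model, and the calibration matrices `K` (intrinsics) and `T` (lidar-to-camera extrinsics) are assumptions for illustration, not the paper's method.

```python
import numpy as np

def project_lidar_to_image(points, K, T, h, w):
    """Project Nx3 lidar points (lidar frame) into an h x w depth map.

    K: 3x3 camera intrinsic matrix; T: 4x4 lidar-to-camera extrinsic
    transform. Both are hypothetical calibration values for illustration.
    """
    # Transform homogeneous points into the camera frame.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    cam = cam[cam[:, 2] > 0]
    # Pinhole projection: (u, v) = (K p) / z.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    depth = np.zeros((h, w), dtype=np.float32)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Where several points hit one pixel, keep the nearest (smallest z):
    # iterate far-to-near so nearer points overwrite farther ones.
    for ui, vi, zi in sorted(zip(u[ok], v[ok], cam[ok, 2]),
                             key=lambda t: -t[2]):
        depth[vi, ui] = zi
    return depth

def fuse_rgbd(rgb, depth):
    """Stack the depth channel onto RGB -> H x W x 4 multimodal input."""
    return np.dstack([rgb, depth])
```

For example, with identity extrinsics and `K = [[100, 0, 32], [0, 100, 32], [0, 0, 1]]`, a point at (0, 0, 10) in front of the camera lands at pixel (32, 32) with depth 10; the fused H x W x 4 tensor would then be the input to the segmentation network.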
