Abstract
In intelligent vehicles, road environment perception is a key component of autonomous driving assistance systems: it forms the basis for vehicle decision-making and control and is a guarantee of safety while the vehicle is driving. Existing environment perception technology mainly targets well-lit environments and relies on visible-light imaging equipment, so it cannot make reliable judgments about the external environment under low visibility. Many existing perception systems depend chiefly on sensors whose effectiveness is weakened in low visibility by signal transmission, reflection, or absorption, resulting in incomplete or distorted data collection. Reduced visibility also shortens the sensing range of these sensors, hindering the system's ability to detect and recognize distant objects and limiting the advance warning and response time needed for safe navigation. To address this issue, this study proposed a combined method of infrared imaging and polarized imaging to collect feature data on road conditions in low visibility environments. The acquired images were denoised and enhanced, then input into the system for recognition and analyzed using a deep-learning-based semantic segmentation algorithm for low visibility road scenes. The results showed that on polarization-degree images the variable-weight combination model achieved a pixel accuracy of 91.2%, a mean pixel accuracy of 89.1%, and a mean intersection over union of 71.6%; on infrared images the corresponding values were 83.6%, 90.6%, and 62.1%. All indicators of the variable-weight combination model were higher than those of the U-shaped neural network (U-Net) model, indicating relatively excellent performance.
The results also indicated that infrared imaging helps acquire information at night or in low-light conditions, while polarized imaging adapts better to cluttered light and reflections, enabling the system to provide more robust environmental sensing in complex weather. The approach fills a critical gap in perception for autonomous driving systems under adverse weather conditions.
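The three evaluation metrics reported above (pixel accuracy, mean pixel accuracy, and mean intersection over union) are standard for semantic segmentation and can all be derived from a class confusion matrix. The sketch below shows one conventional way to compute them; the function name and the 3-class confusion matrix are illustrative assumptions, not part of the paper's method.

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute pixel accuracy (PA), mean pixel accuracy (mPA), and mean
    intersection over union (mIoU) from a confusion matrix where
    conf[i, j] counts pixels of true class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                           # correctly classified pixels per class
    pa = tp.sum() / conf.sum()                   # overall fraction of correct pixels
    mpa = np.mean(tp / conf.sum(axis=1))         # per-class accuracy, averaged
    union = conf.sum(axis=1) + conf.sum(axis=0) - tp
    miou = np.mean(tp / union)                   # per-class IoU, averaged
    return pa, mpa, miou

# Illustrative 3-class confusion matrix (values are made up)
conf = [[50, 2, 3],
        [4, 40, 1],
        [2, 2, 30]]
pa, mpa, miou = segmentation_metrics(conf)
```

Averaging per-class scores (mPA, mIoU) rather than pooling all pixels prevents large classes such as road surface from dominating the evaluation, which is why mIoU is typically the lowest and most discriminative of the three numbers.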