Abstract
Visual semantic segmentation is a key technology for scene understanding in autonomous driving, and its accuracy is degraded by lighting changes in images. This paper proposes a novel multi-exposure fusion approach for visual semantic enhancement in autonomous driving. First, a multi-exposure image sequence is aligned to construct a stable image input. Second, the high-contrast regions of the multi-exposure sequence are evaluated by a context aggregation network (CAN) to predict per-image weight maps. Finally, a high-quality image is generated by the weighted fusion of the multi-exposure sequence. The proposed approach is validated on the Cityscapes HDR dataset and on real-environment data. The experimental results show that the proposed method effectively restores features lost under changing illumination and improves the accuracy of subsequent semantic segmentation.
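The final fusion step described in the abstract can be sketched as a per-pixel weighted average over the aligned exposure sequence. This is a minimal illustration, not the paper's implementation: the function name `fuse_exposures` is hypothetical, and in the paper the per-pixel weights would be produced by the CAN, which is not reproduced here.

```python
import numpy as np

def fuse_exposures(images, weight_maps, eps=1e-8):
    """Weighted fusion of an aligned multi-exposure image sequence.

    images: list of N aligned HxWx3 float arrays (exposure sequence)
    weight_maps: list of N HxW per-pixel weight arrays (e.g. predicted
                 by a network such as the CAN in the paper)
    """
    imgs = np.stack(images).astype(np.float64)        # (N, H, W, 3)
    w = np.stack(weight_maps).astype(np.float64)      # (N, H, W)
    # Normalize weights so they sum to 1 at every pixel.
    w = w / (w.sum(axis=0, keepdims=True) + eps)
    # Broadcast weights over the color channels and sum over exposures.
    fused = (imgs * w[..., None]).sum(axis=0)         # (H, W, 3)
    return np.clip(fused, 0.0, 1.0)
```

With uniform weights this reduces to a simple per-pixel mean of the exposures; a learned weight map would instead favor the exposure with the best local contrast at each pixel.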
Published in: Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering