Abstract

Semantic segmentation of parking lot scenes is a prerequisite of environment perception for automatic parking, providing the vehicle with semantic information about its surroundings. However, because parking lots often suffer from dim lighting, unclear ground markings, and road reflections, semantic segmentation methods such as FCN cannot yet separate the background from ground markings such as lane lines well enough to meet the perception needs of automatic parking. This paper proposes a parking lot scene semantic segmentation method based on a surround-view system. The surround-view system consists of four fisheye cameras; the image from each camera undergoes distortion correction, inverse perspective transformation, and image stitching and fusion to produce a surround (bird's-eye) view. On this surround view, a semantic segmentation algorithm based on attention and feature fusion is proposed. Experiments on a self-made parking lot dataset with images of 1280x960 pixels show that the proposed method improves mIoU over the FCN model by 12.3%.
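The per-camera inverse perspective transformation mentioned above amounts to applying a 3x3 homography (obtained from extrinsic calibration) to undistorted pixel coordinates to map them into the top-down view. A minimal numpy sketch of that mapping step, assuming the homography `H` has already been estimated during calibration (the function name and values here are illustrative, not from the paper):

```python
import numpy as np

def warp_homography(points, H):
    """Map Nx2 pixel coordinates through a 3x3 homography H.

    This is the core operation of inverse perspective mapping:
    lift to homogeneous coordinates, multiply by H, then divide
    by the third coordinate to return to the image plane.
    """
    points = np.asarray(points, dtype=float)
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # to homogeneous
    mapped = pts_h @ np.asarray(H, dtype=float).T
    return mapped[:, :2] / mapped[:, 2:3]                       # perspective divide

# Sanity check: the identity homography leaves points unchanged.
H_identity = np.eye(3)
src = np.array([[100.0, 200.0], [640.0, 480.0]])
print(warp_homography(src, H_identity))
```

In a real surround-view pipeline this would be preceded by fisheye distortion correction for each of the four cameras, and the four warped images would then be stitched and blended into the ring view.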
