Abstract

Fisheye cameras, valued for their wide field of view, play a crucial role in perceiving a vehicle's surrounding environment. However, there is little research specifically addressing the severe distortion that fisheye images introduce into semantic segmentation. In addition, fisheye datasets for autonomous driving are scarce, which risks overfitting and limits a model's generalization ability.
For the semantic segmentation task, a method for transforming normal (rectilinear) images into fisheye images is proposed, which expands the fisheye image dataset. By employing a Transformer network together with Across Feature Map Attention, segmentation performance is further improved, reaching 55.6% mIoU on Woodscape. Additionally, leveraging the concept of knowledge distillation, the network achieves strong generalization through dual-domain learning without significantly compromising performance on Woodscape (54% mIoU).
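The fisheye augmentation step can be illustrated with a short sketch. The abstract does not specify the projection model, so the code below assumes the common equidistant model (r = f·θ) and remaps a rectilinear (pinhole) image, where r = f·tan(θ), into a synthetic fisheye view. The function name and the choice of focal length are hypothetical; the same nearest-neighbour warp could be applied to segmentation masks so image and label stay aligned.

```python
import numpy as np

def pinhole_to_fisheye(img, f=None):
    """Remap a pinhole (rectilinear) image to an equidistant fisheye image.

    Hypothetical sketch: assumes the equidistant model r_fish = f * theta,
    while the pinhole image obeys r_pin = f * tan(theta).
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    if f is None:
        f = w / 2.0  # assumed focal length in pixels

    # Coordinates of every output (fisheye) pixel relative to the centre.
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r_fish = np.sqrt(dx**2 + dy**2)

    # Equidistant model: the radius encodes the incidence angle directly.
    theta = r_fish / f
    with np.errstate(divide="ignore", invalid="ignore"):
        r_pin = f * np.tan(theta)               # matching pinhole radius
        scale = np.where(r_fish > 0, r_pin / r_fish, 1.0)

    # Source coordinates; rays at or beyond 90 degrees are invalid.
    src_x = cx + dx * scale
    src_y = cy + dy * scale
    valid = (theta < np.pi / 2) & \
            (src_x >= 0) & (src_x <= w - 1) & \
            (src_y >= 0) & (src_y <= h - 1)

    # Nearest-neighbour sampling keeps the sketch dependency-free and is
    # also label-preserving when warping segmentation masks.
    out = np.zeros_like(img)
    sx = np.clip(np.round(src_x).astype(int), 0, w - 1)
    sy = np.clip(np.round(src_y).astype(int), 0, h - 1)
    out[valid] = img[sy[valid], sx[valid]]
    return out
```

Pixels far from the centre pull from source locations outside the original frame, so the output shows the characteristic circular valid region of a fisheye lens while the centre is nearly undistorted.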
