Abstract

Pedestrian detection in fish-eye images is a long-standing problem in advanced driver assistance systems (ADAS). Conventional methods train pedestrian detectors directly on fish-eye images, but collecting and manually labeling enough fish-eye images is difficult. This work therefore proposes a new strategy for training fish-eye pedestrian detectors using images from ordinary pedestrian datasets. Concretely, a Fish-eye Spatial Transformer Network (FSTN) is designed to generate fish-eye pedestrian features by simulating the distortion directly on the feature maps. The entire network is then trained adversarially: FSTN learns to generate examples that are difficult for the pedestrian detector to classify, which makes the detector more robust to this deformation. FSTN can be easily embedded into state-of-the-art detectors, and the resulting detector, with FSTN embedded, can be trained end to end. Experiments on the ETH and KITTI pedestrian datasets show a modest accuracy improvement in fish-eye pedestrian detection with the adversarial network compared with conventional methods.
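To make the core idea concrete, the snippet below is a minimal, illustrative sketch of simulating fish-eye distortion on a 2-D feature map, in the spirit of what the abstract says FSTN does. It uses a simple fixed barrel-distortion model with nearest-neighbor sampling; the function name `fisheye_warp` and the distortion coefficient `k` are assumptions for illustration, not parameters from the paper (FSTN itself would learn the warp and be differentiable).

```python
import math

def fisheye_warp(feat, k=0.5):
    """Warp a 2-D feature map with a simple radial (barrel) distortion.

    Illustrative stand-in for the kind of distortion FSTN simulates on
    feature maps. `k` is a hypothetical distortion coefficient; positions
    that sample outside the source map are filled with 0.0.
    Assumes feat has at least 2 rows and 2 columns.
    """
    h, w = len(feat), len(feat[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Coordinates normalised to [-1, 1] relative to the map centre.
            ny, nx = (y - cy) / cy, (x - cx) / cx
            r = math.hypot(nx, ny)
            # Barrel-distortion model: source radius grows as r * (1 + k * r^2),
            # so content near the border is pushed outward (fish-eye effect).
            scale = 1.0 + k * r * r
            sy = int(round(cy + ny * scale * cy))
            sx = int(round(cx + nx * scale * cx))
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = feat[sy][sx]
    return out
```

In the adversarial setup the abstract describes, a learned module playing this role would pick warps that maximise the detector's classification loss, while the detector is trained to classify the warped features correctly.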
