Abstract

This paper develops a vision-based driver assistance system for scene awareness using video frames obtained from a dashboard camera. A saliency image map is devised from features pertinent to the driving scene. This saliency map mimics the contour- and motion-sensitive perception of human vision by extracting spatial, spectral, and temporal information from the input frames and applying entropy-driven image-context-feature data fusion. The fusion output comprises high-level descriptors for still segment boundaries and non-stationary object appearance. Following the segmentation and foreground object detection stage, an adaptive maximum likelihood classifier selects road surface regions. The proposed scene-driven vision system improves the driver's situational awareness by enabling adaptive road surface classification. Experimental results demonstrate that context-aware low-level to high-level information fusion based on a human vision model produces superior segmentation, tracking, and classification results, leading to a high-level abstraction of the driving scene.
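The entropy-driven fusion step described above can be illustrated with a minimal sketch. The idea, under our assumptions, is to weight each feature map (e.g. a spatial edge map and a temporal frame-difference map) by its Shannon entropy, so that more informative channels contribute more to the fused saliency map. The function names and the choice of 32 histogram bins are illustrative, not taken from the paper:

```python
import numpy as np

def shannon_entropy(channel, bins=32):
    """Shannon entropy (bits) of a feature map with values in [0, 1]."""
    hist, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_fusion(feature_maps):
    """Fuse same-shaped feature maps into one saliency map.

    Each map's contribution is scaled by its entropy, so channels
    carrying more information dominate the fused result.
    """
    weights = np.array([shannon_entropy(fm) for fm in feature_maps])
    weights = weights / weights.sum()
    fused = sum(w * fm for w, fm in zip(weights, feature_maps))
    return fused / fused.max()  # normalize to [0, 1]

# Illustrative usage: fuse a (hypothetical) spatial gradient map with a
# temporal difference map computed from two consecutive frames.
rng = np.random.default_rng(0)
frame_prev = rng.random((24, 32))
frame_curr = rng.random((24, 32))
spatial_map = np.abs(np.gradient(frame_curr)[0])      # stand-in for an edge map
temporal_map = np.abs(frame_curr - frame_prev)        # motion cue
saliency = entropy_weighted_fusion([spatial_map, temporal_map])
```

The entropy weighting is one plausible reading of "entropy-driven data fusion"; the paper's actual fusion rule may differ in how the weights are computed or applied.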
