Abstract
Purpose
This study addresses the challenge of delineating traffic scenarios in video footage captured by a camera embedded in an autonomous vehicle.

Design/methodology/approach
The methodology elucidates the traffic context by leveraging data from the object recognition subsystem embedded in vehicular road infrastructure. A knowledge base containing production rules and a logical inference mechanism was developed; together, these components enable real-time procedures for describing traffic situations.

Findings
The production rule system focuses on semantically modeling entities categorized as traffic lights and road signs. The effectiveness of the methodology was tested experimentally on diverse image datasets representing various meteorological conditions. A thorough analysis of the results was conducted, which opens avenues for future research.

Originality/value
The originality lies in the potential integration of the developed methodology into an autonomous vehicle’s control system, working alongside other procedures that analyze the current situation. Applications extend to driver assistance systems, harmonized with augmented reality technology, to enhance human decision-making.
Published in: International Journal of Intelligent Unmanned Systems