Abstract

Purpose
This study addresses the challenge of delineating traffic scenarios in video footage captured by a camera embedded in an autonomous vehicle.

Design/methodology/approach
The methodology systematically elucidates the traffic context by leveraging data from the object recognition subsystem embedded in the vehicular road infrastructure. A knowledge base containing production rules and a logical inference mechanism was developed; together, these components enable real-time procedures for describing traffic situations.

Findings
The production rule system focuses on semantically modelling entities categorized as traffic lights and road signs. The effectiveness of the methodology was tested experimentally on diverse image datasets representing various meteorological conditions, and a thorough analysis of the results was conducted, opening avenues for future research.

Originality/value
The originality lies in the potential integration of the developed methodology into an autonomous vehicle's control system, working alongside other procedures that analyze the current situation. Applications extend to driver assistance systems harmonized with augmented reality technology, enhancing human decision-making processes.
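
To make the production-rule idea concrete, the sketch below shows one possible way IF-THEN rules over recognized traffic lights and road signs could be chained into a textual description of the current situation. It is a minimal illustration only: the Detection and Rule structures, the rule texts, and the describe_situation function are assumptions for exposition, not the implementation reported in the paper.

```python
# Hypothetical sketch of a production-rule pass over recognized traffic entities.
# Assumes the object-recognition subsystem emits labelled detections per frame.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str   # e.g. "traffic_light" or "road_sign" (illustrative labels)
    state: str   # e.g. "red", "green", "stop_sign"

@dataclass
class Rule:
    condition: Callable[[Detection], bool]  # IF-part of the production rule
    conclusion: str                         # THEN-part: fragment of the scene description

RULES: List[Rule] = [
    Rule(lambda d: d.label == "traffic_light" and d.state == "red",
         "Vehicle must stop: red traffic light ahead."),
    Rule(lambda d: d.label == "road_sign" and d.state == "stop_sign",
         "Vehicle must yield: stop sign detected."),
]

def describe_situation(detections: List[Detection]) -> List[str]:
    """Single forward-chaining pass: fire every rule whose condition holds."""
    return [r.conclusion for d in detections for r in RULES if r.condition(d)]

if __name__ == "__main__":
    frame = [Detection("traffic_light", "red"), Detection("road_sign", "stop_sign")]
    for sentence in describe_situation(frame):
        print(sentence)
```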
