Abstract
In the United States, the Model Inventory of Roadway Elements (MIRE) provides a comprehensive list of the data needed to support states' data-driven safety programs. Intersection traffic control is one of the MIRE Fundamental Data Elements (FDE), which state Departments of Transportation (DOTs) are required to finish collecting by September 30, 2026. It is an essential roadway data element that has been used widely in traffic safety studies. This study proposes a scalable, automated deep learning framework for detecting and classifying stop and yield intersection controls using panoramic street view images. The Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture was used to detect and classify stop and yield signs in the images. A transfer learning process was deployed using the Inception-ResNet-v2 generic feature extractor to accelerate training and improve model performance with less data collection effort. The effectiveness and scalability of the proposed framework were tested on a sample of road intersections in the state of Michigan. The model achieved recall values of 97.7% and 98.2% for detecting and classifying stop and yield signs, respectively. Evaluation of the model's performance at the county level suggests that it can be scaled statewide without a substantial increase in demand for computational resources. As demonstrated in this study, state DOTs can leverage advances in deep learning techniques and the availability of imagery data to expedite the collection of MIRE data.
Published in: Transportation Research Record: Journal of the Transportation Research Board