Estimating road traffic noise is essential for assessing the quality of the acoustic environment and mitigating this non-negligible pollutant in urban areas. However, existing estimation models are often limited to specific traffic conditions, and the parameters they require may not be readily collectible at city scale. This paper proposes a data-driven approach for estimating road-level traffic noise from street view imagery. Specifically, we use portable vehicle-mounted hardware for in-situ noise acquisition and employ a deep convolutional neural network (ResNet) to learn high-level visual features from street view images that are closely associated with road traffic noise. The ResNet captures meaningful patterns from the input data, and its output probability vectors are then fed into a random forest regression algorithm to quantitatively estimate the noise level, in decibels, for each road segment. The resulting DCNN-RF model achieves an MAE of 2.01 dB and an RMSE of 2.71 dB. Additionally, we employ gradient-weighted Class Activation Mapping (Grad-CAM) to visually interpret the deep learning model and identify the streetscape elements that contribute most to its estimates. The proposed framework enables low-cost, fine-scale road traffic noise estimation and sheds light on how auditory information can be inferred from street imagery, which may benefit practices in geography and urban planning.
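The sketch below illustrates the two-stage DCNN-RF idea described above: a pretrained ResNet converts each street view image into a probability vector, and a random forest regressor maps aggregated segment-level vectors to noise levels in decibels. The backbone choice (ResNet-50 with ImageNet weights), the preprocessing, the mean-pooling aggregation per road segment, and all file and variable names are assumptions for illustration only, not the paper's exact configuration.

```python
# Hedged sketch of a DCNN-RF pipeline: ResNet probability vectors -> random forest
# regression of road-segment noise (dB). Backbone weights, preprocessing, and the
# per-segment aggregation are assumptions, not the authors' reported setup.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Pretrained ResNet used as a fixed probability extractor (assumed ImageNet weights).
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
resnet.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_probabilities(image_path: str) -> np.ndarray:
    """Return the softmax probability vector for one street view image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = resnet(preprocess(img).unsqueeze(0))
    return torch.softmax(logits, dim=1).squeeze(0).numpy()

def segment_features(image_paths: list[str]) -> np.ndarray:
    """Aggregate image-level probability vectors into one road-segment feature.
    Mean pooling is an assumption; the abstract does not specify the aggregation."""
    return np.mean([image_probabilities(p) for p in image_paths], axis=0)

# Hypothetical inputs: street view images per road segment and in-situ noise (dB).
segments = {"seg_001": ["sv_001a.jpg", "sv_001b.jpg"], "seg_002": ["sv_002a.jpg"]}
noise_db = {"seg_001": 68.4, "seg_002": 61.2}

X = np.stack([segment_features(paths) for paths in segments.values()])
y = np.array([noise_db[seg] for seg in segments])

# Random forest regression from probability vectors to noise level in decibels.
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X, y)

pred = rf.predict(X)
print("MAE :", mean_absolute_error(y, pred))
print("RMSE:", np.sqrt(mean_squared_error(y, pred)))
```

In practice the regressor would be evaluated on held-out road segments rather than the training data; the in-sample metrics printed here only demonstrate the interface.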