Abstract

This study proposes a data fusion and deep learning (DL) framework that learns high-level traffic features from network-level images to predict large-scale, multi-route speed and volume of connected vehicles (CVs). We present a scalable, parallel method for processing statewide CV trajectory data on graphics processing units (GPUs) that yields real-time, micro-scale insights in time and space (two-dimensional (2D) arrays), using the NVIDIA RAPIDS framework and a Dask parallel cluster, which provided a 50× speed-up in the extract, transform, load (ETL) stage. A UNet model is then applied to extract features and predict multi-route speed and volume channels over a multi-step prediction horizon. The accuracy and robustness of the proposed model are evaluated across different road types, times of day, and image snippets, and the model is compared against two benchmarks: Convolutional Long Short-Term Memory (ConvLSTM) and a historical average (HA). The results show that the proposed model outperforms the benchmarks, with an average improvement of 15% over ConvLSTM and 65% over the HA. Comparing the image snippets from each prediction model with the actual images shows that UNet reproduced the image textures more faithfully than the benchmark models. UNet’s dominance in image prediction was also evident in multi-step forecasting, where errors grew only minimally over longer prediction horizons.
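The GPU-parallel ETL step could be sketched along the following lines. This is not the authors’ code: the column names (route_id, milepost, timestamp, speed, vehicle_id) and the segment/time bin sizes are illustrative assumptions, but the pattern of binning trajectory points into a 2D time-space grid with dask_cudf is the technique the abstract describes.

```python
# Minimal sketch of GPU ETL for CV trajectories with RAPIDS + Dask.
# Assumed schema: route_id, milepost, timestamp, speed, vehicle_id.
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import Client

def build_time_space_grid(path, seg_len_mi=0.1, bin_secs=60):
    # One Dask worker per local GPU; RAPIDS keeps dataframes on-device.
    cluster = LocalCUDACluster()
    client = Client(cluster)

    ddf = dask_cudf.read_parquet(path)  # hypothetical trajectory dump

    # Discretize space (milepost -> segment index) and time (timestamp -> bin).
    ddf["seg"] = (ddf["milepost"] / seg_len_mi).astype("int32")
    ddf["tbin"] = ddf["timestamp"].astype("int64") // (bin_secs * 10**9)

    # Mean speed and CV count per (route, segment, time-bin) cell: these
    # become the pixels of the speed and volume "image" channels.
    grid = (
        ddf.groupby(["route_id", "seg", "tbin"])
           .agg({"speed": "mean", "vehicle_id": "count"})
           .rename(columns={"speed": "mean_speed", "vehicle_id": "cv_count"})
    )
    return grid.compute()  # gather to a single cuDF frame
```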
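The abstract names UNet as the prediction architecture but gives no implementation details; a minimal PyTorch sketch is shown below, with the channel counts (e.g., six past frames × two channels in and out) and network depth as assumptions.

```python
# Minimal UNet sketch for multi-step, multi-channel image prediction.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=12, out_ch=12):  # assumed: 6 frames x 2 channels
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)  # predicted speed/volume frames

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections concatenate encoder features with upsampled maps.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```

A batch shaped (N, in_ch, H, W), with H and W divisible by 4 for the two pooling stages, yields predictions of the same spatial size, so each output channel is one future speed or volume frame of the time-space grid.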
