Abstract

Accurate traffic speed prediction is a fundamental requirement of an Intelligent Transportation System (ITS). The proposed hybrid model, Stacked Bidirectional LSTM with Attention-based GRU (SBAG), is used for large-scale traffic speed prediction. A bidirectional LSTM (BDLSTM) and an attention-based GRU are exploited to capture bidirectional temporal dependencies and spatial features. This is the first time in traffic speed prediction that a bidirectional LSTM and an attention-based GRU have been used as building blocks of the network architecture to measure the backward dependencies of a network. We also examined the behaviour of the attention layer in our proposed model. We compared the proposed model with state-of-the-art models, e.g. Fully Convolutional Network, Gated Recurrent Unit, Long Short-Term Memory, and Bidirectional Long Short-Term Memory, and achieved superior performance in large-scale traffic speed prediction.
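The attention-based GRU component weights the GRU's hidden states by learned relevance before producing a prediction. The paper does not spell out the exact scoring function here, so the sketch below uses a common additive (tanh) attention over a sequence of hidden states; the parameter shapes and names (`w`, `v`) are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(hidden_states, w, v):
    """Score each time step's hidden state, normalize the scores with
    softmax, and return the attention-weighted context vector.
    hidden_states: (T, d) GRU outputs; w: (d, d); v: (d,)."""
    scores = np.tanh(hidden_states @ w) @ v   # (T,) unnormalized scores
    weights = softmax(scores)                 # attention distribution over time
    context = weights @ hidden_states         # (d,) weighted sum of states
    return context, weights

rng = np.random.default_rng(0)
T, d = 12, 8                                  # 12 time steps, hidden size 8
H = rng.standard_normal((T, d))               # stand-in for GRU hidden states
context, weights = additive_attention(H, rng.standard_normal((d, d)),
                                      rng.standard_normal(d))
print(context.shape)                          # → (8,)
```

The context vector summarizes the whole sequence with more weight on the time steps the score function deems informative, which is what allows inspecting the attention layer's behaviour as the paper does.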

Highlights

  • The performance of Intelligent Transportation System (ITS) applications principally depends on the quality of traffic data

  • We proposed a hybrid deep learning model, known as the stacked bidirectional Long Short-Term Memory (LSTM) with attention Gated Recurrent Unit (GRU) (SBAG) neural network, for large-scale traffic speed prediction

  • H⃗ and H⃖ are the forward- and backward-layer outputs, calculated iteratively from the input sequence in positive order from time T − n to T − 1 and in reverse order, respectively (Eqs. 3–8); ŶT is the output vector generated by the bidirectional LSTM, each element of which is calculated from Eq. 9
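The forward/backward computation described above can be sketched in NumPy: one LSTM pass reads the inputs in positive time order, a second pass reads them in reverse, and the two hidden-state sequences are concatenated to form the BDLSTM output. The packed 4-block parameter layout and the random initialization are assumptions for illustration only; Eqs. 3–9 of the paper define the authors' exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(X, W, U, b, reverse=False):
    """Run a single-layer LSTM over sequence X (T, n) and return the
    hidden states (T, d). W: (4d, n), U: (4d, d), b: (4d,); the four
    row blocks hold the forget, input, output, and candidate params."""
    T, n = X.shape
    d = U.shape[1]
    h, c = np.zeros(d), np.zeros(d)
    order = range(T - 1, -1, -1) if reverse else range(T)
    H = np.zeros((T, d))
    for t in order:
        z = W @ X[t] + U @ h + b
        f, i, o = (sigmoid(z[k * d:(k + 1) * d]) for k in range(3))
        g = np.tanh(z[3 * d:])        # candidate cell state
        c = f * c + i * g             # new cell state
        h = o * np.tanh(c)            # new hidden state
        H[t] = h
    return H

rng = np.random.default_rng(1)
T, n, d = 10, 4, 6
X = rng.standard_normal((T, n))
params = lambda: (rng.standard_normal((4 * d, n)) * 0.1,
                  rng.standard_normal((4 * d, d)) * 0.1,
                  np.zeros(4 * d))
Hf = lstm_pass(X, *params())                 # forward layer, t = 0 .. T-1
Hb = lstm_pass(X, *params(), reverse=True)   # backward layer, t = T-1 .. 0
Y = np.concatenate([Hf, Hb], axis=1)         # (T, 2d) BDLSTM output
print(Y.shape)                               # → (10, 12)
```

At every time step the concatenated output sees context from both the past (forward pass) and the future (backward pass), which is precisely the backward dependency the highlights refer to.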

Summary

INTRODUCTION

(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 11, No. 1, 2020, www.ijacsa.thesai.org

The performance of Intelligent Transportation System (ITS) applications principally depends on the quality of traffic data. LSTMs have the ability to deal with long-term dependencies, and in recent years they have been gaining popularity in traffic forecasting as a representative deep learning method for handling sequence data. As per the literature review, few studies have utilized backward dependency. To cover this gap, the bidirectional LSTM (BDLSTM) architecture is adopted as a network structure component because it can handle both forward and backward dependencies. We proposed a hybrid deep learning model, known as the stacked bidirectional LSTM with attention GRU (SBAG) neural network, for large-scale traffic speed prediction; it considers backward dependencies via the bidirectional LSTM to improve feature learning. Wf, Wi, Wo, WC, Uf, Ui, Uo, UC and bf, bi, bo, bC are the weight matrices and bias vectors that need to be learned during training; σg is the gate activation function, and the hyperbolic tangent function is tanh.
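The parameter names listed above correspond to the standard LSTM gate formulation. For reference, the textbook equations these symbols conventionally denote (presumably the paper's Eqs. 3–8, applied per direction in the BDLSTM) are:

```latex
\begin{aligned}
f_t &= \sigma_g\!\left(W_f x_t + U_f h_{t-1} + b_f\right) && \text{(forget gate)}\\
i_t &= \sigma_g\!\left(W_i x_t + U_i h_{t-1} + b_i\right) && \text{(input gate)}\\
o_t &= \sigma_g\!\left(W_o x_t + U_o h_{t-1} + b_o\right) && \text{(output gate)}\\
\tilde{C}_t &= \tanh\!\left(W_C x_t + U_C h_{t-1} + b_C\right) && \text{(candidate cell state)}\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(cell state)}\\
h_t &= o_t \odot \tanh\!\left(C_t\right) && \text{(hidden state)}
\end{aligned}
```

Here x_t is the input at time t, h_{t-1} the previous hidden state, and ⊙ the element-wise product; σg is typically the logistic sigmoid.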

Input Data
Attention Mechanism
Dataset Description
Model Optimization
Evaluation Criteria
Comparison with State-of-the-Art Models
CONCLUSION
