Abstract

Real-world traffic prediction is challenging: practical applications demand accuracy, efficiency, and generalizability. Most existing studies use two-step imputation-then-prediction procedures or recurrent networks that predict from imputed data, which accumulates imputation errors. Training the same model independently for each missing rate and missing-data pattern also incurs high computational cost, and such strategies cannot be integrated with existing methods, limiting generalizability. We propose a multi-task pretraining and fine-tuning (MTPF) approach to address these issues. Specifically, we pretrain an encoder and decoders under varying missing-data patterns, missing rates, and adversarial noise, so that the model learns the correlations between different scenarios and produces robust hidden representations. After pretraining, the encoder and decoder are fine-tuned for a specific missing-data scenario. Pretraining combines multiple parallel tasks, including prediction, data imputation, contrastive learning, and adaptive graph learning, so that the learned hidden representations support both prediction and data imputation during fine-tuning. MTPF is compared with nine prediction baselines and five data imputation baselines on two real-world datasets. The experimental results show that MTPF outperforms state-of-the-art baseline models, trains faster, and serves as a model-agnostic, off-the-shelf plug-in that improves the performance of baseline models.
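As an illustration of the pretraining scheme summarized above, the following is a minimal PyTorch sketch of one multi-task pretraining step. It is an assumption-based reconstruction, not the authors' implementation: the module names, the GRU encoder, the loss weights, and the use of plain Gaussian noise as a stand-in for the adversarial perturbation are all hypothetical, and the contrastive and adaptive graph learning tasks are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Hypothetical GRU encoder producing per-step hidden representations."""
    def __init__(self, n_features, d_hidden):
        super().__init__()
        self.rnn = nn.GRU(n_features, d_hidden, batch_first=True)

    def forward(self, x):                      # x: (batch, time, n_features)
        h, _ = self.rnn(x)                     # h: (batch, time, d_hidden)
        return h

class Head(nn.Module):
    """Hypothetical linear decoder head (used for both prediction and imputation)."""
    def __init__(self, d_hidden, n_features):
        super().__init__()
        self.proj = nn.Linear(d_hidden, n_features)

    def forward(self, h):
        return self.proj(h)

def pretrain_step(encoder, pred_head, impute_head, x, y, optimizer,
                  noise_std=0.01, w_impute=1.0):
    """One multi-task pretraining step: sample a missing rate, mask the
    input, add noise (standing in for the adversarial perturbation), and
    jointly minimize the prediction and imputation losses."""
    missing_rate = torch.rand(1).item()        # vary the missing rate per step
    mask = (torch.rand_like(x) > missing_rate).float()  # 1 = observed entry
    x_corrupt = x * mask + noise_std * torch.randn_like(x)

    h = encoder(x_corrupt)
    loss_pred = F.mse_loss(pred_head(h[:, -1]), y)          # next-step forecast
    loss_impute = F.mse_loss(impute_head(h) * (1 - mask),   # reconstruct only
                             x * (1 - mask))                # the masked entries
    loss = loss_pred + w_impute * loss_impute

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Under this sketch, fine-tuning would then reuse the pretrained encoder together with the relevant head for one fixed missing-data scenario, as the abstract describes.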
