Abstract

Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a. informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and for improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts in this direction. GRU-D is based on the Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments on time series classification tasks with real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
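The two missing-pattern representations named in the abstract, masking and time interval, can be sketched as follows. This is a minimal illustrative implementation assuming the standard GRU-D definitions (masking m[t, d] = 1 when a value is observed, and delta[t, d] = elapsed time since variable d was last observed); the function and variable names are ours, not from the paper's code.

```python
import numpy as np

def masking_and_intervals(x, timestamps):
    """Compute the masking matrix m and time-interval matrix delta
    for a (T, D) time series x, with NaN marking missing entries.

    m[t, d]     = 1 if x[t, d] is observed, else 0
    delta[t, d] = time elapsed since the last observation of
                  variable d (0 at the first time step)
    """
    T, D = x.shape
    m = (~np.isnan(x)).astype(float)
    delta = np.zeros((T, D))
    for t in range(1, T):
        gap = timestamps[t] - timestamps[t - 1]
        # if variable d was observed at t-1, the interval resets to
        # the gap; otherwise it accumulates the previous interval
        delta[t] = np.where(m[t - 1] == 1, gap, gap + delta[t - 1])
    return m, delta

# toy example: 4 time steps, 2 variables, NaN = missing
x = np.array([[1.0, np.nan],
              [np.nan, 2.0],
              [3.0, np.nan],
              [np.nan, np.nan]])
ts = np.array([0.0, 1.0, 2.0, 4.0])
m, delta = masking_and_intervals(x, ts)
```

On this toy input, delta keeps growing for a variable across consecutive missing steps (e.g. variable 2 reaches an interval of 3 at the last step), which is exactly the signal GRU-D uses to decay its hidden state and input imputation.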

Highlights

  • Non-RNN Baselines: We evaluate logistic regression (LR), support vector machines (SVM), and random forest (RF), which are widely used in health care applications

  • We demonstrate the performance of our proposed models on one synthetic and two real-world health-care datasets and compare them to several strong machine learning and deep learning approaches in classification tasks

  • Off-the-shelf RNN architectures with imputation can only achieve comparable performance to Random Forests and SVMs, and they do not demonstrate the full advantage of representation learning
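The non-RNN baselines listed above (LR, SVM, RF) can be set up with a standard scikit-learn pipeline. The sketch below is a hedged illustration on synthetic data: the features, labels, and hyperparameters are placeholders, not the paper's experimental setup, which uses flattened or summarized clinical time series.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic stand-in for flattened time-series features
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# the three non-RNN baselines named in the highlights
baselines = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in baselines.items():
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

AUC is the metric commonly reported for these clinical classification tasks, which is why it is used here rather than accuracy.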

Methods

Notations. We denote a multivariate time series with D variables of length T as X = (x_1, x_2, ..., x_T)^T ∈ R^(T×D), where for each t ∈ {1, 2, ..., T}, x_t ∈ R^D represents the t-th observations (a.k.a. measurements) of all D variables, and x_t^d denotes the d-th variable of x_t.

In another work [22], the authors achieve their best performance on diagnosis prediction by feeding masking with zero-filled missing values into the recurrent neural network. Their model is equivalent to the GRU-Simple model without the time interval (δ) input, given that the input features are normalized to have mean value 0 before being fed into the RNN model.

RNN Baselines (RNN): We take the RNN baselines described before (GRU-Mean, GRU-Forward, GRU-Simple), and LSTM-Mean (an LSTM model with mean imputation of the missing measurements), as RNN baselines. As mentioned before, these models are widely used in existing work [22,23,24] applying RNNs to health care time series data with missing values or irregular time stamps. To further evaluate the proposed models, we provide more detailed comparisons and evaluations on multilayer RNN models and with different model sizes.
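The GRU-Simple input described above, measurements with missing values zero-filled, concatenated with the masking vector and (optionally) the time-interval vector, can be assembled as in this sketch. It is an illustrative reconstruction, not the authors' code; dropping δ and mean-normalizing the inputs recovers the variant from [22] discussed above.

```python
import numpy as np

def gru_simple_input(x, m, delta, use_delta=True):
    """Build the per-step input for a GRU-Simple-style baseline:
    zero-fill missing measurements, then concatenate the masking
    matrix and, optionally, the time-interval matrix.

    x, m, delta: arrays of shape (T, D); NaN in x marks missing.
    Returns an array of shape (T, 3D), or (T, 2D) without delta.
    """
    x_filled = np.nan_to_num(x, nan=0.0)   # zero-fill missing values
    parts = [x_filled, m] + ([delta] if use_delta else [])
    return np.concatenate(parts, axis=-1)

# toy usage: 2 time steps, 2 variables
x = np.array([[1.0, np.nan],
              [np.nan, 2.0]])
m = (~np.isnan(x)).astype(float)
delta = np.array([[0.0, 0.0],
                  [1.0, 1.0]])
inp = gru_simple_input(x, m, delta)                       # (2, 6)
inp_no_delta = gru_simple_input(x, m, delta, use_delta=False)  # (2, 4)
```

The concatenated input is then fed to an off-the-shelf GRU; only GRU-D goes further and uses δ inside the cell via trainable decay.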
