Incomplete time series pose a significant challenge for downstream analysis. In the time series domain, Large Language Models are already used for forecasting, classification, and, in rare cases, imputation. This study systematically examines time series imputation with Large Language Models. Within a defined experimental setup, their performance is compared against current state-of-the-art time series imputation methods, and parameter-efficient fine-tuning methods are applied to adapt the Large Language Models to the imputation task. The results indicate that Large Language Models are suitable for time series imputation, with performance depending on the number of parameters and the type of pre-training. Smaller specialized models, such as BERT, compete with models like Llama2 and outperform them on selected datasets. Furthermore, the attention and feedforward network components of Large Language Models prove particularly well-suited for adaptation to imputation, and parameter-efficient fine-tuning methods further improve performance.
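For illustration only, the sketch below shows what parameter-efficient adaptation of a model's attention and feedforward components could look like, assuming LoRA (one common parameter-efficient fine-tuning method) via the Hugging Face `peft` library. The model name, target module suffixes, and hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

# Load a small pre-trained encoder; bert-base-uncased stands in here for
# the "smaller specialized model" compared in the study (assumption).
model = AutoModel.from_pretrained("bert-base-uncased")

# Inject low-rank adapters into the attention projections and the
# feedforward layers -- the components the study found particularly
# well-suited for adaptation to imputation. The module-name suffixes
# below are specific to BERT and chosen for illustration.
lora_config = LoraConfig(
    r=8,              # low-rank dimension (assumed hyperparameter)
    lora_alpha=16,    # adapter scaling factor (assumed hyperparameter)
    lora_dropout=0.1,
    target_modules=[
        "query", "key", "value",  # self-attention projections
        "intermediate.dense",     # feedforward up-projection
        "output.dense",           # attention/feedforward output projections
    ],
)

model = get_peft_model(model, lora_config)
# Only the adapter weights are trainable; the base model stays frozen.
model.print_trainable_parameters()
```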