Abstract
How can we efficiently determine meta-parameter values for deep learning-based time-series forecasting given a time-series dataset? This paper introduces Xtune, an efficient and novel meta-parameter tuning method for deep learning-based time-series forecasting that leverages explainable AI (XAI) techniques. In particular, this study focuses on optimizing the window size for time-series forecasting. Xtune determines the optimal meta-parameter value for deep learning-based forecasting methods and can also be applied to tune the window size of anomaly detection methods that build on deep learning-based time-series forecasting. Extensive experiments on real-world datasets and forecasting methods demonstrate that Xtune efficiently identifies the optimal meta-parameter value and consistently outperforms existing methods in execution speed.
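To make the window-size meta-parameter concrete, the following minimal Python sketch shows how a univariate series is typically split into fixed-length input windows and one-step-ahead targets before training a forecasting model. This is an illustration only, not part of Xtune; the function name make_windows and its parameters are assumptions introduced here.

```python
import numpy as np

def make_windows(series, window_size, horizon=1):
    """Build (input, target) pairs from a 1-D series using a sliding window.

    window_size is the meta-parameter tuned by window-size tuning methods:
    each input covers window_size past observations, and the target is the
    value horizon steps ahead.
    """
    X, y = [], []
    for start in range(len(series) - window_size - horizon + 1):
        X.append(series[start:start + window_size])
        y.append(series[start + window_size + horizon - 1])
    return np.asarray(X), np.asarray(y)

# Example: a toy sine series split into windows of length 24.
series = np.sin(np.linspace(0, 20 * np.pi, 1000))
X, y = make_windows(series, window_size=24)
print(X.shape, y.shape)  # (976, 24) (976,)
```

Choosing window_size too small hides long-range patterns, while choosing it too large inflates model input size and training cost, which is why it is treated as a meta-parameter worth tuning.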
Highlights
How can we efficiently and effectively determine the optimal meta-parameter value for a time-series forecasting method that utilizes deep learning models like Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), without the need to run the method with a large number of candidate meta-parameter values? This study focuses on meta-parameter tuning for deep learning-based time-series forecasting.
We propose an effective meta-parameter tuning method, named Xtune, which leverages explainable AI (XAI) techniques to determine the meta-parameter value for deep learning-based time-series forecasting.
Long Short-Term Memory (LSTM) networks [2] are recurrent neural networks (RNNs) that excel at capturing long-term dependencies in sequential data.
Summary
How can we efficiently and effectively determine the optimal meta-parameter value for a time-series forecasting method that utilizes deep learning models like LSTM and GRU, without the need to run the method with a large number of candidate meta-parameter values? This study focuses on meta-parameter tuning for deep learning-based time-series forecasting. We present Xtune, an efficient and novel meta-parameter tuning method for time-series forecasting using deep learning. Long Short-Term Memory (LSTM) networks [2] are recurrent neural networks (RNNs) that excel at capturing long-term dependencies in sequential data, and they have been widely used for time-series forecasting. LSTMs incorporate memory cells and gates that allow them to selectively remember or forget information over time. This capability makes them effective at capturing complex temporal patterns and handling sequences of varying lengths. Gated Recurrent Units (GRUs) [3] are recurrent neural networks similar to LSTM networks, but with a simplified architecture that uses fewer gates, making them computationally more efficient than LSTMs. GRUs can effectively capture temporal dependencies and have been successfully applied to time-series forecasting tasks.
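For illustration, the sketch below shows how an LSTM- or GRU-based one-step-ahead forecaster consumes inputs whose length equals the window size, i.e., the meta-parameter being tuned. This is a minimal PyTorch example under assumed settings; the class name RNNForecaster and the hyper-parameters are hypothetical and do not describe the paper's implementation.

```python
import torch
import torch.nn as nn

class RNNForecaster(nn.Module):
    """One-step-ahead forecaster; rnn_type selects LSTM or GRU cells."""

    def __init__(self, rnn_type="lstm", input_size=1, hidden_size=32):
        super().__init__()
        rnn_cls = nn.LSTM if rnn_type == "lstm" else nn.GRU
        self.rnn = rnn_cls(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x has shape (batch, window_size, input_size).
        out, _ = self.rnn(x)
        # Use the hidden state at the last time step to predict the next value.
        return self.head(out[:, -1, :]).squeeze(-1)

# Toy usage: a batch of 8 windows of length 24 (the tuned meta-parameter).
x = torch.randn(8, 24, 1)
model = RNNForecaster(rnn_type="gru")
print(model(x).shape)  # torch.Size([8])
```

Swapping rnn_type between "lstm" and "gru" changes only the recurrent cell, which reflects the architectural difference summarized above: GRUs use fewer gates and are cheaper to train, while both consume the same windowed inputs.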