Blood glucose (BG) prediction has advanced to the state of the art thanks to deep learning models, which have been demonstrated to enhance type 1 diabetes (T1D) therapy. Nevertheless, most current models are limited to single-horizon prediction and have several practical drawbacks, including poor interpretability. In this study, we develop a novel approach to optimizing the forecasting model during training using the Optuna hyperparameter optimization framework, cross-validation, and recently published datasets. We adopt the temporal fusion transformer (TFT), a deep learning framework for multi-horizon BG prediction. TFT employs a self-attention mechanism to capture long-term temporal dependencies, and our pipeline couples it with automatic hyperparameter tuning and cross-validation on both univariate and multivariate input models. We evaluate the approach on three clinical T1D datasets, ShanghaiT1DM (16 subjects), D1NAMO (9 subjects), and OhioT1DM (6 subjects), and report the average root mean square error (RMSE) across them. The TFT model achieved lower errors than the baseline models, neural hierarchical interpolation for time series (N-HiTS) and long short-term memory (LSTM), with RMSEs of 10.08 ± 0.31 mg/dL and 12.34 ± 0.62 mg/dL for the univariate input model and 9.18 ± 1.21 mg/dL and 14.33 ± 0.52 mg/dL for the multivariate input model at prediction horizons of 30 and 60 minutes, respectively. These results indicate that the TFT model is well suited to multi-horizon BG level forecasting and could potentially be deployed on edge devices to support clinical efforts to manage BG levels for T1D patients in real-time applications.
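
To make the tuning-with-cross-validation procedure concrete, the sketch below shows how a TFT's hyperparameters could be optimized with Optuna under rolling-origin cross-validation. The `fit_and_score_tft` helper, the search ranges, the 5-minute sampling assumption, and the synthetic CGM trace are illustrative placeholders, not the study's actual implementation.

```python
# Minimal sketch: Optuna hyperparameter search with time-ordered
# cross-validation for a TFT-style forecaster. The TFT training itself is
# abstracted behind a hypothetical fit_and_score_tft() helper, since the
# abstract does not specify which TFT implementation was used.
import numpy as np
import optuna
from sklearn.model_selection import TimeSeriesSplit


def fit_and_score_tft(train, valid, horizon, hidden_size, num_heads,
                      dropout, learning_rate):
    """Hypothetical helper: train a TFT on `train`, forecast `horizon`
    steps, and return the RMSE on `valid`. In practice this would wrap a
    real TFT implementation (e.g. darts or pytorch-forecasting)."""
    preds = np.full(len(valid), train.mean())  # placeholder forecast
    return float(np.sqrt(np.mean((valid - preds) ** 2)))


def objective(trial, series, horizon=6):  # 6 steps x 5 min = 30-min horizon
    # Search space for the hyperparameters being auto-tuned.
    hidden_size = trial.suggest_int("hidden_size", 16, 128)
    num_heads = trial.suggest_categorical("num_heads", [1, 2, 4])
    dropout = trial.suggest_float("dropout", 0.0, 0.3)
    learning_rate = trial.suggest_float("lr", 1e-4, 1e-2, log=True)

    # Rolling-origin cross-validation: each fold trains on the past and
    # validates on the block that follows it, preserving time order.
    cv = TimeSeriesSplit(n_splits=3)
    scores = []
    for train_idx, valid_idx in cv.split(series):
        rmse = fit_and_score_tft(series[train_idx], series[valid_idx],
                                 horizon, hidden_size, num_heads,
                                 dropout, learning_rate)
        scores.append(rmse)
    return float(np.mean(scores))  # Optuna minimizes the mean CV RMSE


if __name__ == "__main__":
    # Synthetic stand-in for a CGM trace (mg/dL), for illustration only.
    cgm = np.random.default_rng(0).normal(120, 25, size=2000)
    study = optuna.create_study(direction="minimize")
    study.optimize(lambda t: objective(t, cgm), n_trials=20)
    print("Best hyperparameters:", study.best_params)
```

In the study's setting, such a search would be run per dataset (ShanghaiT1DM, D1NAMO, OhioT1DM) and per prediction horizon (30 and 60 minutes), for both the univariate and multivariate input configurations.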