The application of machine learning and deep learning techniques to time series forecasting has gained significant attention in recent years. Numerous efforts have been devoted to automating forecasting with state-of-the-art neural networks, and the Long Short-Term Memory (LSTM) recurrent neural network has emerged as a central building block in much of this research. Although LSTM was introduced in 1997 for sequence modeling, subsequent refinements have primarily targeted language learning tasks. These refinements introduced several computational mechanisms within the LSTM cell, including the forget gate, input gate, and output gate. In this study, we investigate the impact of each computational component in isolation to analyze its effect on time series forecasting tasks. Our experiments use the Jena weather dataset and the Appliance Energy Usage time series for evaluation. The experimental results reveal that variations of the LSTM model outperform the most popular LSTM cell format in terms of error rate and training time. Specifically, the variations identified in this paper demonstrate superior generalization capabilities and yield reduced forecasting errors.
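As background for the gate mechanisms mentioned above, the following is a minimal NumPy sketch of one forward step of a standard LSTM cell with forget, input, and output gates. This is an illustrative sketch only; the variable names, shapes, and weight layout are our own assumptions, not the implementation evaluated in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W, b):
    """One forward step of a standard LSTM cell.

    x: input vector (n_in,); h_prev, c_prev: previous hidden and cell
    states (n_hid,); W: weights of shape (4*n_hid, n_in + n_hid);
    b: bias (4*n_hid,). The four row blocks of W correspond to the
    input gate, forget gate, output gate, and candidate update.
    """
    n_hid = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0 * n_hid:1 * n_hid])  # input gate: how much new info to write
    f = sigmoid(z[1 * n_hid:2 * n_hid])  # forget gate: how much old state to keep
    o = sigmoid(z[2 * n_hid:3 * n_hid])  # output gate: how much state to expose
    g = np.tanh(z[3 * n_hid:4 * n_hid])  # candidate cell-state update
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c

# Toy rollout over a short random sequence
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for _ in range(5):
    h, c = lstm_cell_step(rng.standard_normal(n_in), h, c, W, b)
```

Studying each gate in isolation, as this paper does, amounts to ablating one of the `i`, `f`, or `o` terms above and observing the effect on forecasting error.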