Artificial Intelligence for Optimizing Solar Power Systems with Integrated Storage: A Critical Review of Techniques, Challenges, and Emerging Trends
The global transition toward sustainable energy has significantly accelerated the deployment of solar power systems. Yet the inherent variability of solar energy continues to present considerable challenges in ensuring its stable and efficient integration into modern power grids. As the demand for clean and dependable energy sources intensifies, the integration of artificial intelligence (AI) with solar systems, particularly those coupled with energy storage, has emerged as a promising and increasingly vital solution. This review explores the practical applications of machine learning (ML), deep learning (DL), fuzzy logic, and emerging generative AI models, focusing on their roles in areas such as solar irradiance forecasting, energy management, fault detection, and overall operational optimisation. Alongside these advancements, the review also addresses persistent challenges, including data limitations, difficulties in model generalization, and the integration of AI in real-time control scenarios. We included peer-reviewed journal articles published between 2015 and 2025 that apply AI methods to photovoltaic (PV) systems with energy storage systems (ESS) and report empirical evaluation. We excluded studies lacking evaluation against baselines or those focusing solely on PV or ESS in isolation. We searched IEEE Xplore, Scopus, Web of Science, and Google Scholar up to 1 July 2025. Two reviewers independently screened titles/abstracts and full texts; disagreements were resolved via discussion. Risk of bias was assessed with a custom tool evaluating validation method, dataset partitioning, baseline comparison, overfitting risk, and reporting clarity. Results were synthesized narratively by grouping AI techniques (forecasting, MPPT/control, dispatch, data augmentation). We screened 412 records and included 67 studies published between 2018 and 2025, following a documented PRISMA process. The review revealed that AI-driven techniques significantly enhance performance in solar + battery energy storage system (BESS) applications.
In solar irradiance and PV output forecasting, deep learning models, in particular long short-term memory (LSTM) and hybrid convolutional neural network–LSTM (CNN–LSTM) architectures, repeatedly outperform conventional statistical methods, achieving significantly lower root mean square error (RMSE) and mean absolute error (MAE) and higher R-squared values. Smarter energy dispatch and market-based storage decisions are made possible by reinforcement learning and deep reinforcement learning frameworks, which increase economic returns and lower curtailment risks. Furthermore, hybrid metaheuristic–AI optimisation improves control tuning and system sizing with increased efficiency and convergence. In conclusion, AI enables transformative gains in forecasting, dispatch, and optimisation for solar-BESSs. Future efforts should focus on explainable, robust AI models, standardized benchmark datasets, and real-world pilot deployments to ensure scalability, reliability, and stakeholder trust.
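The metrics cited throughout the review are straightforward to compute. As a minimal illustration (the irradiance values below are toy numbers, not data from any included study), RMSE, MAE, and R-squared for a forecast can be sketched as:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Return RMSE, MAE, and R-squared for a forecast."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = float(np.sqrt(np.mean(err ** 2)))          # penalizes large errors
    mae = float(np.mean(np.abs(err)))                 # average absolute error
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                        # fraction of variance explained
    return rmse, mae, r2

# Toy example: hourly irradiance (W/m^2) vs. a model's predictions.
obs = [0, 120, 340, 520, 610, 580, 410, 190]
pred = [0, 110, 355, 500, 625, 570, 420, 200]
rmse, mae, r2 = forecast_metrics(obs, pred)
```

Because RMSE squares the residuals, it is always at least as large as MAE on the same forecast, which is why papers often report both.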
- Research Article
16
- 10.1016/j.jclepro.2024.140585
- Jan 1, 2024
- Journal of Cleaner Production
An interpretable horizontal federated deep learning approach to improve short-term solar irradiance forecasting
- Conference Article
2
- 10.1109/icoei56765.2023.10125954
- Apr 11, 2023
Powering remote mobile communication towers is essential for establishing reliable communication. If the backup diesel generators in remote mobile towers are replaced by a renewable energy source such as a solar PV system, environmental pollution is reduced. Solar irradiance (SI) forecasting is crucial for the effective planning and operation of solar energy systems. Deep learning (DL) algorithms have become a promising method for predicting SI in recent years owing to their ability to handle intricate non-linear relationships and sizable data sets. This study reviews the most prominent deep learning models (DLM) for predicting SI, including Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, Deep Belief Networks (DBNs), Recurrent Neural Networks (RNNs), and hybrid models. The study discusses each model's benefits and drawbacks as well as how well it has been applied to different SI forecasting problems. The review also highlights the importance of carefully selecting the model architecture, the hyperparameters, and the quantity and quality of training data for the success of DLM in solar irradiance forecasting. The models are compared based on their accuracy and efficiency in predicting solar irradiance over different time horizons, from a few seconds to several hours. The study provides insights into the suitability of different DLM for solar irradiance forecasting and outlines future directions in the area of DL for solar irradiance prediction.
- Research Article
- 10.30574/wjaets.2025.15.3.1168
- Jun 30, 2025
- World Journal of Advanced Engineering Technology and Sciences
The integration of Artificial Intelligence (AI) into solar energy systems has revolutionized the way we predict, optimize, and manage photovoltaic (PV) infrastructure. This review comprehensively explores the advancements in AI techniques including machine learning, deep learning, hybrid models, and metaheuristics used for solar irradiance forecasting, fault detection, output prediction, and system optimization over the past decade. Experimental comparisons reveal that deep learning models like LSTM and CNN consistently outperform traditional algorithms, while hybrid approaches such as CNN-LSTM yield the most accurate results across volatile environments. The review also proposes a modular theoretical framework to unify AI integration in solar systems and outlines the challenges of interpretability, data availability, and real-time deployment. The study concludes with a forward-looking perspective, emphasizing the potential of edge computing, federated learning, and interpretable AI to address existing limitations and support a more sustainable and intelligent energy future.
- Research Article
- 10.3389/fpubh.2025.1547450
- Apr 2, 2025
- Frontiers in public health
Hospital-acquired infections (HAIs) represent a persistent challenge in healthcare, contributing to substantial morbidity, mortality, and economic burden. Artificial intelligence (AI) offers promising potential for improving HAI prevention through advanced predictive capabilities. The aim was to evaluate the effectiveness, usability, and challenges of AI models in preventing, detecting, and managing HAIs. This integrative review synthesized findings from 42 studies, guided by the SPIDER framework for inclusion criteria. We assessed the quality of included studies by applying the TRIPOD checklist to individual predictive studies and the AMSTAR 2 tool for reviews. AI models demonstrated high predictive accuracy for the detection, surveillance, and prevention of multiple HAIs, with models for surgical site infections and urinary tract infections frequently achieving area-under-the-curve (AUC) scores exceeding 0.80, indicating strong reliability. Comparative data suggest that while both machine learning and deep learning approaches perform well, some deep learning models may offer slight advantages in complex data environments. Advanced algorithms, including neural networks, decision trees, and random forests, significantly improved detection rates when integrated with EHRs, enabling real-time surveillance and timely interventions. In resource-constrained settings, non-real-time AI models utilizing historical EHR data showed considerable scalability, facilitating broader implementation in infection surveillance and control. AI-supported surveillance systems outperformed traditional methods in accurately identifying infection rates and enhancing compliance with hand hygiene protocols. Furthermore, Explainable AI (XAI) frameworks and interpretability tools such as Shapley additive explanations (SHAP) values increased clinician trust and facilitated actionable insights.
AI also played a pivotal role in antimicrobial stewardship by predicting the emergence of multidrug-resistant organisms and guiding optimal antibiotic usage, thereby reducing reliance on second-line treatments. However, challenges including the need for comprehensive clinician training, high integration costs, and ensuring compatibility with existing workflows were identified as barriers to widespread adoption. The integration of AI in HAI prevention and management represents a potentially transformative shift in enhancing predictive capabilities and supporting effective infection control measures. Successful implementation necessitates standardized validation protocols, transparent data reporting, and the development of user-friendly interfaces to ensure seamless adoption by healthcare professionals. Variability in data sources and model validations across studies underscores the necessity for multicenter collaborations and external validations to ensure consistent performance across diverse healthcare environments. Innovations in non-real-time AI frameworks offer viable solutions for scaling AI applications in low- and middle-income countries (LMICs), addressing the higher prevalence of HAIs in these regions. Artificial Intelligence stands as a transformative tool in the fight against hospital-acquired infections, offering advanced solutions for prevention, surveillance, and management. To fully realize its potential, the healthcare sector must prioritize rigorous validation standards, comprehensive data quality reporting, and the incorporation of interpretability tools to build clinician confidence. By adopting scalable AI models and fostering interdisciplinary collaborations, healthcare systems can overcome existing barriers, integrating AI seamlessly into infection control policies and ultimately enhancing patient safety and care quality. 
Further research is needed to evaluate cost-effectiveness, real-world applications, and strategies (e.g., clinician training and the integration of explainable AI) to improve trust and broaden clinical adoption.
- Research Article
- 10.57233/ijsgs.v9i1.407
- Mar 31, 2023
- International Journal of Science for Global Sustainability
In order to ensure energy security and environmental sustainability, a transition to renewable energy sources is required. Solar is one of the most viable and sustainable renewable energy sources. However, developing solar energy systems requires solar radiation data, which is scarce for most locations, including Northwest Nigeria. To address this challenge, solar radiation is usually estimated from available meteorological parameters. Several previous studies have used various methods, including geospatial techniques and machine learning, to predict monthly and yearly solar radiation, while few studies have focused on the estimation of daily solar radiation. Yet providing daily solar radiation data is necessary for the development of solar energy systems. Deep learning has been shown to be effective in solar radiation forecasting. To evaluate the performance of the deep learning method for daily solar radiation prediction, a Long Short-Term Memory (LSTM) based deep learning model was developed in this study. The forecasting model was created using daily solar radiation data collected over a 21-year period by the Nigerian Meteorological Agency in three major towns in Northwest Nigeria: Kano, Kaduna, and Katsina. The model was evaluated using two statistical indicators: the coefficient of determination (R2) and the Root Mean Square Error (RMSE). Results showed R2 values of 0.79 and 0.78 for the training and testing datasets respectively, and RMSE values of 0.46 and 0.47 for the training and testing datasets respectively. Overall, the LSTM deep learning model proved effective in forecasting daily solar radiation.
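The study's exact preprocessing is not reproduced here, but an LSTM forecaster of this kind typically consumes a daily series as fixed-length sliding windows paired with next-day targets. A minimal sketch of that supervised reshaping (the `lookback` length and the toy values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D daily series into (samples, lookback, 1) inputs and
    next-day targets — the supervised form an LSTM layer expects."""
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])   # past `lookback` days as input
        y.append(series[i + lookback])     # the following day as target
    X = np.array(X)[..., np.newaxis]       # add a trailing features axis
    y = np.array(y)
    return X, y

# Toy example: 10 days of normalized solar radiation, 3-day lookback.
daily = [0.2, 0.4, 0.5, 0.3, 0.6, 0.7, 0.5, 0.4, 0.6, 0.8]
X, y = make_windows(daily, lookback=3)
```

Each of the 7 resulting samples has shape `(3, 1)`: three past days, one feature per day, matching the three-dimensional `(batch, timesteps, features)` input that recurrent layers conventionally take.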
- Research Article
- 10.1002/aaai.12093
- Jun 1, 2023
- AI Magazine
AI climate tipping‐point discovery (ACTD)
- Research Article
- 10.33564/ijeast.2025.v09i11.016
- Mar 1, 2025
- International Journal of Engineering Applied Sciences and Technology
The integration of Artificial Intelligence (AI) into financial forecasting has transformed traditional stock market prediction methods. This research paper explores the effectiveness of AI techniques in forecasting stock trends within the Indian stock market over the last few years. We examine AI methodologies, including machine learning (ML), deep learning (DL), and hybrid models applied to the Bombay Stock Exchange (BSE) and the National Stock Exchange (NSE). By comparing historical data with AI-predicted trends, this study evaluates prediction accuracy and market relevance (Patel et al., 2015). Furthermore, the research outlines existing study gaps and proposes a future scope of integrating AI with behavioral finance and real-time analytics (Chen et al., 2022). The prediction of stock market trends remains a significant challenge due to the stochastic and non-linear nature of financial time series data (Zhang & Zhou, 2020). With the proliferation of Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL) techniques, there has been a paradigm shift in the modeling and forecasting of stock price movements (Fischer & Krauss, 2018). This paper presents a comprehensive study on the effectiveness of AI in predicting stock market trends within the Indian financial ecosystem, focusing on a comparative analysis of AI models implemented over the last five years (2018–2023) on major indices and stocks listed on the NSE and BSE. This research evaluates the performance of multiple AI algorithms—including Support Vector Machines (SVM), Random Forests (RF), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU), and hybrid ensemble models—in forecasting short- and medium-term price trends using historical stock data (Krauss et al., 2017; Chen & He, 2021). Standard evaluation metrics such as Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and classification accuracy are employed to assess model efficacy. 
The results indicate that while traditional machine learning algorithms offer moderate predictive power, advanced deep learning and hybrid models significantly outperform them, particularly during periods of high market volatility, such as the COVID-19 pandemic and post-lockdown recovery phases (Weng et al., 2018; Jain & Jain, 2021). The study identifies key research gaps, including limited sectoral diversification in datasets, inadequate integration of sentiment and behavioral data, overfitting issues in complex models, and the lack of real-time prediction systems in the Indian context (Gupta & Pathak, 2022). Furthermore, current models often ignore non-quantitative factors such as investor sentiment, macroeconomic indicators, and global events, which can critically impact prediction accuracy (Nassirtoussi et al., 2014). These limitations suggest the need for a more holistic and interdisciplinary approach to AI-driven financial forecasting. The scope for future research includes the development of real-time, adaptive AI systems using high-frequency trading data, the incorporation of behavioral finance through social media and news analytics, and the exploration of quantum computing-based AI models (Liu et al., 2023). From a practical standpoint, the findings of this study offer valuable insights for institutional investors, financial analysts, regulatory bodies, and developers of AI-powered trading platforms. The study concludes that while AI is not a definitive solution for market prediction, it provides a powerful augmentative tool that, when designed with robustness, transparency, and adaptability, can significantly enhance decision-making in India’s fast-evolving financial markets.
- Research Article
- 10.3390/forecast7040058
- Oct 17, 2025
- Forecasting
As the world shifts toward cleaner energy sources, accurate forecasting of solar radiation is critical for optimizing the performance and integration of solar energy systems. In this study, we explore eight machine learning models, namely Random Forest Regressor, Linear Regression, Artificial Neural Network, k-Nearest Neighbors, Support Vector Regression, Gradient Boosting Regressor, Gaussian Process Regression, and Deep Learning, for forecasting direct solar radiation across six climatically diverse regions in the Kingdom of Saudi Arabia. The models were evaluated using eight statistical metrics along with time-series and absolute-error analyses. A key contribution of this work is the introduction of Trigonometric Cyclical Encoding, which significantly improved temporal representation learning. Comparative SHAP-based feature-importance analysis revealed that Trigonometric Cyclical Encoding enhanced the explanatory power of temporal features by 49.26% for monthly cycles and 53.30% for daily cycles. The findings show that Deep Learning achieved the lowest root mean square error as well as the highest coefficient of determination, while the Artificial Neural Network demonstrated consistently high accuracy across the sites. Support Vector Regression performed strongly overall but was less reliable in some regions. Error and time-series analyses reveal that the Artificial Neural Network and Deep Learning maintained stable prediction accuracy throughout high solar radiation seasons, whereas Linear Regression, Random Forest Regressor, and k-Nearest Neighbors showed greater fluctuations. The proposed Trigonometric Cyclical Encoding technique further enhanced model performance by maintaining the overall fitness of the models, which ranged between 81.79% and 94.36% in all scenarios. This paper supports effective solar energy planning and integration in challenging climatic conditions.
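The paper's exact encoding formulas are not reproduced above, but trigonometric (sine/cosine) cyclical encoding of time features is a standard construction: a cyclic quantity is mapped to a point on the unit circle so that the ends of the cycle stay adjacent. A minimal sketch, with hour-of-day as an illustrative choice of period:

```python
import math

def cyclical_encode(value, period):
    """Map a cyclic quantity (hour, day-of-year, month) to a point on
    the unit circle so that period boundaries remain adjacent."""
    angle = 2.0 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# Hour 23 and hour 0 are 23 units apart as raw numbers, but neighbors
# on the clock; the encoding makes their feature distance small.
s23, c23 = cyclical_encode(23, 24)
s0, c0 = cyclical_encode(0, 24)
dist = math.hypot(s23 - s0, c23 - c0)   # chord length on the unit circle
```

A linear encoding would put these two hours at opposite ends of the feature range; the sine/cosine pair keeps them close, which is the property the SHAP analysis in the paper credits for the improved explanatory power of temporal features.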
- Research Article
150
- 10.3390/app8081286
- Aug 1, 2018
- Applied Sciences
Solar photovoltaic (PV) power forecasting has become an important issue for the power grid in terms of the effective integration of large-scale PV plants. As the main influencing factor of PV power generation, solar irradiance and its accurate forecasting are prerequisites for solar PV power forecasting. However, previous forecasting approaches using manual feature extraction (MFE), traditional modeling, and single deep learning (DL) models could not satisfy the performance requirements in some scenarios with complex fluctuations. Therefore, an improved DL model based on wavelet decomposition (WD), the Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM) is proposed for day-ahead solar irradiance forecasting. Given the high dependency of solar irradiance on weather status, the proposed model is established individually under four general weather types (i.e., sunny, cloudy, rainy, and heavy rainy). For each weather type, the raw solar irradiance sequence is decomposed into several subsequences via discrete wavelet transformation. Each subsequence is then fed into the CNN-based local feature extractor to automatically learn an abstract feature representation from the raw subsequence data. Since the extracted features of each subsequence are also time series data, they are individually passed to an LSTM to construct the subsequence forecasting model. Finally, the solar irradiance forecast under a given weather type is obtained via wavelet reconstruction of these forecasted subsequences. A case study further verifies the enhanced forecasting accuracy of the proposed method via a comparison with traditional and single DL models.
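The abstract does not specify the wavelet family or decomposition depth. As an illustration only, a one-level Haar transform — the simplest instance of the decompose-then-reconstruct pattern the WD-CNN-LSTM pipeline relies on — can be sketched as follows; the toy irradiance values are invented for the example:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: split a series
    into a smooth approximation part and a fluctuating detail part."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                                # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # pairwise averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # pairwise differences
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level (the wavelet reconstruction step)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

irr = [100.0, 140.0, 420.0, 460.0, 610.0, 590.0, 230.0, 180.0]
a, d = haar_dwt(irr)         # subsequences a forecaster would model separately
rec = haar_idwt(a, d)        # reconstruction recovers the original series
```

In the paper's pipeline, each subsequence (here `a` and `d`) is forecast by its own CNN–LSTM model, and the final irradiance forecast comes from reconstructing the forecasted subsequences rather than the originals.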
- Research Article
41
- 10.1016/j.fertnstert.2020.10.040
- Nov 1, 2020
- Fertility and Sterility
Predictive modeling in reproductive medicine: Where will the future of artificial intelligence research take us?
- Discussion
6
- 10.1016/j.ejmp.2021.05.008
- Mar 1, 2021
- Physica Medica
Focus issue: Artificial intelligence in medical physics.
- Conference Article
15
- 10.1109/td-asia.2009.5356904
- Oct 1, 2009
Recently, CO2 emissions from the residential sector have been increasing rapidly due to the proliferation of all-electric houses. It is therefore expected that renewable energy will be installed in all-electric houses. A solar heater system, as a kind of renewable energy, is assumed to be installed in a typical residence. However, solar radiation is not constant, and the thermal energy collected by a solar heater system is influenced by meteorological conditions. The solar heater system therefore requires an optimal control method based on solar radiation forecasting. This work evaluates the energy, cost, and CO2 reductions achieved when the solar heater system is used to supply hot water in the evening. Simulation results show the effectiveness of solar radiation forecasting by comparing control methods with and without it. Moreover, the effectiveness of the solar heater system is shown by comparing energy reduction effects: in this paper, an electric heater system, a solar heater system, and a heat pump system are compared. The results show that the energy reduction rate is about 80% when using three panels of the solar heater system. Furthermore, the payback time of the solar heater system is shorter than that of the heat pump system.
- Conference Article
4
- 10.1109/ceepe51765.2021.9475652
- Apr 23, 2021
For optimal functioning, grid-connected photovoltaic (GCPV) systems need day-ahead power forecasting, as this ensures enhanced overall management in areas such as reliability, scheduling, and efficiency in energy trading. Solar irradiation forecasts are especially important for obtaining photovoltaic (PV) power production predictions, given that PV output is a function of solar irradiation. Recently, the Long Short-Term Memory (LSTM) model has been increasingly applied to solar irradiance forecasting, but its performance is still relatively unexplored. The present paper explores how meteorological and geographical (i.e., exogenous) variables and past records of solar irradiance (i.e., endogenous variables) may be incorporated as input features in day-ahead solar irradiance forecasting models that use deep learning. In this study, the results for the LSTM model are compared to those for the Radial Basis Function neural network (RBFNN) for both multivariate time series forecasting (MTSF) and univariate time series forecasting (UTSF). The comparisons show that the UTSF_LSTM model outperforms the other models with respect to minimum forecasting errors. Our results have also been validated with data from a region whose climatic conditions differ from those originally tested. Overall, the outcome of these investigations clearly indicates the superiority of the proposed UTSF_LSTM method over the UTSF_RBFNN, MTSF_RBFNN, and MTSF_LSTM models with regard to the coefficient of determination (R2) and the Root Mean Square Error (RMSE).
- Research Article
- 10.1016/j.jmb.2025.169181
- Sep 1, 2025
- Journal of molecular biology
Artificial-intelligence-driven Innovations in Mechanistic Computational Modeling and Digital Twins for Biomedical Applications.
- Research Article
142
- 10.1016/j.renene.2021.02.161
- Mar 4, 2021
- Renewable Energy
Hybrid deep neural model for hourly solar irradiance forecasting