Dynamic Equivalence of Active Distribution Network: Multiscale and Multimodal Fusion Deep Learning Method with Automatic Parameter Tuning

  • Abstract
  • Literature Map
  • References
  • Similar Papers
Abstract

Dynamic equivalence of active distribution networks (ADNs) is emerging as one of the most important issues for backbone network security analysis due to the high penetration of distributed generators (DGs) and electric vehicles (EVs). The multiscale and multimodal fusion deep learning (MMFDL) method proposed in this paper contains two modalities. One is a CNN + attention module that simulates the Newton-Raphson power flow calculation (NRPFC) to extract the important features of a power system under disturbance, motivated by the similarities between NRPFC and convolutional network computation. The other is a long short-term memory (LSTM) + fully connected (FC) module for load modeling, based on the fact that LSTM + FC can represent a load's differential algebraic equations (DAEs). Moreover, to better capture the relationship between voltage and power, a multiscale fusion method aggregates load modeling models with different voltage input sizes; combined with CNN + attention, they merge into MMFDL to represent the dynamic behaviors of ADNs. Then, the Kepler optimization algorithm (KOA) is applied to automatically tune the adjustable parameters of MMFDL (called KOA-MMFDL), especially the numbers of LSTM and FC hidden layers, as these are important for load modeling and no prior human knowledge is available for setting them. The performance of the proposed method was evaluated on different electric power systems and various disturbance scenarios. The error analysis shows that the proposed method can accurately represent the dynamic response of ADNs. In addition, comparative experiments verified that the proposed method is more robust and generalizable than other advanced non-mechanism methods.
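The KOA tuning stage the abstract describes can be sketched as a search over the two integer hyperparameters it names. Below is a minimal, hypothetical Python sketch: an exhaustive grid search stands in for KOA's planetary-motion-inspired updates, and `validation_error` is a placeholder for the MMFDL validation loss on recorded disturbance responses; none of these names come from the paper.

```python
from itertools import product

# Hypothetical search space for the two hyperparameters the abstract
# says KOA tunes: the numbers of LSTM and FC hidden layers.
LSTM_LAYERS = range(1, 5)
FC_LAYERS = range(1, 4)

def validation_error(lstm_layers, fc_layers):
    # Stand-in objective; in the paper this would be the MMFDL model's
    # error on validation disturbance responses of the ADN.
    return abs(lstm_layers - 2) + abs(fc_layers - 2)

def tune():
    # Exhaustive search stands in here for KOA's metaheuristic
    # exploration; both simply minimize the validation objective.
    return min(product(LSTM_LAYERS, FC_LAYERS),
               key=lambda p: validation_error(*p))

best = tune()
print(best)  # -> (2, 2)
```

In practice each evaluation of the objective trains a candidate MMFDL, which is exactly why a sample-efficient metaheuristic such as KOA is preferred over exhaustive search.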

References (showing 10 of 28 papers)
  • Citations: 7
  • 10.1109/tpwrs.2023.3250648
Synthesis Load Model With Renewable Energy Sources for Transient Stability Studies
  • Jan 1, 2024
  • IEEE Transactions on Power Systems
  • Tiankai Lan + 3 more

  • Citations: 204
  • 10.1109/tcst.2014.2311852
Wide-Area Damping Controller for Power System Interarea Oscillations: A Networked Predictive Control Approach
  • Jan 1, 2015
  • IEEE Transactions on Control Systems Technology
  • Wei Yao + 4 more

  • Citations: 318
  • 10.1016/j.knosys.2023.110454
Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion
  • Mar 11, 2023
  • Knowledge-Based Systems
  • Mohamed Abdel-Basset + 4 more

  • Citations: 1
  • 10.1109/tpwrs.2023.3270139
Free-Form Dynamic Load Model Synthesis With Symbolic Regression Based on Sparse Dictionary Learning
  • Mar 1, 2024
  • IEEE Transactions on Power Systems
  • You Lin + 2 more

  • Citations: 183
  • 10.1109/tpwrs.2013.2283064
A Systematic Approach for Dynamic Security Assessment and the Corresponding Preventive Control Scheme Based on Decision Trees
  • Mar 1, 2014
  • IEEE Transactions on Power Systems
  • Chengxi Liu + 6 more

  • Citations: 52
  • 10.1109/59.331443
Power system dynamic load modeling using artificial neural networks
  • Jan 1, 1994
  • IEEE Transactions on Power Systems
  • Bih-Yuan Ku + 3 more

  • Citations: 42
  • 10.1109/tdc.2016.7520081
Implementation of the WECC Composite Load Model for utilities using the component-based modeling approach
  • May 1, 2016
  • Anish Gaikwad + 2 more

  • Citations: 74
  • 10.1109/tpwrs.2009.2036711
A Neural-Network-Based Method of Modeling Electric Arc Furnace Load for Power Engineering Study
  • Feb 1, 2010
  • IEEE Transactions on Power Systems
  • G.W Chang + 2 more

  • Citations: 61
  • 10.1016/b978-0-08-097747-8.00006-2
Chapter 6 - Orbital Maneuvers
  • Oct 18, 2013
  • Orbital Mechanics for Engineering Students
  • Howard D Curtis

  • Citations: 7377
  • 10.1109/72.279181
Learning long-term dependencies with gradient descent is difficult
  • Mar 1, 1994
  • IEEE Transactions on Neural Networks
  • Y Bengio + 2 more

Similar Papers
  • Conference Article
  • Citations: 23
  • 10.1109/mwscas.2019.8885035
Performance of Three Slim Variants of The Long Short-Term Memory (LSTM) Layer
  • Aug 1, 2019
  • Daniel Kent + 1 more

The Long Short-Term Memory (LSTM) layer is an important advancement in the field of neural networks and machine learning, allowing for effective training and impressive inference performance. LSTM-based neural networks have been successfully employed in various applications such as speech processing and language translation. The LSTM layer can be simplified by removing certain components, potentially speeding up training and runtime with limited change in performance. In particular, several recently introduced variants, called Slim LSTMs, have shown success in initial experiments supporting this view. In this paper, we perform computational analysis of the validation accuracy of a convolutional plus recurrent neural network architecture designed to analyze sentiment, comparing the standard LSTM layer with three Slim LSTM variants. We found that some realizations of the Slim LSTM layers can potentially perform as well as the standard LSTM layer for our considered architecture targeted at sentiment analysis.

  • Research Article
  • Citations: 2
  • 10.1016/j.artmed.2024.102922
ConvLSNet: A lightweight architecture based on ConvLSTM model for the classification of pulmonary conditions using multichannel lung sound recordings
  • Jun 22, 2024
  • Artificial Intelligence In Medicine
  • Faezeh Majzoobi + 3 more

  • Research Article
  • Citations: 34
  • 10.1016/j.cviu.2019.102840
Video captioning using boosted and parallel Long Short-Term Memory networks
  • Oct 11, 2019
  • Computer Vision and Image Understanding
  • Masoomeh Nabati + 1 more

  • Research Article
  • Citations: 68
  • 10.1109/tnnls.2018.2885219
A Novel Equivalent Model of Active Distribution Networks Based on LSTM.
  • Jul 29, 2019
  • IEEE Transactions on Neural Networks and Learning Systems
  • Chao Zheng + 6 more

Dynamic behaviors of distribution networks are of great importance for power system analysis. Nowadays, due to the integration of renewable energy generation, energy storage, and plug-in electric vehicles, distribution networks are turning from passive systems into active ones. Hence, the dynamic behaviors of active distribution networks (ADNs) are much more complex than those of traditional networks, and how to establish an accurate model of ADNs in modern power systems is drawing a great deal of research attention. In this paper, motivated by the similarities between power system differential algebraic equations and the forward calculation flows of recurrent neural networks (RNNs), a long short-term memory (LSTM) RNN-based equivalent model is proposed to accurately represent ADNs. First, the reasons for adopting the proposed LSTM RNN-based equivalent model are explained, and its advantages are analyzed from a mathematical point of view. Then, the accuracy and generalization performance of the proposed model are evaluated using the IEEE 39-Bus New England system integrated with ADNs in the study cases. The results reveal that the proposed LSTM RNN-based equivalent model has the generalization capability to capture the dynamic behaviors of ADNs with high accuracy.
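The analogy this abstract draws between a power system's differential algebraic equations and an RNN's forward recursion can be illustrated with a single LSTM step. The following is a hedged NumPy sketch (weights, sizes, and inputs are arbitrary assumptions, not from the paper): the cell update c_t = f*c_{t-1} + i*g has the same shape as an explicit-Euler discretization of a continuous state equation, which is why an LSTM can mimic load dynamics.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM step. The state update c_new = f*c + i*g has the same
    # shape as an explicit-Euler discretization of a continuous state
    # equation, which is the analogy drawn to load DAEs.
    n = h.size
    z = W @ x + U @ h + b                      # stacked pre-activations
    i = 1.0 / (1.0 + np.exp(-z[:n]))           # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2 * n]))      # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2 * n:3 * n]))  # output gate
    g = np.tanh(z[3 * n:])                     # candidate state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n, m = 3, 2                          # hidden size; input size (e.g. V, theta)
W = 0.1 * rng.standard_normal((4 * n, m))
U = 0.1 * rng.standard_normal((4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for _ in range(5):                   # roll the cell over a short sequence
    h, c = lstm_step(np.array([1.0, 0.0]), h, c, W, U, b)
print(h.shape)  # (3,)
```

Because the output gate multiplies a tanh, the hidden state stays bounded in (-1, 1) regardless of how long the sequence runs, which is one reason the recursion remains numerically stable.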

  • Research Article
  • Citations: 37
  • 10.1016/j.neucom.2021.04.005
A parallel multi-module deep reinforcement learning algorithm for stock trading
  • Apr 6, 2021
  • Neurocomputing
  • Cong Ma + 4 more

  • Research Article
  • 10.36706/sjia.v1i1.14
Multilabel Classification for News Article Using Long Short-Term Memory
  • Jul 9, 2020
  • Sriwijaya Journal of Informatics and Applications
  • Winda Kurnia Sari + 2 more

Multilabel text classification is the task of categorizing text into one or more categories. As with other machine learning tasks, multilabel classification performance is limited when labeled data are scarce, which makes it difficult to capture semantic relationships. This work requires a multilabel text classification technique that can assign four labels to news articles, and deep learning is the proposed method for solving this problem. Seven Long Short-Term Memory (LSTM) models are compared on a large-scale dataset, covering 1-layer, 2-layer, and 3-layer LSTM as well as Bidirectional LSTM variants, to show that LSTM can achieve good performance in multilabel text classification. The results show that the 2-layer LSTM model obtained a training accuracy of 96, with the highest testing accuracy of all models at 94.3. Model 3, with a 1-layer LSTM, obtained average precision, recall, and f1-score equal to its training accuracy of 94, indicating that it performs well in both training and testing. The comparison among the seven proposed LSTM models shows that model 3, with a 1-layer LSTM, is the best model.

  • Supplementary Content
  • Citations: 15
  • 10.1155/2022/1563707
Automated Detection of Rehabilitation Exercise by Stroke Patients Using 3-Layer CNN-LSTM Model.
  • Feb 4, 2022
  • Journal of Healthcare Engineering
  • Zia Ur Rahman + 5 more

According to statistics, stroke is the second or third leading cause of death and adult disability. Stroke can cause loss of motor function, paralysis of body parts, and severe back pain, for which a physiotherapist employs many therapies to restore the mobility needed for everyday life. This research article presents an automated approach to detect different therapy exercises performed by stroke patients during rehabilitation. The detection of rehabilitation exercises is a complex area of human activity recognition (HAR). Owing to the numerous achievements and increasing popularity of deep learning (DL) techniques, this research article proposes a DL model that combines a convolutional neural network (CNN) and long short-term memory (LSTM), named the 3-Layer CNN-LSTM model. The dataset is collected through an RGB (red, green, and blue) camera under the supervision of a physiotherapist and is resized in the preprocessing stage. The 3-Layer CNN-LSTM model takes the preprocessed data at the convolutional layer, which extracts useful features from the input data. The extracted features are then processed by adjusting weights through fully connected (FC) layers, which are followed by the LSTM layer. The LSTM layer further processes these data to learn their spatial and temporal dynamics. For comparison, we trained a CNN model on the same dataset and achieved 89.9% accuracy. The conducted experimental examination shows that the 3-Layer CNN-LSTM outperforms the CNN and KNN algorithms, achieving 91.3% accuracy.

  • Research Article
  • Citations: 1
  • 10.3390/app122010248
Active Noise Reduction with Filtered Least-Mean-Square Algorithm Improved by Long Short-Term Memory Models for Radiation Noise of Diesel Engine
  • Oct 12, 2022
  • Applied Sciences
  • Semin Kwon + 2 more

This study presents an active noise control (ANC) algorithm using long short-term memory (LSTM) layers as a type of recurrent neural network. The filtered least-mean-square (FxLMS) algorithm is a widely used ANC algorithm, where the noise in a target area is reduced through a control signal generated from an adaptive filter. Artificial intelligence can enhance the reduction performance of ANC for specific applications. An LSTM is an artificial neural network for recognizing patterns in arbitrarily long sequence data. In this study, an ANC controller consisting of LSTM layers based on deep neural networks was designed for predicting a reference noise signal, which was used to generate the control signal to minimize the noise residue. The structure of the LSTM neural networks and procedure for training the LSTM controller for the ANC were determined. Simulations were conducted to compare the convergence time and performances of the ANC with the LSTM controller and those with a conventional FxLMS algorithm. The noise source adopted sounds from a single-cylinder diesel engine, while reference noises selected were single harmonics, superposed harmonics, and impulsive signals generated from the diesel engine. The characteristics of each algorithm were examined through a Fourier transform analysis of the ANC results. The simulation results demonstrated that the proposed ANC method with LSTM layers showed outstanding noise reduction capabilities in narrowband, broadband, and impulsive noise environments, without high computational cost and complexity relative to the conventional FxLMS algorithm.

  • Research Article
  • Citations: 872
  • 10.1016/j.bspc.2018.08.035
Speech emotion recognition using deep 1D & 2D CNN LSTM networks
  • Sep 11, 2018
  • Biomedical Signal Processing and Control
  • Jianfeng Zhao + 2 more

  • Research Article
  • Citations: 18
  • 10.1190/geo2020-0749.1
Deep-learning missing well-log prediction via long short-term memory network with attention-period mechanism
  • Dec 23, 2022
  • GEOPHYSICS
  • Liuqing Yang + 5 more

Underground reservoir information can be obtained through well-log interpretation. However, some logs might be missing due to various reasons, such as instrument failure. A deep-learning-based method that combines a convolutional layer and a long short-term memory (LSTM) layer is proposed to estimate the missing logs without the expensive relogging. The convolutional layer is used to extract the depth-series features initially, which are then input into the LSTM layer. To improve the feature memory and extraction capabilities of the LSTM layer, we construct two LSTM-based components: the first component uses an attention mechanism to optimize the LSTM units by adaptively adjusting network weights, and the second component uses a period-skip mechanism, which enhances the sensitivity of aperiodic changes in the depth series by learning the information of the shallow sequence. In addition, we add an autoregressive component to enhance the linear feature extraction capability while learning the nonlinear relationship between different logs. A total of 13 wells from two different regions are used for experiments, including 11 training and two test wells. We use one well to calculate the uncertainties of four time-series networks, i.e., our proposed network and three benchmark models (recurrent neural network, gated recurrent unit, and LSTM), to demonstrate the stability and robustness of the proposed method. Furthermore, we evaluate the performance of our proposed method in several crossover experiments, e.g., different logging intervals, depths, and input logs. Compared to a state-of-the-art deep learning method and a classic LSTM network, the proposed network has higher reliability, which is reflected in the feature extraction of depth series with a larger span. The experimental results demonstrate that our proposed network can accurately generate sonic and other unknown logs.

  • Research Article
  • 10.2478/acss-2023-0013
Multichannel Approach for Sentiment Analysis Using Stack of Neural Network with Lexicon Based Padding and Attention Mechanism
  • Jun 1, 2023
  • Applied Computer Systems
  • Venkateswara Rao Kota + 1 more

Sentiment analysis (SA) has been an important focus of study in the fields of computational linguistics and data analysis for a decade. Recently, promising results have been achieved by applying DNN models to sentiment analysis tasks. Long short-term memory (LSTM) models, as well as derivatives such as the gated recurrent unit (GRU), are becoming increasingly popular in neural architectures used for sentiment analysis. Using these models in the feature extraction layer of a DNN results in a high-dimensional feature space, even though the models can handle sequences of arbitrary length. Another problem with these models is that they weight each feature equally. Natural language processing (NLP) makes use of word embeddings created with word2vec, and deep neural networks have become the method of choice for many NLP jobs. Traditional deep networks are not reliable at storing contextual information, so dealing with sequential data such as text and sound was difficult for such networks. This research proposes a multichannel word embedding method employing a stack of neural networks with lexicon-based padding and an attention mechanism (MCSNNLA) for SA. The approach combines a convolutional neural network (CNN), Bi-LSTM, and an attention mechanism. One embedding layer, two convolution layers with max-pooling, one LSTM layer, and two fully connected (FC) layers make up the proposed technique, which is tailored for sentence-level SA. To address the shortcomings of prior SA models for product reviews, the MCSNNLA model integrates a sentiment lexicon with deep learning technologies, combining the strengths of emotion lexicons with those of deep learning. To begin, the reviews are processed with the sentiment lexicon in order to enhance the sentiment features. The experimental findings show that the model has the potential to greatly improve text SA performance.

  • Research Article
  • Citations: 1
  • 10.61356/j.iswa.2024.2224
An Attention-Based Deep Learning Approach for Lithium-ion Battery Lifespan Prediction: Analysis and Experimental Validation
  • Apr 16, 2024
  • Information Sciences with Applications
  • Ahmed Darwish

The potential for lithium-ion batteries to become unstable can lead to operational malfunctions within the system and result in safety incidents. Therefore, accurately forecasting the remaining useful life (RUL) is beneficial in mitigating the likelihood of battery failure and prolonging its operational lifespan. Hence, precise estimation of RUL can help prevent numerous safety incidents and minimize resource wastage, presenting a significant and complex issue. This paper introduces a Deep Learning (DL) model that utilizes Long Short-Term Memory (LSTM) and attention mechanism to improve the accuracy of predicting the RUL of lithium-ion batteries. Initially, the battery capacity regeneration phenomenon is captured by applying four LSTM layers, followed by implementing an attention mechanism to align input and output sequences based on the content or semantics of the input sequence. Finally, the final prediction outcomes are generated via a Fully Connected (FC) layer. The efficacy of the proposed model is assessed through the utilization of the NASA dataset, and its performance is contrasted with various deep learning models to highlight its efficacy. Results from the experiments demonstrate that the suggested At-LSTM presents a robust option for forecasting the RUL of lithium-ion batteries, as it delivers superior results compared to all other models examined.
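The attention step these At-LSTM-style summaries describe (aligning the output with the input sequence) boils down to a softmax-weighted pooling of the LSTM's hidden states before the FC layer. Below is a minimal NumPy sketch with toy hidden states and a hypothetical scoring vector `w`; none of this is the paper's code.

```python
import numpy as np

def attention_pool(H, w):
    # Score each timestep's hidden state, softmax the scores over time,
    # and return the attention-weighted context vector fed to the FC layer.
    scores = H @ w                            # one score per timestep, (T,)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ H, weights               # context (d,), weights (T,)

T, d = 4, 3
H = np.arange(T * d, dtype=float).reshape(T, d) / 10.0  # toy hidden states
w = np.ones(d)                                          # hypothetical scorer
context, attn = attention_pool(H, w)
```

With these toy values the later timesteps receive larger scores, so `attn` increases monotonically over time and `context` is pulled toward the most recent hidden states.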

  • Research Article
  • Citations: 50
  • 10.1109/tip.2019.2913544
Sample Fusion Network: An End-to-End Data Augmentation Network for Skeleton-Based Human Action Recognition.
  • May 2, 2019
  • IEEE Transactions on Image Processing
  • Fanyang Meng + 4 more

Data augmentation is a widely used technique for enhancing the generalization ability of deep neural networks for skeleton-based human action recognition (HAR) tasks. Most existing data augmentation methods generate new samples by means of handcrafted transforms. However, these methods often cannot be trained and then are discarded during testing because of the lack of learnable parameters. To solve those problems, a novel type of data augmentation network called a sample fusion network (SFN) is proposed. Instead of using handcrafted transforms, an SFN generates new samples via a long short-term memory (LSTM) autoencoder (AE) network. Therefore, an SFN and HAR network can be cascaded together to form a combined network that can be trained in an end-to-end manner. Moreover, an adaptive weighting strategy is employed to improve the complementarity between a sample and the new sample generated from it by an SFN, thus allowing the SFN to more efficiently improve the performance of the HAR network during testing. The experimental results on various datasets verify that the proposed method outperforms state-of-the-art data augmentation methods. More importantly, the proposed SFN architecture is a general framework that can be integrated with various types of networks for HAR. For example, when a baseline HAR model with three LSTM layers and one fully connected (FC) layer was used, the classification accuracy was increased from 79.53% to 90.75% on the NTU RGB+D dataset using a cross-view protocol, thus outperforming most other methods.

  • Research Article
  • Citations: 2
  • 10.61356/j.saem.2024.1251
A Data-driven Deep Learning Approach for Remaining Useful Life of Rolling Bearings
  • Feb 13, 2024
  • Systems Assessment and Engineering Management
  • Ahmed Darwish

The bearing is a commonly used rotating element, and its condition significantly impacts the operation and maintenance of machinery. Therefore, accurately predicting the Remaining Useful Life (RUL) of bearings holds great importance. Deep learning has made significant progress in RUL prediction. This study presents a Deep Learning (DL) model incorporating a Convolution Neural Network (CNN), Long Short-Term Memory (LSTM), and attention mechanism to enhance RUL prediction accuracy for rolling bearings. Initially, time domain input data is processed by the CNN for feature extraction. Subsequently, two LSTM layers are utilized to capture intricate temporal relationships and create more abstract data representations, followed by the incorporation of an attention mechanism to align input and output sequences based on the content or semantics of the input sequence. Ultimately, the final predictions are made through a Fully Connected (FC) layer. The effectiveness of the proposed model is evaluated using the IEEE PHM 2012 Challenge dataset, and its performance is compared to various deep learning models to showcase its efficacy. Experimental results indicate that the suggested CNN-ALSTM model is a reliable choice for predicting the RUL of rolling bearings, outperforming all other models considered.

  • Conference Article
  • Citations: 1
  • 10.1109/iemcon.2018.8615054
Video Predictive Object Detector
  • Nov 1, 2018
  • Mohammed Hamada Gasmallah + 1 more

With the rise of video datasets and self-driving cars, many industries seek a way to perform quick object detection on video, as well as perform predictive tracking on these objects. We propose a predictive video object detector (POD net) integrating the You Only Look Once v2 (YOLOv2) framework with the convolutional 2-dimensional (2D) Long Short Term Memory (LSTM) model proposed by Shi et al. Our POD net performs object detection using YOLOv2 and object prediction using the LSTM model in an iterative manner with a view to improve object detection in video streams via object prediction. In this study we present two different approaches that we implemented to predict objects in subsequent video clips. The first approach, PODv1, applies a post-temporal pattern matching mechanism wherein the YOLOv2 detector is used to detect objects in multiple images and the LSTM layer is used to perform temporal feature mapping across the output tensors of the detectors. The second approach, PODv2, provides better results by applying the temporal feature mapping first across the images and then feeding the output into the YOLOv2 detector which is wrapped using a Time Distributed layer. We tested POD net on the Multi-Object Tracking (MOT) 2017 dataset and the network was able to perform predictive object detection and tracking, demonstrating that the LSTM layer is useful for a variety of video analysis problems.

