Deep Learning Techniques to Enhance Energy Efficiency of Home Appliances by Analyzing Air Quality Levels

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

Energy efficiency in home appliances is a critical area of research that addresses the growing demand for reducing energy consumption. The rapid growth in artificial intelligence has prioritized the development of advanced methods to improve sustainable energy consumption, particularly by optimizing the energy efficiency of home appliances. The research introduces a novel deep learning-based framework to enhance energy efficiency in home appliances by leveraging insights from Indoor Air Quality (IAQ) metrics. Unlike conventional energy management approaches, which face challenges such as limited datasets, computational inefficiencies, and a lack of generalizability, the research incorporates advanced preprocessing and augmentation techniques. Specifically, a hybrid Synthetic Minority Over-sampling Technique - Edited Nearest Neighbors (SMOTE-ENN) approach addresses class imbalance, while Z-score normalization ensures consistent feature scaling. Among the evaluated models, the Bidirectional Gated Recurrent Unit (GRU) and the Stacked Long Short-Term Memory (LSTM) stand out, achieving exceptional validation accuracies of 99.81% and 99.64%, respectively, demonstrating superior generalization. This framework uniquely integrates IAQ data to optimize energy usage dynamically, showcasing how environmental factors such as CO2, humidity, and temperature can inform sustainable energy practices. These findings underscore the transformative potential of deep learning in fostering eco-friendly innovations for smart home energy management. They also point to the broader potential of integrating artificial intelligence-driven approaches into energy policies and sustainability strategies, enabling more effective reductions in residential energy consumption and helping to combat climate change.
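The two preprocessing steps the abstract names, Z-score normalization and SMOTE-style oversampling, can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the `z_score` and `smote_like_oversample` helpers are hypothetical names, and the oversampler reproduces only SMOTE's interpolation step, while the paper's hybrid SMOTE-ENN additionally removes noisy samples with Edited Nearest Neighbors.

```python
import numpy as np

def z_score(X):
    # Z-score normalization: zero mean, unit variance per feature
    return (X - X.mean(axis=0)) / X.std(axis=0)

def smote_like_oversample(X_min, n_new, k=3, seed=None):
    # SMOTE-style step: synthesize a point on the segment between a
    # random minority sample and one of its k nearest minority neighbours
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)
```

Because synthetic points are convex combinations of existing minority samples, each feature of a generated point stays inside the range spanned by the minority class.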

Similar Papers
  • Research Article
  • 10.21271/zjpas.37.6.13
Enhancing IoT Anomaly Detection using Hybrid CNN-LSTM Model and Interpretable Feature Selection
  • Dec 31, 2025
  • Zanco Journal of Pure and Applied Sciences
  • Soran Ahmed Hasan + 1 more

Securing Internet of Things (IoT) networks is an ongoing challenge. As more resource-constrained devices connect to the internet, these systems have become more vulnerable to cyberattacks, and many attacks continually evolve and grow more sophisticated. This highlights the need for scalable, efficient anomaly detection deployable close to IoT devices to minimize latency, while maintaining high accuracy with low memory and computational demands. Many existing solutions are either heavy models unsuitable for edge devices or lack generalizability to recent datasets and current attack traffic patterns. Our research proposes a lightweight anomaly detection model that combines a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to recognize patterns across both spatial and temporal dimensions and to identify significant relationships among an interpretable selected set of features, using SHapley Additive exPlanations (SHAP) for feature selection and Synthetic Minority Oversampling Technique - Edited Nearest Neighbors (SMOTE-ENN) for balancing the class distribution in the datasets. The model’s performance was evaluated using accuracy, precision, recall, and F1 metrics. The model achieves a multiclass accuracy of 99.12% on the CICIoT2023 dataset and 99.08% on the TON_IoT dataset; with only the top 10 selected features, it achieves 99.0% and 98.85% on CICIoT2023 and TON_IoT, respectively. With just 43,406 trainable parameters and the top 10 features selected, the proposed framework offers a lightweight, explainable model that is effective for edge IoT devices with limited resources.

  • Research Article
  • 10.47890/jadct/2020/njabbour/10123453
A comparative meta-analysis of residential green building policies and their impact on overall energy consumption patterns
  • May 19, 2020
  • Journal of Architectural Design and Construction Technology
  • Naim Jabbour

Data shows residential energy consumption constituting a significant portion of the overall energy end use in the European Union (EU), ranging between 15% and 30%. Furthermore, the EU’s dependency on foreign fossil fuel-based energy imports has been steadily increasing since 1993, constituting approximately 60% of its primary energy. This paper provides an analytical review of diverse residential building/energy policies in targeted EU countries, to shed light on the impact of such policies and measures on energy use and efficiency trends. Accordingly, the adoption of robust residential green and energy-efficient building policies in the EU has increased in the past decade. Moreover, data from EU energy efficiency and consumption databases attributes 44% of total energy savings since 2000 to energy upgrades and improvements within the residential sector. Consequently, many EU countries and organizations are continuously evaluating residential building energy consumption patterns to increase the sector’s overall energy performance. To that end, energy efficiency gains in EU households were measured at 1% in 2000 compared to 27.8% in 2016, a 2600% increase. Accordingly, 36 policies have been implemented successfully since 1991 across the EU targeting improvements in residential energy efficiency and reductions in energy use. Moreover, the adoption of National Energy Efficiency Action Plans (NEEAP) across the EU has been a major driver of energy savings and energy efficiency. Most energy efficiency plans have followed a holistic multi-dimensional approach targeting the following areas: legislative actions, financial incentives, fiscal tax exemptions, and public education and awareness programs and campaigns. These measures and policy instruments have cumulatively generated significant energy savings and measurable improvements in energy performance across the EU since their inception.
As a result, EU residential energy consumption trends show a consistent decrease over the past decade. The purpose of this analysis is to explore, examine, and compare the various green building and energy-related policies in the EU, highlighting some of the more robust and progressive aspects of such policies. The paper will also analyze the multiple policies and guidelines across targeted European nations. Lastly, the study will assess the status of green residential building policies in Lebanon, drawing from the comprehensive European measures, in order to recommend a comprehensive set of guidelines to advance energy policies and building practices in the country. Keywords: Building Policies; Residential Energy Patterns; Residential Energy Consumption; Energy Savings

  • Research Article
  • Cited by 1
  • 10.24843/lkjiti.2025.v16.i02.p05
Evaluation of the performance of the Smote, Smote Enn, and Borderline Smote resampling methods based on the number of outlier data with Z Score
  • Aug 30, 2025
  • Lontar Komputer : Jurnal Ilmiah Teknologi Informasi
  • Arisgunadi Gunadi + 2 more

Handling class imbalance in datasets is a significant challenge in the classification process. Disruption occurs when the minority class plays a crucial role in decision-making. Oversampling is one of the most widely used solutions to this problem. This study compares the performance of three popular oversampling methods, namely SMOTE (Synthetic Minority Oversampling Technique), SMOTE-ENN (SMOTE with Edited Nearest Neighbor), and Borderline-SMOTE, based on the number of outliers produced. Outliers are measured using a Z-score-based statistical approach. The research was conducted by applying the three oversampling methods to several datasets. Evaluation is carried out by counting the number of outliers after resampling, as well as by evaluating their impact on the performance of the classification model using metrics such as accuracy, precision, recall, and F1-score. The results show no significant difference in the number of outliers produced by SMOTE, SMOTE-ENN, or Borderline-SMOTE. In the diabetes.csv dataset, the percentage of outliers in the initial condition and after resampling with SMOTE, SMOTE-ENN, and Borderline-SMOTE was 7.4%, 6.8%, 6.7%, and 63%, respectively. For the predict_honor.csv dataset, the figures are 7.1%, 7.3%, 7.6%, and 7%; for winequality.csv, 8%, 7.8%, 6.8%, and 5.8%; and for smoking.csv, 7.1%, 7.3%, 7.6%, and 7.0%. However, examining each feature in each dataset reveals more varied behaviour across the three algorithms with respect to the number of outliers produced, though no significant overall differences were found. The second finding concerns the performance of the decision tree classification model: the influence of feature correlation matters more than perfect class balance in the dataset.
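The Z-score outlier count that the study uses as its yardstick reduces to a few lines of NumPy. This is a hedged sketch with a hypothetical `outlier_fraction` helper; the |z| > 3 cutoff is a common convention, not a threshold the abstract states.

```python
import numpy as np

def outlier_fraction(x, threshold=3.0):
    # Share of values whose absolute z-score exceeds the threshold
    z = (x - x.mean()) / x.std()
    return float(np.mean(np.abs(z) > threshold))
```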

  • Research Article
  • 10.1088/1757-899x/960/4/042024
A comparative meta-analysis of residential green building policies and measures in the EU, and their impact on overall energy patterns
  • Dec 1, 2020
  • IOP Conference Series: Materials Science and Engineering
  • Naim Jabbour

Data shows residential energy consumption constituting a significant portion of the overall energy end use in the European Union (EU), ranging between 15% and 30%. Furthermore, the EU’s dependency on foreign fossil fuel-based energy imports has been steadily increasing since 1993, constituting approximately 60% of its primary energy. This paper provides an analytical review of diverse residential building/energy policies in targeted EU countries, to shed light on the impact of such policies and measures on energy use and efficiency trends. Accordingly, the adoption of robust residential green and energy-efficient building policies in the EU has increased in the past decade. Moreover, data from the EU energy efficiency and consumption databases attributes 44% of total energy savings since 2000 to energy upgrades and improvements within the residential sector. Consequently, many EU countries and organizations are continuously evaluating residential building energy consumption patterns to increase the sector’s overall energy performance. To that end, energy efficiency gains in EU households were measured at 1% in 2000 compared to 27.8% in 2016, a 2600% increase. Accordingly, 36 policies have been implemented successfully since 1991 across the EU targeting improvements in residential energy efficiency and reductions in energy use. Moreover, the adoption of National Energy Efficiency Action Plans (NEEAP) across the EU has been a major driver of energy savings and energy efficiency. Most energy efficiency plans have followed a holistic multi-dimensional approach targeting the following areas: legislative actions, financial incentives, fiscal tax exemptions, and public education and awareness programs and campaigns. These measures and policy instruments have cumulatively generated significant energy savings and measurable improvements in energy performance across the EU since their inception.
As a result, the EU residential energy consumption trends show a consistent decrease over the past decade. The purpose of this analysis is to explore, examine, and compare the various green building and energy-related policies in the EU, highlighting some of the more robust and progressive aspects of such policies. Lastly, the paper analyzes the multiple policies and guidelines across targeted EU nations.

  • Research Article
  • Cited by 6
  • 10.2478/jdis-2021-0011
A Rebalancing Framework for Classification of Imbalanced Medical Appointment No-show Data
  • Jan 27, 2021
  • Journal of Data and Information Science
  • Ulagapriya Krishnan + 1 more

Purpose: This paper aims to improve classification performance when the data is imbalanced by applying different sampling techniques available in Machine Learning. Design/methodology/approach: The medical appointment no-show dataset is imbalanced, and when classification algorithms are applied directly to the dataset, the result is biased towards the majority class, ignoring the minority class. To avoid this issue, multiple sampling techniques such as Random Over Sampling (ROS), Random Under Sampling (RUS), Synthetic Minority Oversampling Technique (SMOTE), ADAptive SYNthetic Sampling (ADASYN), Edited Nearest Neighbor (ENN), and Condensed Nearest Neighbor (CNN) are applied to balance the dataset. Performance is assessed with a Decision Tree classifier under each of the listed sampling techniques, and the best performer is identified. Findings: This study focuses on comparing the performance metrics of various widely used sampling methods. It reveals that, compared to other techniques, recall is highest when ENN is applied; CNN and ADASYN performed equally well on the imbalanced data. Research limitations: The testing was carried out with a limited dataset and needs to be repeated with a larger dataset. Practical implications: This framework will be useful whenever data is imbalanced in real-world scenarios, ultimately improving performance. Originality/value: This paper applies the rebalancing framework to a medical appointment no-show dataset to predict no-shows and remove the bias against the minority class.
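The ENN technique that this study finds gives the highest recall can be sketched in a few lines of NumPy: drop every sample whose label disagrees with the majority label of its k nearest neighbours. This is a minimal illustration of the editing rule with a hypothetical `enn_edit` helper, not the paper's code.

```python
import numpy as np

def enn_edit(X, y, k=3):
    # Edited Nearest Neighbor cleaning: drop every sample whose label
    # disagrees with the majority label of its k nearest neighbours
    keep = []
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]   # exclude the sample itself
        majority = np.bincount(y[neighbours]).argmax()
        if majority == y[i]:
            keep.append(i)
    return X[keep], y[keep]
```

Applied after oversampling, this removes noisy or borderline points, which is why ENN-based hybrids tend to improve recall on the minority class.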

  • Research Article
  • Cited by 4
  • 10.3390/a17050175
Cross-Project Defect Prediction Based on Domain Adaptation and LSTM Optimization
  • Apr 24, 2024
  • Algorithms
  • Khadija Javed + 3 more

Cross-project defect prediction (CPDP) aims to predict software defects in a target project domain by leveraging information from different source project domains, allowing testers to identify defective modules quickly. However, CPDP models often underperform due to different data distributions between source and target domains, class imbalances, and the presence of noisy and irrelevant instances in both source and target projects. Additionally, standard features often fail to capture sufficient semantic and contextual information from the source project, leading to poor prediction performance in the target project. To address these challenges, this research proposes Smote Correlation and Attention Gated recurrent unit based Long Short-Term Memory optimization (SCAG-LSTM), which first employs a novel hybrid technique that extends the synthetic minority over-sampling technique (SMOTE) with edited nearest neighbors (ENN) to rebalance class distributions and mitigate the issues caused by noisy and irrelevant instances in both source and target domains. Furthermore, correlation-based feature selection (CFS) with best-first search (BFS) is utilized to identify and select the most important features, aiming to reduce the differences in data distribution among projects. Additionally, SCAG-LSTM integrates bidirectional gated recurrent unit (Bi-GRU) and bidirectional long short-term memory (Bi-LSTM) networks to enhance the effectiveness of the long short-term memory (LSTM) model. These components efficiently capture semantic and contextual information as well as dependencies within the data, leading to more accurate predictions. Moreover, an attention mechanism is incorporated into the model to focus on key features, further improving prediction performance. 
Experiments are conducted on apache_lucene, equinox, eclipse_jdt_core, eclipse_pde_ui, and mylyn (AEEEM) and predictor models in software engineering (PROMISE) datasets and compared with active learning-based method (ALTRA), multi-source-based cross-project defect prediction method (MSCPDP), the two-phase feature importance amplification method (TFIA) on AEEEM and the two-phase transfer learning method (TPTL), domain adaptive kernel twin support vector machines method (DA-KTSVMO), and generative adversarial long-short term memory neural networks method (GB-CPDP) on PROMISE datasets. The results demonstrate that the proposed SCAG-LSTM model enhances the baseline models by 33.03%, 29.15% and 1.48% in terms of F1-measure and by 16.32%, 34.41% and 3.59% in terms of Area Under the Curve (AUC) on the AEEEM dataset, while on the PROMISE dataset it enhances the baseline models’ F1-measure by 42.60%, 32.00% and 25.10% and AUC by 34.90%, 27.80% and 12.96%. These findings suggest that the proposed model exhibits strong predictive performance.

  • Research Article
  • Cited by 1
  • 10.1038/s41598-025-13754-1
DDoS classification of network traffic in software defined networking SDN using a hybrid convolutional and gated recurrent neural network
  • Aug 9, 2025
  • Scientific Reports
  • Ahmed M Elshewey + 4 more

Deep learning (DL) has emerged as a powerful tool for intelligent cyberattack detection, especially Distributed Denial-of-Service (DDoS) in Software-Defined Networking (SDN), where rapid and accurate traffic classification is essential for ensuring security. This paper presents a comprehensive evaluation of six deep learning models (Multilayer Perceptron (MLP), one-dimensional Convolutional Neural Network (1D-CNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), and a proposed hybrid CNN-GRU model) for binary classification of network traffic into benign or attack classes. The experiments were conducted on an SDN traffic dataset initially exhibiting class imbalance. To address this, Synthetic Minority Over-sampling Technique (SMOTE) was applied, resulting in a balanced dataset of 24,500 samples (12,250 benign and 12,250 attacks). A robust preprocessing pipeline followed, including missing value verification (no missing values were found), feature normalization using StandardScaler to standardize numerical values, reshaping the data into 3D format to fit temporal models like CNN and GRU, and stratified train-test split (80% training, 20% testing) to maintain class distribution. The CNN-GRU model integrates a 1D convolutional layer for spatial pattern extraction and a GRU layer for temporal sequence learning, followed by dense layers with dropout regularization. The model was trained using the Adam optimizer with early stopping to prevent overfitting. Among all models, the CNN-GRU hybrid achieved perfect test performance, with 100% accuracy, 1.0000 precision, recall, and F1-score, and an ROC AUC of 1.0000. It also demonstrated exceptional generalization, achieving a mean cross-validation (CV) accuracy of 99.70% ± 0.09% and a mean AUC of 1.0000 ± 0.0000 across 5-fold stratified cross-validation. 
While individual models such as GRU, 1D-CNN, and LSTM also showed strong performance, the CNN-GRU hybrid consistently outperformed them in both accuracy and stability. These results validate the effectiveness of combining convolutional and recurrent architectures, augmented with data balancing via SMOTE, for highly accurate SDN-based intrusion detection. Supplementary information: the online version contains supplementary material available at 10.1038/s41598-025-13754-1.
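The preprocessing pipeline described above (standardization, 3D reshaping for temporal models, and a stratified train-test split) can be sketched with scikit-learn. This is a minimal sketch with a hypothetical `prepare_traffic` helper, not the paper's implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def prepare_traffic(X, y, test_size=0.2, seed=42):
    # Standardize features, reshape to 3D (samples, timesteps, channels)
    # for convolutional/recurrent layers, and split with stratification
    # so the class ratio is preserved in both partitions
    X_std = StandardScaler().fit_transform(X)
    X_3d = X_std.reshape(len(X_std), X_std.shape[1], 1)
    return train_test_split(X_3d, y, test_size=test_size,
                            stratify=y, random_state=seed)
```

Stratification matters here: after SMOTE balancing, it keeps both partitions at the same benign/attack ratio, so test accuracy is not inflated by an easier class mix.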

  • Research Article
  • 10.62411/jcta.12021
Improving Credit Card Fraud Detection with Ensemble Deep Learning-Based Models: A Hybrid Approach Using SMOTE-ENN
  • Feb 12, 2025
  • Journal of Computing Theories and Applications
  • Lossan Bonde + 1 more

Advances in information and internet technologies have significantly transformed the business environment, including the financial sector. The COVID-19 pandemic has further accelerated this digital adoption, expanding the e-commerce industry and highlighting the necessity for secure online transactions. Credit Card Fraud Detection (CCFD) remains critical as the prevalence of fraudulent activities continues to rise with the increasing volume of online transactions. Traditional methods for detecting fraud, such as rule-based systems and basic machine learning models, tend to fail to keep pace with fraudsters' evolving tactics. This study proposes a novel ensemble deep learning-based approach that combines Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), and a Multilayer Perceptron (MLP) with the Synthetic Minority Oversampling Technique and Edited Nearest Neighbors (SMOTE-ENN) to address class imbalance and improve detection accuracy. The methodology integrates CNN for feature extraction, GRU for sequential transaction analysis, and the MLP as a meta-learner in a stacking framework. By leveraging SMOTE-ENN, the proposed approach enhances data balance and prevents overfitting. With synthetic data, the robustness and accuracy of the model have been improved, particularly in scenarios where fraudulent examples are scarce. Experiments conducted on real-world credit card transaction datasets establish that our approach outperforms existing methods, achieving higher performance across metrics.

  • Research Article
  • Cited by 28
  • 10.1088/1402-4896/acea05
Data augmentation and hybrid feature amalgamation to detect audio deep fake attacks
  • Aug 3, 2023
  • Physica Scripta
  • Nidhi Chakravarty + 1 more

The ability to distinguish between authentic and fake audio has become increasingly difficult due to the increasing accuracy of text-to-speech models, posing a serious threat to speaker verification systems. Furthermore, audio deepfakes are becoming a more likely source of deception with the development of sophisticated methods for producing synthetic voice. The ASVspoof dataset has recently been used extensively in research on the detection of audio deepfakes, together with a variety of machine and deep learning methods. The work proposed in this paper combines data augmentation techniques with a hybrid feature extraction method at the front end. Two variants of audio augmentation and the Synthetic Minority Over Sampling Technique (SMOTE) have been used, each combined individually with Mel Frequency Cepstral Coefficients (MFCC), Gammatone Cepstral Coefficients (GTCC), and a hybrid of these two feature extraction methods for front-end feature extraction. At the back end, two deep learning models, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), and two Machine Learning (ML) classifiers, Random Forest (RF) and Support Vector Machine (SVM), have been used. The ASVspoof 2019 Logical Access (LA) partition was used for training and evaluation, and the ASVspoof 2021 deepfake partition for testing. Analysis of the results shows that the combination of MFCC+GTCC with SMOTE at the front end and LSTM at the back end outperformed all other models, with 99% test accuracy and a 1.6% Equal Error Rate (EER) on the deepfake partition. This best combination was also tested on the DEepfake CROss-lingual (DECRO) dataset. To assess the effectiveness of the proposed model under noisy scenarios, we analysed the best model under noisy conditions by adding babble noise, street noise, and car noise to the test data.

  • Book Chapter
  • Cited by 6
  • 10.1007/978-3-030-55180-3_28
Comparison of Hybrid Recurrent Neural Networks for Univariate Time Series Forecasting
  • Aug 25, 2020
  • Anibal Flores + 2 more

The work presented in this paper aims to improve the accuracy of forecasting models on univariate time series. To this end, it experiments with hybrid models of two and four layers based on recurrent neural networks, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). Two time series from NASA's repository are used, corresponding to downward thermal infrared and all-sky insolation incident on a horizontal surface. In the first time series, the two-layer hybrid models (LSTM + GRU and GRU + LSTM) outperformed the non-hybrid models (LSTM + LSTM and GRU + GRU), while only two of the six four-layer hybrid models (GRU + LSTM + GRU + LSTM and LSTM + LSTM + GRU + GRU) outperformed the non-hybrid models (LSTM + LSTM + LSTM + LSTM and GRU + GRU + GRU + GRU). In the second time series, only one of the two hybrid models (LSTM + GRU) outperformed the two non-hybrid models (LSTM + LSTM and GRU + GRU), while none of the four-layer hybrid models could exceed the results of the non-hybrid models.

  • Research Article
  • Cited by 8
  • 10.1109/access.2023.3251745
RFSE-GRU: Data Balanced Classification Model for Mobile Encrypted Traffic in Big Data Environment
  • Jan 1, 2023
  • IEEE Access
  • Murat Dener + 2 more

With the widespread use of mobile technologies and the Internet, traffic in mobile networks is increasing. This situation has made the classification of traffic an important element for data security and network management. However, encryption of traffic in modern networks makes it difficult to classify traffic with traditional methods. In this study, a unique deep learning-based classification model is proposed for the classification of encrypted mobile traffic data. The proposed model is a classification model called RFSE-GRU, which combines the Gated Recurrent Units (GRU) algorithm, feature selection and data balancing. The features that are more meaningful in the classification process are determined by selecting the features with the Random Forest algorithm. In addition, Synthetic Minority Oversampling Technique (SMOTE) oversampling algorithm and Edited Nearest Neighbor (ENN) undersampling algorithm were used together to reduce the negative impact of data imbalance on classification performance. The study was carried out on Apache Spark big data platform in Google Colab environment. In the study, ISCX VPN-Non VPN and UTMobileNet2021 datasets were used. Binary and multiclass classifications were made for the ISCX VPN-Non VPN dataset, and multiclass classifications were made for the UTMobileNet2021 dataset by using various algorithms on the datasets. The proposed model has been compared with eleven different algorithms and hybrid methods. At the same time, the effect of data balancing and feature selection on classification performance is examined. As a result, the proposed model achieved 93.91%, 82.68% and 96.83% accuracy rates in ISCX VPN-Non VPN binary and multiclass, UTMobileNet2021 multiclass classifications, respectively.
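The Random Forest feature selection step that RFSE-GRU relies on can be sketched with scikit-learn's impurity-based importances. This is a hedged illustration with a hypothetical `top_features` helper, not the RFSE-GRU implementation; the number of trees and features kept are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def top_features(X, y, n_keep=3, seed=0):
    # Rank features by Random Forest impurity importance, keep the top n
    rf = RandomForestClassifier(n_estimators=50, random_state=seed)
    rf.fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return np.sort(order[:n_keep])
```

Restricting the downstream GRU classifier to the selected columns shrinks the input dimensionality, which is the point of combining feature selection with a recurrent model in a big-data setting.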

  • Research Article
  • 10.1080/10255842.2025.2530648
Drug usage classification based on personality and demographic features using a combination of sampling and machine learning algorithms
  • Jul 7, 2025
  • Computer Methods in Biomechanics and Biomedical Engineering
  • Shuoxu Zhang

Drug use stems from biopsychosocial factors. This study classified 18 drug types using personality and demographic features. After preprocessing, three sampling techniques (Random Oversampling, Synthetic Minority Over-sampling Technique using Euclidean Norm (SMOTEN), and Synthetic Minority Over-sampling Technique using Euclidean Norm - Edited Nearest Neighbors (SMOTEENN)) and seven machine learning (ML) models (Random Forest (RF), Extreme Gradient Boosting (XGBoost), Decision Tree (DT), Extra Tree, Support Vector Classification (SVC), Linear SVC, and Logistic Regression (LR)) were applied to build a robust, accurate prediction model for drug classification. Random Over Sampler and Extra Trees improved F1 scores on unbalanced data, as shown in a case study.

  • Research Article
  • Cited by 2
  • 10.1016/j.heliyon.2024.e28569
AI-supported estimation of safety critical wind shear-induced aircraft go-around events utilizing pilot reports
  • Mar 21, 2024
  • Heliyon
  • Afaq Khattak + 4 more

The occurrence of wind shear and severe thunderstorms during the final approach phase contributes to nearly half of all aviation accidents. Pilots usually employ the go-around procedure in order to lower the likelihood of an unsafe landing. However, multiple factors influence the go-arounds induced by wind shear. In order to predict wind shear-induced go-arounds, this study utilized a cutting-edge AI-based Combined Kernel and Tree Boosting (KTBoost) framework with various data augmentation strategies. First, the KTBoost model was trained, tested, and compared to other Machine Learning models using data extracted from Hong Kong International Airport (HKIA)-based Pilot Reports for the years 2017–2021. The performance evaluation revealed that the KTBoost model with Synthetic Minority Oversampling Technique - Edited Nearest Neighbor (SMOTE-ENN)-augmented data demonstrated superior performance as measured by the F1-score (94.37%) and G-mean (94.87%). Subsequently, the SHapley Additive exPlanations (SHAP) approach was employed to interpret the KTBoost model using data treated with the SMOTE-ENN technique. According to the findings, flight type, wind shear magnitude, and approach runway contributed the most to wind shear-induced go-arounds. Compared to international flights, Hong Kong-based airlines endured the highest number of wind shear-induced go-arounds. Tailwind shear contributed more to go-arounds than headwind shear. The runways with the most wind shear-induced go-arounds were 07C and 07R.

  • Research Article
  • Cited by 4
  • 10.1016/j.apacoust.2021.108618
Class-imbalanced voice pathology classification: Combining hybrid sampling with optimal two-factor random forests
  • Jan 20, 2022
  • Applied Acoustics
  • Xiaojun Zhang + 4 more


  • Conference Article
  • Cited by 1
  • 10.1109/iemecon56962.2023.10092325
Credit Risk Prediction using Extra Trees Ensemble Method
  • Feb 10, 2023
  • Trishita Saha + 4 more

Credit risk is the measurement of a person’s likelihood of being able to pay back a loan borrowed from the bank in the future. The bank lends money only if it judges that the borrower will be able to repay it later. Because the entire computation of the borrower’s assets is done manually, without the assistance of cutting-edge tools or technologies, determining whether to lend money to the borrower is laborious, time-consuming, and subject to human error. A bad prediction could cost the bank a lot of money if the borrower is unable to repay the money lent to them. To overcome these problems, this paper proposes an expert system named Expert System for Credit Risk Prediction using SMOTE and ENN (ESCRPSE), which uses a combination of an oversampling technique, the Synthetic Minority Oversampling Technique (SMOTE), and an undersampling technique, the Edited Nearest Neighbor (ENN), to deal with the class imbalance problem, and uses the ensemble bagging technique Extra Trees (ET) to make the prediction. Two datasets have been used for the proposed paper. The accuracy and F1-score achieved by the proposed model ESCRPSE are compared with different single-classifier-based models and various ensemble models in the literature, and the proposed model is observed to outperform them greatly.
