Early prediction of final body weight in Hanwoo steers using machine and deep learning models.
Accurate early prediction of final body weight (BW) is essential for optimized feeding strategies and slaughter planning in beef cattle production. This study evaluated the performance of three machine learning models (k-nearest neighbors, Random Forest, eXtreme Gradient Boosting), and one deep learning model [long short-term memory (LSTM)] to forecast the final BW of Hanwoo steers at various time points prior to slaughter. A total of 196 Hanwoo steers (7 to 31 months of age) from a commercial farm were utilized. Input data included monthly BW and feed nutrient intake (crude protein, ether extract, neutral detergent fiber, total digestible nutrients) across three growth stages. Six input configurations (I1-I6) were designed to predict the final BW at 17, 13, 9, 6, 3, and 1 month(s) before slaughter, with a target age of 31 months. The machine and deep learning models were assessed by five-fold cross-validation (training set) and a test set and evaluated via the coefficient of determination (R²) and root mean squared error (RMSE). Among the tested models, the LSTM achieved the highest prediction accuracy across all the configurations. The performance of the LSTM improved as the prediction point approached the target slaughter age: I1 (R² = 0.60, RMSE = 52.80), I2 (0.72, 45.40), I3 (0.76, 40.92), I4 (0.83, 35.84), I5 (0.90, 33.12), and I6 (0.97, 22.62). These results demonstrated that LSTM effectively captured temporal dependencies in sequential data, enabling more accurate BW forecasting under commercial conditions. While I6 achieved the highest prediction accuracy, the 3-6 month predictions (I4 and I5) demonstrated reasonably high accuracy, which could provide a practical timeframe for farm-level management and planning. This approach could be utilized in evidence-based decision-making in Hanwoo production by providing reliable predictions well ahead of slaughter.
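The five-fold cross-validation with R² and RMSE described above can be sketched with one of the study's ML models (Random Forest). This is a minimal sketch on synthetic stand-in data, since the 196-steer dataset (monthly BW and nutrient intakes) is not public:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical stand-in for the study's inputs (monthly BW plus CP, EE,
# NDF, and TDN intakes for 196 steers); values here are synthetic.
n_steers, n_features = 196, 10
X = rng.normal(size=(n_steers, n_features))
y = 700 + 30 * X[:, 0] + rng.normal(scale=20, size=n_steers)  # final BW, kg

kf = KFold(n_splits=5, shuffle=True, random_state=0)
r2s, rmses = [], []
for train_idx, test_idx in kf.split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    r2s.append(r2_score(y[test_idx], pred))
    rmses.append(np.sqrt(mean_squared_error(y[test_idx], pred)))

print(f"mean R2 = {np.mean(r2s):.2f}, mean RMSE = {np.mean(rmses):.2f} kg")
```

The same loop applies to the other tested models; only the estimator inside the fold changes.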
- Research Article
20
- 10.1371/journal.pone.0317619
- Jan 23, 2025
- PloS one
This study presents a comprehensive comparative analysis of Machine Learning (ML) and Deep Learning (DL) models for predicting Wind Turbine (WT) power output based on environmental variables such as temperature, humidity, wind speed, and wind direction. The ML models examined were Linear Regression (LR), Support Vector Regressor (SVR), Random Forest (RF), Extra Trees (ET), Adaptive Boosting (AdaBoost), Categorical Boosting (CatBoost), Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM), alongside the DL models Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), and Convolutional Neural Network (CNN). Using a dataset of 40,000 observations, the models were assessed based on R-squared, Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). ET achieved the highest performance among ML models, with an R-squared value of 0.7231 and an RMSE of 0.1512. Among DL models, ANN demonstrated the best performance, achieving an R-squared value of 0.7248 and an RMSE of 0.1516. The results show that the DL models, especially ANN, performed slightly better than the best ML models, suggesting they are better suited to modeling non-linear dependencies in multivariate data. Preprocessing techniques, including feature scaling and parameter tuning, improved model performance by enhancing data consistency and optimizing hyperparameters. Compared to previous benchmarks, the performance of both ANN and ET demonstrates significant predictive accuracy gains in WT power output forecasting. This study's novelty lies in directly comparing a diverse range of ML and DL algorithms while highlighting the potential of advanced computational approaches for renewable energy optimization.
- Research Article
8
- 10.1007/s11356-024-35764-8
- Jan 1, 2025
- Environmental Science and Pollution Research
Human-induced global warming, primarily attributed to the rise in atmospheric CO2, poses a substantial risk to the survival of humanity. While most research focuses on predicting annual CO2 emissions, which are crucial for setting long-term emission mitigation targets, the precise prediction of daily CO2 emissions is equally vital for setting short-term targets. This study examines the performance of 14 models in predicting daily CO2 emissions data from 1/1/2022 to 30/9/2023 across the top four polluting regions (China, India, the USA, and the EU27&UK). The 14 models comprise four statistical models (ARMA, ARIMA, SARMA, and SARIMA), three machine learning models (support vector machine (SVM), random forest (RF), and gradient boosting (GB)), and seven deep learning models (artificial neural network (ANN); recurrent neural network variants, namely gated recurrent unit (GRU), long short-term memory (LSTM), and bidirectional LSTM (BiLSTM); and three hybrid combinations of CNN-RNN). Performance evaluation employs four metrics (R2, MAE, RMSE, and MAPE). The results show that the machine learning (ML) and deep learning (DL) models, with higher R2 values (0.714–0.932) and lower RMSE values (0.247–0.480), outperformed the statistical models, which had R2 values of −0.060–0.719 and RMSE values of 0.537–1.695, in predicting daily CO2 emissions across all four regions. The performance of the ML and DL models was further enhanced by differencing, a technique that improves accuracy by ensuring stationarity and creating additional features and patterns from which the model can learn. Additionally, applying ensemble techniques such as bagging and voting improved the performance of the ML models by approximately 9.6%, whereas hybrid combinations of CNN-RNN enhanced the performance of the RNN models. In summary, the performance of the ML and DL models was relatively similar.
However, due to the high computational requirements associated with DL models, the recommended models for daily CO2 emission prediction are ML models using the ensemble techniques of voting and bagging. These models can assist in accurately forecasting daily emissions, aiding authorities in setting targets for CO2 emission reduction.
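The differencing step the authors credit with improving ML/DL accuracy can be illustrated on a toy daily series (synthetic trend plus weekly cycle; the real data are the four regions' daily CO2 emissions):

```python
import numpy as np

# Toy daily series with a linear trend and a weekly (lag-7) seasonal cycle.
t = np.arange(200)
rng = np.random.default_rng(1)
series = 0.05 * t + 2 * np.sin(2 * np.pi * t / 7) + rng.normal(scale=0.3, size=200)

# First-order differencing removes the linear trend ...
diff1 = np.diff(series)
# ... and seasonal differencing at lag 7 removes the weekly cycle,
# leaving a roughly stationary residual the models can learn from.
diff7 = series[7:] - series[:-7]

print(series.std(), diff1.std(), diff7.std())  # variability shrinks after differencing
```

The differenced series (and their lags) can then be appended as extra features, which is the "additional features and patterns" effect the abstract mentions.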
- Research Article
5
- 10.54254/2755-2721/55/20241475
- Jul 25, 2024
- Applied and Computational Engineering
The digital era has transformed the way businesses interact with their customers, with online platforms serving as crucial touchpoints for user engagement. Understanding customer behavior in this context is paramount for enhancing user experience, optimizing marketing strategies, and driving business growth. This study explores the likelihood of customers making purchases based on their clickstream data by employing both machine learning and deep learning techniques. The research uses the machine learning models Random Forest (RF), Gradient Boosting Decision Trees (GBDT), and Extreme Gradient Boosting (XGBoost), and the deep learning models Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM), to predict whether customers will purchase items, using 33,040,175 records from the clicks file and 1,177,769 records from the buys file of real e-commerce customers. The results show that both machine learning and deep learning can accurately forecast the purchasing behavior of customers, with an accuracy of around 72 to 75 percent. The machine learning models attain their highest prediction accuracy when using a sliding window of 6 days. Among the deep learning models, the LSTM model with 50 layers shows the highest accuracy in predicting customers' willingness to purchase an item. Compared with previous studies, the three machine learning models narrow the range of days, give more accurate predictions, and improve the model. Both RNN and LSTM show similar accuracy for customer behavior. This research asserts that both machine learning and deep learning models give reliable results on whether customers will purchase a product, and that there is no significant difference between machine learning and deep learning on this classification task.
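The 6-day sliding window used for the ML models is a feature-construction step; a minimal sketch with invented per-day click counts (the abstract does not describe the exact feature set):

```python
import numpy as np

def sliding_window_counts(daily_clicks, window=6):
    """Aggregate per-day click counts into trailing `window`-day features.

    Returns one feature row per day d: the counts for days
    d-window+1 .. d, zero-padded at the start of the series.
    """
    padded = np.concatenate([np.zeros(window - 1), daily_clicks])
    return np.lib.stride_tricks.sliding_window_view(padded, window)

clicks = np.array([3, 0, 5, 2, 7, 1, 4, 6])  # hypothetical daily counts
X = sliding_window_counts(clicks, window=6)
print(X.shape)  # (8, 6): one 6-day history per day
```

Each row of `X` would then be paired with a buy/no-buy label for that day and fed to RF, GBDT, or XGBoost.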
- Research Article
9
- 10.1038/s41598-024-82931-5
- Dec 28, 2024
- Scientific Reports
Failure to predict stroke promptly may lead to delayed treatment, causing severe consequences like permanent neurological damage or death. Early detection using deep learning (DL) and machine learning (ML) models can enhance patient outcomes and mitigate the long-term effects of strokes. The aim of this study is to compare these models, exploring their efficacy in predicting stroke. This study analyzed a dataset comprising 663 records from patients hospitalized at Hazrat Rasool Akram Hospital in Tehran, Iran, including 401 healthy individuals and 262 stroke patients. A total of eight established ML (SVM, XGB, KNN, RF) and DL (DNN, FNN, LSTM, CNN) models were utilized to predict stroke. Techniques such as 10-fold cross-validation and hyperparameter tuning were implemented to prevent overfitting. The study also focused on interpretability through Shapley Additive Explanations (SHAP). Model performance was evaluated based on accuracy, specificity, sensitivity, F1-score, and ROC curve metrics. Among DL models, LSTM showed superior sensitivity at 96.15%, while FNN exhibited better specificity (96.0%), accuracy (96.0%), F1-score (95.0%), and ROC (98.0%). For ML models, RF displayed higher sensitivity (99.9%), accuracy (99.0%), specificity (100%), F1-score (99.0%), and ROC (99.9%). Overall, RF outperformed all models, while DL models surpassed the remaining ML models in most metrics. DL models (CNN, LSTM, DNN, FNN) achieved sensitivities from 93.0 to 96.15%, specificities from 80.0 to 96.0%, accuracies from 92.0 to 96.0%, F1-scores from 87.34 to 95.0%, and ROC scores from 95.0 to 98.0%. In contrast, ML models (KNN, XGB, SVM) showed sensitivities between 29.0% and 94.0%, specificities between 89.47% and 96.0%, accuracies between 71.0% and 95.0%, F1-scores between 44.0% and 95.0%, and ROC scores between 64.0% and 95.0%.
This study demonstrates the efficacy of DL and ML models in predicting stroke, with the RF models outperforming all others in key metrics. While DL models generally surpassed ML models, RF’s exceptional performance highlights the potential of combining these technologies for early stroke detection, significantly improving patient outcomes by preventing severe consequences like permanent neurological damage or death.
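The sensitivity, specificity, and F1 figures reported above all derive from the binary confusion matrix; a minimal sketch with invented stroke labels and predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

# Invented predictions on a binary stroke label (1 = stroke), just to show
# how the paper's metrics relate to the confusion matrix entries.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall on the positive (stroke) class
specificity = tn / (tn + fp)   # recall on the negative class
print(sensitivity, specificity, f1_score(y_true, y_pred))
# sensitivity 0.75, specificity ~0.83, F1 0.75 for these toy labels
```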
- Research Article
23
- 10.1007/s13755-024-00281-y
- Mar 6, 2024
- Health Information Science and Systems
Purpose: The main aim of our study was to explore the utility of artificial intelligence (AI) in diagnosing autism spectrum disorder (ASD). The study primarily focused on using machine learning (ML) and deep learning (DL) models to detect potential ASD cases by analyzing text inputs, especially from social media platforms like Twitter. This is to overcome the ongoing challenges in ASD diagnosis, such as the requirement for specialized professionals and extensive resources. Timely identification, particularly in children, is essential to provide immediate intervention and support, thereby improving the quality of life for affected individuals. Methods: We employed natural language processing (NLP) techniques along with ML models such as decision trees, extreme gradient boosting (XGB), and the k-nearest neighbors algorithm (KNN), and DL models such as recurrent neural networks (RNN), long short-term memory (LSTM), bidirectional long short-term memory (Bi-LSTM), and bidirectional encoder representations from transformers (BERT and BERTweet). We extracted a dataset of 404,627 tweets from Twitter users using the platform's API and classified them based on whether they were written by individuals claiming to have ASD (ASD users) or by those without ASD (non-ASD users). From this dataset, we used a subset of 90,000 tweets (45,000 from each classification group) for the training and testing of these models. Results: The application of our AI models yielded promising results, with the predictive model reaching an accuracy of almost 88% when classifying texts that potentially originated from individuals with ASD. Conclusion: Our research demonstrated the potential of using AI, particularly DL models, in enhancing the accuracy of ASD detection and diagnosis. This innovative approach signifies the critical role AI can play in advancing early diagnostic techniques, enabling better patient outcomes and underlining the importance of early identification of ASD, especially in children.
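A minimal sketch of the NLP text-classification pipeline the Methods describe (features extracted from tweet text feeding a classifier). The corpus and labels here are invented, and the study's DL models (LSTM, BERT) are swapped for TF-IDF plus logistic regression purely for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus standing in for the 90,000-tweet subset;
# label 1 = self-identified ASD user, 0 = non-ASD user.
texts = [
    "my autism diagnosis changed how i plan my day",
    "sensory overload at the store today, needed my headphones",
    "great game last night, what a comeback",
    "trying a new pasta recipe this weekend",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["the stadium was packed for the match"]))
```

In the paper, the vectorizer/classifier pair is replaced by the stronger sequence models (LSTM, Bi-LSTM, BERTweet) trained on the balanced 45,000/45,000 split.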
- Preprint Article
- 10.5194/ems2025-562
- Jul 16, 2025
Machine learning (ML) and deep learning (DL) models can play an important role in modelling complicated processes, a capability necessary for hydrological and climate-related applications. Generally, ML models utilize precipitation and temperature time series of a basin as input to develop a lumped rainfall-runoff model to simulate streamflow at the basin outlet. However, when the basin is divided into several sub-basins, Graph Neural Networks (GNN) can treat each sub-basin as a node and link the nodes together using a connectivity matrix to account for spatial variations of hydroclimatic variables. In this study, a GNN and various ML models with different types of architecture, ranging from neural networks to tree-based and gradient-boosting methods, were employed for daily streamflow simulation over different case studies. For each case study, the basin was divided into a few sub-basins for which daily precipitation and temperature data were aggregated and used as input. For training the GNN, the connectivity matrix of sub-basins was also used as input. In each case, 75% of the historical records were utilized to train the GNN and the ML models, i.e., artificial neural networks, support vector machine, decision tree, random forest, eXtreme Gradient Boosting (XGBoost), Light Gradient-Boosting Machine (LightGBM), and Category Boosting (CatBoost), while the rest was used for testing. Streamflow simulation was conducted with and without considering seasonality and lag times. The obtained results clearly demonstrate that considering seasonality and time lags can enhance the accuracy of streamflow predictions based on the Kling–Gupta efficiency (KGE). Furthermore, the GNN with seasonality and time lags achieved promising results across different case studies, with KGE > 0.85 for training and KGE > 0.59 for testing data. Among ML models, boosting models, e.g., LightGBM and XGBoost, performed slightly better than the others.
Finally, this comparative analysis provides valuable insights for ML/DL applications in climate change impact assessments. Acknowledgements: This research work was carried out as part of the TRANSCEND project with funding received from the European Union Horizon Europe Research and Innovation Programme under Grant Agreement No. 10108411.
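The Kling–Gupta efficiency used to score the streamflow simulations can be computed directly from its standard (2009) definition:

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    linear correlation, alpha = std(sim)/std(obs) is the variability ratio,
    and beta = mean(sim)/mean(obs) is the bias ratio."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(kling_gupta_efficiency(obs, obs))        # perfect match -> 1.0
print(kling_gupta_efficiency(obs * 0.8, obs))  # uniformly biased, lower KGE
```

A KGE of 1 is a perfect simulation; the paper's GNN thresholds (>0.85 training, >0.59 testing) are read against this scale.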
- Research Article
10
- 10.1111/exsy.13153
- Oct 5, 2022
- Expert Systems
The increase in the number of undesired SMS messages, termed smishing messages, and the data imbalance problem have generated great demand for the development of more reliable anti-spam filters. State-of-the-art machine learning approaches are being employed to recognize and separate spam messages. Most recent studies target message classification by using numerous properties and features of the words but fail to consider circumstantial features like long-range dependencies between words, which are extremely important in identifying smishing messages. The idea is to develop an intelligent model that distinguishes between smishing messages and ham messages by adopting a combined approach of regular expressions (Regex), machine learning (ML), and deep learning (DL) models. Regex rules are generated from the dataset's spam messages for the purpose of refining the dataset. Support vector machine (SVM), Multinomial Naive Bayes, and Random Forest are the machine learning models, while long short-term memory (LSTM), bidirectional long short-term memory (Bi-LSTM), stacked LSTM, and stacked Bi-LSTM are the deep learning models. The machine learning and deep learning models are also compared based on the performance evaluation parameters accuracy, precision, recall, and F1 score. It is observed that the deep learning models perform better than the machine learning models, and that introducing regular expressions to the dataset increases the efficiency of both the deep learning and machine learning models.
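A minimal sketch of the Regex-refinement step described above. The rules here are invented for illustration (the paper generates its rules from the dataset's spam messages), assuming each SMS is available as a plain string:

```python
import re

# Illustrative rules of the kind derived from spam messages; the paper's
# actual generated rules are not listed in the abstract.
SMISHING_RULES = [
    re.compile(r"(?i)\b(urgent|winner|claim|prize)\b"),
    re.compile(r"(?i)\bverify\b.{0,30}\baccount\b"),
    re.compile(r"(?i)https?://\S*\b(?:bit\.ly|tinyurl)\b"),
]

def regex_flag(message: str) -> bool:
    """True if any rule fires; flagged messages refine the dataset
    before the ML/DL classifiers see it."""
    return any(rule.search(message) for rule in SMISHING_RULES)

print(regex_flag("URGENT: claim your prize now at http://bit.ly/x"))  # True
print(regex_flag("See you at lunch tomorrow?"))                       # False
```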
- Research Article
- 10.18502/japh.v10i1.18093
- Mar 9, 2025
- Journal of Air Pollution and Health
Introduction: Air pollution is a significant global health challenge, contributing to the deaths of millions of people annually. Among air pollutants, Particulate Matter (PM2.5) is the most harmful to the respiratory system, causing serious health problems. This study focused on predicting PM2.5 in the air of Islamabad, the capital of Pakistan, using machine learning and deep learning models. Materials and methods: Two machine learning models (Decision Tree and Random Forest) and four deep learning models, Multi-Layer Neural Network (MLNN), Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), and Gated Recurrent Unit (GRU), are used in the study. Each model's performance was assessed using statistical indicators including the coefficient of determination (R2), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Relative Root Mean Square Error (RRMSE). The models were also ranked by performance using the compromise programming technique. Results: The machine learning models performed better in the training phase, achieving higher R2 values of 0.98 and 0.97, but could not maintain the same performance in the testing phase, whereas the deep learning models performed well in both the training and testing phases. The MLNN model attained a higher R2 value of 0.98 in training and 0.88 in testing and was ranked first among the PM2.5 prediction models. LSTM, GRU, RNN, Decision Tree, and Random Forest are placed at the 2nd, 3rd, 4th, 5th, and 6th positions, having R2 values of 0.86, 0.87, 0.82, 0.99, and 0.97 during training and 0.71, 0.69, 0.69, 0.75, and 0.85, respectively, during testing. Conclusion: Deep learning models, especially MLNN, showed strong performance in predicting PM2.5 compared to the machine learning models.
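The compromise-programming ranking used above can be sketched as a distance-to-ideal calculation. The R² values are taken from the abstract, but the MAE/RMSE columns here are invented placeholders (the abstract does not report them):

```python
import numpy as np

models = ["MLNN", "LSTM", "GRU", "RNN", "DT", "RF"]
scores = np.array([  # columns: testing R2 (from abstract), MAE, RMSE (invented)
    [0.88, 4.1, 6.0],
    [0.71, 5.9, 8.2],
    [0.69, 6.0, 8.5],
    [0.69, 6.2, 8.8],
    [0.75, 5.5, 8.0],
    [0.85, 4.6, 6.6],
])
benefit = np.array([True, False, False])  # higher-is-better per column

# Normalise each criterion to [0, 1] with 1 = best value in that column.
lo, hi = scores.min(0), scores.max(0)
norm = np.where(benefit, (scores - lo) / (hi - lo), (hi - scores) / (hi - lo))

# Euclidean distance to the ideal point (all ones); smaller = better rank.
dist = np.sqrt(((1.0 - norm) ** 2).sum(axis=1))
for name, d in sorted(zip(models, dist), key=lambda p: p[1]):
    print(f"{name}: {d:.3f}")
```

With these inputs MLNN is best on every criterion, so its distance to the ideal point is zero and it ranks first, matching the study's ordering.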
- Research Article
- 10.1016/j.cmpb.2025.108657
- Apr 1, 2025
- Computer methods and programs in biomedicine
Methods for estimating resting energy expenditure in intensive care patients: A comparative study of predictive equations with machine learning and deep learning approaches.
- Research Article
2
- 10.24237/djes.2025.18112
- Mar 1, 2025
- Diyala Journal of Engineering Sciences
Air pollution is a significant global concern that is continually increasing and threatening both the environment and human health. It is the principal factor leading to the deterioration of Indoor Air Quality (IAQ) in buildings. Carbon dioxide (CO2), generated primarily by human activities, significantly intensifies indoor pollution. The demand for effective IAQ systems has increased due to the necessity for sustainable building development. The artificial intelligence (AI) models presented in this work utilized Machine Learning (ML) and Deep Learning (DL) methodologies to train on the available dataset, which was collected by indoor sensors in Shanghai from November 2016 to March 2017, to predict CO2 concentration and obtain pertinent information. The accuracy and results of ML and DL algorithms may differ depending on the datasets used and the algorithms' suitability for the specific data and application domain. Therefore, a significant benefit would be achieved by finding the best-fitted ML and DL models for the actual datasets and the application area. This necessity was fulfilled through an intensive review of existing DL and ML models. This analysis aims to implement the specified models and assess the efficiency of their predictions by computing several performance metrics, such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Median Absolute Error (MedianAE), and the coefficient of determination (R2). Among the implemented models, the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) produced the best results in forecasting IAQ.
- Research Article
12
- 10.1038/s41598-024-66481-4
- Jul 8, 2024
- Scientific Reports
The need for intubation in methanol-poisoned patients, if not predicted in time, can lead to irreparable complications and even death. Artificial intelligence (AI) techniques like machine learning (ML) and deep learning (DL) greatly aid in accurately predicting intubation needs for methanol-poisoned patients. So, our study aims to assess Explainable Artificial Intelligence (XAI) for predicting intubation necessity in methanol-poisoned patients, comparing deep learning and machine learning models. This study analyzed a dataset of 897 patient records from Loghman Hakim Hospital in Tehran, Iran, encompassing cases of methanol poisoning, including those requiring intubation (202 cases) and those not requiring it (695 cases). Eight established ML (SVM, XGB, DT, RF) and DL (DNN, FNN, LSTM, CNN) models were used. Techniques such as tenfold cross-validation and hyperparameter tuning were applied to prevent overfitting. The study also focused on interpretability through SHAP and LIME methods. Model performance was evaluated based on accuracy, specificity, sensitivity, F1-score, and ROC curve metrics. Among DL models, LSTM showed superior performance in accuracy (94.0%), sensitivity (99.0%), specificity (94.0%), and F1-score (97.0%). CNN led in ROC with 78.0%. For ML models, RF excelled in accuracy (97.0%) and specificity (100%), followed by XGB with sensitivity (99.37%), F1-score (98.27%), and ROC (96.08%). Overall, RF and XGB outperformed other models, with accuracy (97.0%) and specificity (100%) for RF, and sensitivity (99.37%), F1-score (98.27%), and ROC (96.08%) for XGB. ML models surpassed DL models across all metrics, with accuracies from 93.0% to 97.0% for DL and 93.0% to 99.0% for ML. Sensitivities ranged from 98.0% to 99.37% for DL and 93.0% to 99.0% for ML. DL models achieved specificities from 78.0% to 94.0%, while ML models ranged from 93.0% to 100%. F1-scores for DL were between 93.0% and 97.0%, and for ML between 96.0% and 98.27%. 
DL models scored ROC between 68.0% and 78.0%, while ML models ranged from 84.0% to 96.08%. Key features for predicting intubation necessity include GCS at admission, ICU admission, age, longer folic acid therapy duration, elevated BUN and AST levels, VBG_HCO3 at the initial record, and the presence of hemodialysis. This study showcases XAI's effectiveness in predicting intubation necessity in methanol-poisoned patients. ML models, particularly RF and XGB, outperform their DL counterparts, underscoring their potential for clinical decision-making.
- Research Article
1
- 10.11591/ijict.v14i1.pp164-173
- Apr 1, 2025
- International Journal of Informatics and Communication Technology (IJ-ICT)
Rice, a staple food worldwide, is in high demand and production. Its consumption varies across countries, with each nation having its own way of incorporating rice into its diet. Given the global nature of rice, its production is a crucial aspect of ensuring availability, agricultural forecasting, economic stability, and food security. By predicting production, a global plan for rice production and stock can be developed, preventing issues like famine. This paper proposes machine learning (ML) and deep learning (DL) models, namely linear regression, ridge regression, random forest (RF), adaptive boosting (AdaBoost), categorical boosting (CatBoost), extreme gradient boosting (XGBoost), gradient boosting, decision tree, and long short-term memory (LSTM), to predict international rice production. In total, nine ML and DL models are trained and tested on the international dataset, which contains the rice production details of 192 countries over the last 62 years. Notably, linear regression and the LSTM algorithm predict rice production with the highest R-squared (R2) percentages, 98.40% and 98.19%, respectively. These predictions and the developed models can play a vital role in resolving crop-related international problems, uniting the global agricultural community in a common cause.
- Research Article
17
- 10.32604/csse.2023.034324
- Jan 1, 2023
- Computer Systems Science and Engineering
Food choice motives (i.e., mood, health, natural content, convenience, sensory appeal, price, familiarity, ethical concerns, and weight control) play an important role in transforming the current food system to ensure the healthiness of people and the sustainability of the world. Researchers from several domains have presented models addressing issues influencing food choice over the years. However, a multidisciplinary approach is required to better understand how various aspects interact with one another during the decision-making procedure. In this paper, four Deep Learning (DL) models and one Machine Learning (ML) model are utilized to predict weight in pounds based on food choices. The Long Short-Term Memory (LSTM) model, stacked-LSTM model, Convolutional Neural Network (CNN) model, and CNN-LSTM model are the deep learning models used, while the applied ML model is the K-Nearest Neighbor (KNN) regressor. The efficiency of the proposed models was determined based on the error rates obtained from the experimental results. The findings indicated that the Mean Absolute Error (MAE) is 0.0087, the Mean Square Error (MSE) is 0.00011, the Median Absolute Error (MedAE) is 0.006, the Root Mean Square Error (RMSE) is 0.011, and the Mean Absolute Percentage Error (MAPE) is 21. The results demonstrated that the stacked LSTM achieved improved results compared with the LSTM, CNN, CNN-LSTM, and KNN regressor.
- Research Article
6
- 10.1093/eurheartj/ehz748.0670
- Oct 1, 2019
- European Heart Journal
Abstract Background: Advances in precision medicine will require an increasingly individualized prognostic evaluation of patients in order to provide them with appropriate therapy. The traditional statistical methods of predictive modeling, such as SCORE, PROCAM, and Framingham, recommended by the European guidelines for the prevention of cardiovascular disease, are not adapted to all patients and require significant human involvement in the selection, transformation, and imputation of predictive variables. In ROC analysis for the prediction of significant cardiovascular disease (CVD), the areas under the curve are 0.62–0.72 for Framingham, 0.66–0.73 for SCORE, and 0.60–0.69 for PROCAM. To improve on this, we applied machine learning and deep learning models relying on conventional risk factors to 10-year CVD event prediction using longitudinal electronic health records (EHR). Methods: We applied logistic regression (LR) as the machine learning algorithm and a recurrent neural network with long short-term memory (LSTM) units as the deep learning algorithm. We extracted the following features from the longitudinal EHR: demographics, vital signs, diagnoses (ICD-10-CM: I21–I22.9, I61–I63.9), and medication. A challenge at this step is that nearly 80 percent of clinical information in the EHR is "unstructured" and contains errors and typos. Correct handling of missing data is important for properly training the deep learning and machine learning algorithms. The study cohort included patients between the ages of 21 and 75 with a dynamic observation window. In total, the dataset contained 31,517 individuals, but only 3,652 individuals had all features present or missing feature values that could easily be imputed. Among these 3,652 individuals, 29.4% had a CVD, the mean age was 49.4 years, and 68.2% were female. Evaluation: We randomly divided the dataset into a training and a test set with an 80/20 split.
The LR was implemented with Python Scikit-Learn, and the LSTM model was implemented with Keras using TensorFlow as the backend. Results: We applied the machine learning and deep learning models for CVD prediction using the same features as the traditional risk scales and, separately, longitudinal EHR features. The machine learning model (LR) achieved an AUROC of 0.74–0.76 and the deep learning model (LSTM) 0.75–0.76. Using features from the EHR, both the logistic regression and deep learning models improved the AUROC to 0.78–0.79. Conclusion: The machine learning models outperformed traditional clinically used predictive models for CVD risk prediction (i.e., the SCORE, PROCAM, and Framingham equations). This approach was used to create a clinical decision support system (CDSS) that uses both traditional risk scales and models based on neural networks. Especially important is the fact that the system can calculate cardiovascular disease risks automatically and recalculate them immediately after new information is added to the EHR. The results are delivered to the user's personal account.
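AUROC, the metric reported in the Results, can be computed with scikit-learn from predicted event probabilities; the labels and probabilities below are invented for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Invented predicted 10-year CVD-event probabilities vs. true outcomes
# (1 = event occurred); the paper reports AUROC in the 0.74-0.79 range.
y_true = np.array([0, 0, 0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.3, 0.2, 0.7, 0.8, 0.6, 0.5, 0.9, 0.35, 0.55])

print(f"AUROC = {roc_auc_score(y_true, y_prob):.2f}")  # -> AUROC = 0.92
```

AUROC equals the probability that a randomly chosen event patient receives a higher predicted risk than a randomly chosen non-event patient, which is why it suits comparing LR and LSTM on the same cohort.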
- Research Article
6
- 10.3390/f15050839
- May 10, 2024
- Forests
Satellite remote sensing plays a significant role in the detection of smoke from forest fires. However, existing methods for detecting forest fire smoke from remote sensing images rely solely on the information provided by the images, overlooking the positional information and brightness temperature of the fire spots. This oversight significantly increases the probability of misjudging smoke plumes. This paper proposes a smoke detection model, Forest Smoke-Fire Net (FSF Net), which integrates wildfire smoke images with the dynamic brightness temperature information of the region. The MODIS_Smoke_FPT dataset was constructed using a Moderate Resolution Imaging Spectroradiometer (MODIS), the meteorological information at the site of the fire, and elevation data to determine the location of smoke and the brightness temperature threshold for wildfires. Deep learning and machine learning models were trained separately using the image data and fire spot area data provided by the dataset. The performance of the deep learning model was evaluated using the mean average precision (mAP) metric, while the regression performance of the machine learning model was assessed with Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The selected machine learning and deep learning models were then integrated. The results show that the Mask_RCNN_ResNet50_FPN and XGR models performed best among the deep learning and machine learning models, respectively. Combining the two models achieved good smoke detection results (smoke precision = 89.12%). Compared with wildfire smoke detection models that rely solely on image recognition, the model proposed in this paper demonstrates stronger applicability in improving the precision of smoke detection, thereby providing beneficial support for the timely detection of forest fires and applications of remote sensing.