An Effective Forest Fire Detection Framework Using Heterogeneous Wireless Multimedia Sensor Networks

Abstract

With improvements in the Internet of Things (IoT), surveillance systems have recently become more accessible. At the same time, optimizing the energy requirements of smart sensors, especially for data transmission, has always been very important, and the energy efficiency of IoT systems has been the subject of numerous studies. For environmental monitoring scenarios, it is possible to extract more accurate information using smart multimedia sensors. However, multimedia data transmission is an expensive operation. In this study, a novel hierarchical approach is presented for the detection of forest fires. The proposed framework introduces a new approach in which multimedia and scalar sensors are used hierarchically to minimize the transmission of visual data. A lightweight deep learning model is also developed for devices at the edge of the network to improve detection accuracy and reduce the traffic between the edge devices and the sink. The framework is evaluated using a real testbed, network simulations, and 10-fold cross-validation in terms of energy efficiency and detection accuracy. Based on the results of our experiments, the validation accuracy of the proposed system is 98.28%, and the energy saving is 29.94%. The proposed deep learning model’s validation accuracy is very close to the accuracy of the best-performing architectures when the existing studies and lightweight architectures are considered. In terms of suitability for edge computing, the proposed approach is superior to the existing ones, with reduced computational requirements and model size.
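The two-tier idea described above, in which cheap scalar sensing gates the expensive multimedia pipeline, can be sketched as follows. The thresholds, function names, and the stand-in classifier are illustrative assumptions, not details from the paper:

```python
# Sketch of the hierarchical sensing idea: scalar sensors gate the
# camera and the edge classifier, so visual data is transmitted only
# when a candidate fire is both sensed and visually confirmed.
# All thresholds and the stand-in classifier are assumptions.

TEMP_THRESHOLD_C = 50.0   # scalar trigger: unusually high temperature
SMOKE_THRESHOLD = 0.6     # scalar trigger: normalized smoke reading

def scalar_trigger(temperature_c, smoke_level):
    """Stage 1: cheap scalar check decides whether to wake the camera."""
    return temperature_c > TEMP_THRESHOLD_C or smoke_level > SMOKE_THRESHOLD

def edge_classifier(frame):
    """Stage 2 stand-in for the lightweight CNN running on the edge device.
    Here a frame is just a dict with a precomputed 'fire_score' in [0, 1]."""
    return frame["fire_score"] > 0.5

def process_reading(temperature_c, smoke_level, capture_frame):
    """Return the frame to transmit to the sink, or None (nothing sent)."""
    if not scalar_trigger(temperature_c, smoke_level):
        return None            # camera never woken: zero visual traffic
    frame = capture_frame()
    if not edge_classifier(frame):
        return None            # visual false alarm filtered at the edge
    return frame               # only confirmed detections reach the sink
```

In this sketch, energy is saved in two places: frames are captured only after a scalar trigger, and captured frames are transmitted only after the on-device model confirms a fire.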

Similar Papers
  • Research Article
  • Cited by 2
  • 10.1155/2022/1623462
Optimization of Dechlorination Experiment Design Using Lightweight Deep Learning Model
  • Jun 25, 2022
  • Computational Intelligence and Neuroscience
  • Jianghua Peng + 1 more

This exploration intends to remove chloride ions in production and daily life, enhance buildings' durability, and protect the natural environment from pollution. The current dechlorination technology is discussed based on the relevant theories, such as the lightweight deep learning (DL) model and chloride ion characteristics. Next, data statistics and comparative analysis methods are used to study the adsorption and desorption performance of dechlorination adsorbents. Finally, the lightweight DL model is introduced into the chloride diffusion prediction experiment of slag powder and fly ash concrete. The results show that, in the study of dechlorination adsorption performance, the chloride ion concentration decreases gradually as adsorption time is extended. However, the chloride ion removal rate increases with increasing temperature. The removal rate of chloride ions in water decreases slowly as the amount of adsorbent increases. Therefore, selecting 2 mol/L sodium hydroxide as the alkali concentration for adsorbent regeneration is the most appropriate. In addition, the regeneration performance of the adsorbent gradually declines with the increase of sodium chloride concentration in the solution. The lightweight DL model is applied to the chloride diffusion prediction experiment of slag powder and fly ash concrete. It is found that when the curing age is 18 days, 90 days, and 180 days, respectively, the error between the lightweight DL model and the experimental results is about 0.2. This shows that the lightweight DL model is feasible for predicting the diffusion of chloride ions. Therefore, this exploration designs and studies the dechlorination experiment based on the lightweight DL model, which provides a new theoretical basis and optimization direction for removing chloride ions in future industry.

  • Research Article
  • Cited by 5
  • 10.1155/2022/4670523
Evaluation of the Physical Education Teaching and Training Efficiency by the Integration of Ideological and Political Courses with Lightweight Deep Learning
  • Jun 11, 2022
  • Computational Intelligence and Neuroscience
  • Shuaiqi Zhang

The purpose is to improve the training effect of physical education (PE) based on the teaching concept of ideological and political courses. The research is supported by the lightweight deep learning (DL) model of the Internet of Things (IoT). Through intelligent recognition and classification of human action and images, it discusses the PE and training scheme based on the lightweight DL model. In addition, through the optimization of the accelerated compression algorithm and the evaluation of the PE and training effect of the Openpose algorithm, an optimization model of the PE and training effect has been successfully established. The results indicate that after 120 iterations of the model, the system recognition accuracy of the convolutional neural network (CNN) algorithm can only be improved to about 75%, while the recognition accuracy of the Openpose algorithm can reach about 85%. Compared with the CNN algorithm under the same number of iterations, the recognition accuracy can be improved by 9.8%. In addition, when the number of nodes in the network layer is 60, the system delay time of the proposed Openpose algorithm is smaller. At this time, the system delay of the algorithm is only 10.8 s. Compared with the CNN algorithm under the same conditions, the proposed algorithm can save at least 1.2 s in system delay time. The advantage of the algorithm is that it can improve the efficiency of physical training and teaching, and this research has important reference significance for the digital and intelligent development of the teaching mode of PE.

  • Research Article
  • Cited by 5
  • 10.70917/ijcisim-2025-0014
Lightweight Deep Learning Models For Edge Devices—A Survey
  • Jan 6, 2025
  • International Journal of Computer Information Systems and Industrial Management Applications
  • Aminu Musa + 5 more

As edge computing gains attention across various domains, the demand for lightweight deep learning models capable of running efficiently on resource-constrained edge devices has surged. This survey investigates the landscape of lightweight deep learning models tailored for edge computing environments. The survey explores various model compression techniques used to design and optimize deep learning models for edge deployment, including model quantization, pruning, and knowledge distillation. Emphasis is placed on strategies to reduce model size, computational complexity, and memory footprint while maintaining satisfactory performance levels. Additionally, the study examines the performance of these techniques on three real-life datasets evaluating lightweight deep learning models, highlighting the importance of balanced datasets representative of edge device deployment scenarios. Furthermore, this survey provides a comprehensive overview of the current state of lightweight deep learning models for edge devices, offering insights into design considerations, optimization techniques, and performance evaluation methodologies. The findings show that most of the compression techniques suffer from performance degradation, proving the existence of a trade-off between compression and performance. Therefore, we propose a hybrid lossless-compressed model combining pruning, quantization, and knowledge distillation to reduce parameters and weights, resulting in a lightweight model. The proposed model is three times smaller than the vanilla CNN model and achieved a state-of-the-art accuracy of 97% after compression, which shows the effectiveness of our approach. These results will serve as a valuable resource for researchers and practitioners aiming to develop efficient and scalable deep learning solutions for edge computing applications.
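As a toy illustration of two of the compression techniques the survey covers, the following sketch applies magnitude pruning and uniform symmetric 8-bit quantization to a plain list of weights. Real pipelines operate on framework tensors and trained models; this version only shows the arithmetic:

```python
# Magnitude pruning: zero out the fraction of weights smallest in
# magnitude. Uniform symmetric quantization: map floats to signed
# integers so that w ~= q * scale. Toy versions for illustration.

def prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights smallest in magnitude."""
    k = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

def quantize(weights, bits=8):
    """Return (int_values, scale) for symmetric `bits`-bit quantization."""
    max_abs = max(abs(w) for w in weights) or 1.0
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = max_abs / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from integers."""
    return [v * scale for v in q]
```

The pruned zeros and 8-bit integers are what make the stored model smaller; the reconstruction error per weight is bounded by half the quantization step.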

  • Research Article
  • Cited by 4
  • 10.11591/ijece.v13i6.pp6904-6912
Compact optimized deep learning model for edge: a review
  • Dec 1, 2023
  • International Journal of Electrical and Computer Engineering (IJECE)
  • Soumyalatha Naveen + 1 more

Most real-time computer vision applications, such as pedestrian detection, augmented reality, and virtual reality, heavily rely on convolutional neural networks (CNN) for real-time decision support. In addition, edge intelligence is becoming necessary for low-latency real-time applications to process the data at the source device. Therefore, processing massive amounts of data impacts memory footprint, prediction time, and energy consumption, essential performance metrics in machine learning based internet of things (IoT) edge clusters. However, deploying deeper, dense, and hefty weighted CNN models on resource-constrained embedded systems and limited edge computing resources, such as memory and battery constraints, poses significant challenges in developing the compact optimized model. Reducing the energy consumption in edge IoT networks is possible by reducing the computation and data transmission between IoT devices and gateway devices. Hence, there is a high demand for energy-efficient deep learning models for deployment on edge devices. Furthermore, recent studies show that smaller compressed models achieve significant performance compared to larger deep-learning models. This review article focuses on state-of-the-art techniques of edge intelligence, and we propose a new research framework for designing a compact optimized deep learning (DL) model deployment on edge devices.

  • Research Article
  • Cited by 5
  • 10.1002/rob.22285
Intelligent classifier for various degrees of coffee roasts using smart multispectral vision system
  • Jan 7, 2024
  • Journal of Field Robotics
  • Ming‐Yi Lin + 2 more

This study proposes an innovative deep learning model for use in a multispectral vision system comprising a complementary metal‐oxide semiconductor image sensor and a spectrometer. To ensure accurate color recognition, the deep learning model includes an embedded adaptive automatic color temperature correction engine. By using this color temperature correction engine, the multispectral vision system can intelligently compensate for lighting and chromatic variations. To evaluate the performance of the system, we created a nine‐dimensional data set using the IT8.7/2 color target. We then used this data set to train the deep learning model. Our deep learning model outperformed other lightweight deep learning models in experiments, making it suitable for deployment on edge devices and embedded systems. We tested the ability of the multispectral vision system to classify adulterated coffee beans into their respective classes. The overall accuracy rate was more than 99.3%, indicating that our proposed multispectral vision system is effective in identifying color differences. Considering its capabilities in agricultural screening, we suggest incorporating our adaptive automatic multispectral vision system into agricultural machines for the realization of Agriculture 4.0.

  • Research Article
  • Cited by 6
  • 10.1155/2022/9066648
Scene Classification in the Environmental Art Design by Using the Lightweight Deep Learning Model under the Background of Big Data
  • Jun 13, 2022
  • Computational Intelligence and Neuroscience
  • Lu Liu

On the basis of scene visual understanding technology, this research aims to further improve the classification efficiency and accuracy of art design scenes. The lightweight deep learning (DL) model based on big data is used as the main method to achieve real-time detection and recognition of multiple targets and classification of the multilabel scene. This research introduces the related foundations of the DL network and the lightweight object detection involved. The data for a multilabel scene classifier are constructed, and the design of the convolutional neural network (CNN) model is described. On public datasets, the effectiveness of the lightweight object detection algorithm is verified to ensure its feasibility in the classification of actual scenes. The simulation results indicate that, compared with the YOLOv3-Tiny model, the improved IRDA-YOLOv3 model reduces the number of parameters by 56.2%, the amount of computation by 46.3%, and the forward computation time of the network by 0.2 ms. This means that the improved IRDA-YOLOv3 network achieves a lightweight design. In the scene classification of complex traffic roads, the classification model of the multilabel scene can predict all kinds of semantic information of a single image, and the classification accuracy for the four scenes is more than 90%. In summary, the discussed classification method based on the lightweight DL model is suitable for complex practical scenes. The constructed lightweight network improves the representational ability of the network and has certain research value for scene classification problems.

  • Research Article
  • Cited by 6
  • 10.48084/etasr.7777
Detection of QR Code-based Cyberattacks using a Lightweight Deep Learning Model
  • Aug 2, 2024
  • Engineering, Technology & Applied Science Research
  • Mousa Sarkhi + 1 more

Traditional intrusion detection systems rely on known patterns and irregularities. This study proposes an approach to reinforce security measures on QR codes used for marketing and identification, investigating the use of a lightweight Deep Learning (DL) model to detect cyberattacks embedded in QR codes. A model is proposed that classifies QR codes into three categories: normal, phishing, and malware. The model achieves high precision and F1 scores for normal and phishing codes (Class 0 and 1), indicating accurate identification. However, the model's recall for malware (Class 2) is lower, suggesting potential missed detections in this category. This stresses the need for further exploration of techniques to improve the detection of malware QR codes. Despite this limitation, the overall accuracy of the model remains impressive at 99%, demonstrating its effectiveness in distinguishing normal and phishing codes from potentially malicious ones.

  • Research Article
  • Cited by 19
  • 10.3390/diagnostics14192225
SMOTE-Based Automated PCOS Prediction Using Lightweight Deep Learning Models
  • Oct 5, 2024
  • Diagnostics
  • Rumman Ahmad + 4 more

Background: The reproductive age of women is particularly vulnerable to the effects of polycystic ovarian syndrome (PCOS). High levels of testosterone and other male hormones are frequent contributors to PCOS. It is believed that miscarriages and ovulation problems are majorly caused by PCOS. A recent study found that 31.3% of Asian women have been afflicted with PCOS. Treating women with life-threatening disorders associated with PCOS requires more research. Prior research has autonomously classified PCOS using a number of different machine learning (ML) techniques. ML-based approaches involve hand-crafted feature extraction and suffer from low performance, which cannot be ignored for the accurate prediction and identification of PCOS. Objective: Hence, predicting PCOS using cutting-edge deep learning methods for automated feature engineering with better performance is the prime focus of this study. Methods: The proposed method suggests three lightweight (LSTM-based, CNN-based, and CNN-LSTM-based) deep learning models, incorporating SMOTE for dataset balancing to obtain a valid performance. Results: The three proposed models offer an accuracy of 92.04%, 96.59%, and 94.31%, an ROC-AUC of 92.0%, 96.6%, and 94.3%, parameter counts of 6689, 297, and 13285, and training times of 67.27 s, 10.02 s, and 18.51 s, respectively. In addition, the DeLong test is performed to compare AUCs and assess the statistical significance of all three models. Among the three models, the SMOTE + CNN model performs better in terms of accuracy, precision, recall, AUC, number of parameters, training time, and DeLong's p-value. Conclusions: Moreover, a performance comparison is also carried out with other state-of-the-art PCOS detection studies and methods, which validates the better performance of the proposed model. Thus, the proposed model provides the greatest performance, which can lead to a reduction in the number of failed pregnancies and help in finding PCOS in the early stages.
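The SMOTE balancing step mentioned in the Methods can be sketched in a few lines: a synthetic minority sample is placed on the segment between a minority sample and one of its nearest minority-class neighbours. This is a toy version for illustration; production code would use a library such as imbalanced-learn:

```python
# Toy SMOTE: interpolate between a minority sample and its nearest
# minority-class neighbour to synthesize a new balanced-class sample.
import random

def nearest_neighbour(x, others):
    """Index of the point in `others` closest to x (squared Euclidean)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(others)), key=lambda i: dist(x, others[i]))

def smote_sample(minority, rng=random):
    """Generate one synthetic sample from a list of minority-class points."""
    i = rng.randrange(len(minority))
    x = minority[i]
    rest = minority[:i] + minority[i + 1:]
    nn = rest[nearest_neighbour(x, rest)]
    gap = rng.random()                  # random position along the segment
    return [xi + gap * (ni - xi) for xi, ni in zip(x, nn)]
```

Because each synthetic point lies on a segment between two real minority samples, it stays inside the convex hull of the minority class rather than being arbitrary noise.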

  • Research Article
  • Cited by 2
  • 10.1016/j.compbiomed.2025.110138
A new lightweight deep learning model optimized with pruning and dynamic quantization to detect freezing gait on wearable devices.
  • Jun 1, 2025
  • Computers in biology and medicine
  • Myung-Kyu Yi + 1 more


  • Research Article
  • Cited by 2
  • 10.13031/ja.15413
Aerial-Based Weed Detection Using Low-Cost and Lightweight Deep Learning Models on an Edge Platform
  • Jan 1, 2023
  • Journal of the ASABE
  • Nitin Rai + 4 more

Highlights: Lightweight deep learning models were trained on an edge device to identify weeds in aerial images. A customized configuration file was set up to train the models. These models were deployed to detect weeds in aerial images and videos (near real-time). CSPMobileNet-v2 and YOLOv4-lite are the recommended models for weed detection using an edge platform. Abstract: Deep learning (DL) techniques have proven to be a successful approach in detecting weeds for site-specific weed management (SSWM). In the past, most of the research work has trained and deployed pre-trained DL models on high-end systems coupled with expensive graphical processing units (GPUs). However, only a limited number of research studies have used DL models on an edge system for aerial-based weed detection. Therefore, while focusing on hardware cost minimization, eight DL models were trained and deployed on an edge device to detect weeds in aerial-image context and videos in this study. Four large models, namely CSPDarkNet-53, DarkNet-53, DenseNet-201, and ResNet-50, along with four lightweight models, CSPMobileNet-v2, YOLOv4-lite, EfficientNet-B0, and DarkNet-Ref, were considered for training a customized DL architecture. Along with trained model performance scores (average precision score, mean average precision (mAP), intersection over union, precision, and recall), other model metrics to assess edge system performance, such as billion floating-point operations per second (BFLOPS), frames per second (FPS), and GPU memory usage, were also estimated. The lightweight CSPMobileNet-v2 and YOLOv4-lite models outperformed the others in detecting weeds in aerial image context. These models were able to achieve mAP scores of 83.2% and 82.2%, delivering 60.9 and 61.1 FPS during near real-time weed detection in aerial videos, respectively. The popular ResNet-50 model achieved a mAP of 79.6%, which was the highest amongst all the large models deployed for weed detection tasks. Based on the results, the two lightweight models, namely CSPMobileNet-v2 and YOLOv4-lite, are recommended, and they can be used on a low-cost edge system to detect weeds in aerial image context with significant accuracy. Keywords: Aerial image, Deep learning, Edge device, Precision agriculture, Weed detection.
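One of the metrics reported above, intersection over union (IoU), has a compact definition for axis-aligned boxes that may help readers unfamiliar with detection metrics. Boxes here are assumed to be (x_min, y_min, x_max, y_max) tuples:

```python
# IoU of two axis-aligned boxes: overlap area divided by union area.

def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold (often 0.5), which is how precision, recall, and mAP are then computed.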

  • Conference Article
  • Cited by 53
  • 10.1109/ipdpsw52791.2021.00128
Performance Evaluation of Deep Learning Compilers for Edge Inference
  • Jun 1, 2021
  • Gaurav Verma + 3 more

Recently, edge computing has received considerable attention as a promising means to provide Deep Learning (DL) based services. However, due to the limited computation capability of the data processing units (such as CPUs, GPUs, and specialized accelerators) in edge devices, using the devices’ limited resources efficiently is a challenge that affects deep learning-based analysis services. This has led to the development of several inference compilers such as TensorRT, TensorFlow Lite, Relay, and TVM, which optimize DL inference models specifically for edge devices. These compilers operate on the standard DL models available for inferencing in various frameworks, e.g., PyTorch, TensorFlow, Caffe, and PaddlePaddle, and transform them into a corresponding lightweight model. TensorFlow Lite and TensorRT are considered state-of-the-art inference compilers and encompass most of the compiler optimization techniques that have been proposed for edge computing. This paper presents a detailed performance study of TensorFlow Lite (TFLite) and TensorFlow TensorRT (TF-TRT) using commonly employed DL models for edge devices on varying hardware platforms. The work compares throughput, latency, and power consumption. We find that the integrated TF-TRT consistently performs better at high-precision floating point on different DL architectures, especially with GPUs using tensor cores. However, it loses its edge in model compression to TFLite at low precision. TFLite, which is primarily designed for mobile applications, performs better with lightweight DL models than with deep neural network-based models. To the best of our knowledge, this is the first detailed performance comparison of the TF-TRT and TFLite inference compilers.

  • Research Article
  • Cited by 5
  • 10.3390/fire8030085
Lightweight Deep Learning Model for Fire Classification in Tunnels
  • Feb 20, 2025
  • Fire
  • Shakhnoza Muksimova + 3 more

Tunnel fires pose a severe threat to human safety and infrastructure, necessitating the development of advanced and efficient fire detection systems. This paper presents a novel lightweight deep learning (DL) model specifically designed for real-time fire classification in tunnel environments. This model integrates MobileNetV3 for spatial feature extraction, Temporal Convolutional Networks (TCNs) for temporal sequence analysis, and advanced attention mechanisms, including Convolutional Block Attention Modules (CBAMs) and Squeeze-and-Excitation (SE) blocks, to prioritize critical features such as flames and smoke patterns while suppressing irrelevant noise. The model is trained on a newly prepared custom dataset containing real tunnel fire incidents. This approach enhances the model generalization capabilities, enabling it to handle diverse fire scenarios, including those with low visibility, high smoke density, and variable ventilation conditions. Deployment optimizations, such as quantization and layer fusion, ensure computational efficiency, achieving an average inference time of 12 ms/frame, making it suitable for resource-constrained environments like IoT and edge devices. The experimental results demonstrate that the proposed model achieves an accuracy of 96.5%, a precision of 95.7%, and a recall of 97.2%, significantly outperforming state-of-the-art (SOTA) models such as ResNet50 and YOLOv5 in both accuracy and real-time performance. Robustness tests under challenging conditions validate model reliability and adaptability, marking it as a critical advancement in tunnel fire detection systems. This study provides valuable insights into the design and deployment of efficient fire classification systems for safety-critical applications. The proposed model offers a scalable, high-performance solution for tunnel fire monitoring and establishes a benchmark for future research in real-time video-based classification under complex environmental conditions.
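The Squeeze-and-Excitation (SE) blocks mentioned above can be illustrated conceptually: each channel is "squeezed" to one descriptor by global average pooling, a gate maps the descriptor to a weight in (0, 1), and the channel is rescaled. Real SE blocks learn the gate with two small fully connected layers; the fixed sigmoid gate below is an assumption made purely to show the data flow:

```python
# Conceptual SE data flow on a list-of-channels feature map, where each
# channel is a 2-D list of floats. The sigmoid gate is a stand-in for
# the learned two-layer gate of a real SE block.
import math

def squeeze(channel):
    """Global average pool one channel."""
    flat = [v for row in channel for v in row]
    return sum(flat) / len(flat)

def excite(descriptor):
    """Stand-in gate: sigmoid of the channel descriptor."""
    return 1.0 / (1.0 + math.exp(-descriptor))

def se_rescale(feature_map):
    """Rescale each channel by its gate weight."""
    out = []
    for channel in feature_map:
        w = excite(squeeze(channel))
        out.append([[v * w for v in row] for row in channel])
    return out
```

The effect is channel-wise attention: channels whose pooled response the gate scores highly pass through nearly unchanged, while low-scoring channels are attenuated.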

  • Research Article
  • Cited by 14
  • 10.3390/app15137533
Key Considerations for Real-Time Object Recognition on Edge Computing Devices
  • Jul 4, 2025
  • Applied Sciences
  • Nico Surantha + 1 more

The rapid growth of the Internet of Things (IoT) and smart devices has led to an increasing demand for real-time data processing at the edge of networks, closer to the source of data generation. This review paper introduces how artificial intelligence (AI) can be integrated with edge computing to enable efficient and scalable object recognition applications. It covers the key considerations of employing deep learning on edge computing devices, such as selecting edge devices, deep learning frameworks, lightweight deep learning models, hardware optimization, and performance metrics. This article also presents an example application: real-time power transmission line detection using edge computing devices. The evaluation results show the significance of implementing lightweight models and model compression techniques such as a quantized Tiny YOLOv7. They also show the hardware performance of some edge devices, such as the Raspberry Pi and Jetson platforms. Through practical examples, readers will gain insights into designing and implementing AI-powered edge solutions for various object recognition use cases, including smart surveillance, autonomous vehicles, and industrial automation. The review concludes by addressing emerging trends, such as federated learning and hardware accelerators, which are set to shape the future of AI on edge computing for object recognition.

  • Research Article
  • Cited by 32
  • 10.1109/tbcas.2023.3281596
Flexible Gel-Free Multi-Modal Wireless Sensors With Edge Deep Learning for Detecting and Alerting Freezing of Gait Symptom.
  • Oct 1, 2023
  • IEEE transactions on biomedical circuits and systems
  • Yuhan Hou + 4 more

Freezing of gait (FoG) is a debilitating symptom of Parkinson's disease (PD). This work develops flexible wearable sensors that can detect FoG and alert patients and companions to help prevent falls. FoG is detected on the sensors using a deep learning (DL) model with multi-modal sensory inputs collected from distributed wireless sensors. Two types of wireless sensors are developed, including: 1) a C-shape central node placed around the patient's ears, which collects electroencephalogram (EEG), detects FoG using an on-device DL model, and generates auditory alerts when FoG is detected; 2) a stretchable patch-type sensor attached to the patient's legs, which collects electromyography (EMG) and movement information from accelerometers. The patch-type sensors wirelessly send collected data to the central node through low-power ultra-wideband (UWB) transceivers. All sensors are fabricated on flexible printed circuit boards. Adhesive gel-free acetylene carbon black and polydimethylsiloxane electrodes are fabricated on the flexible substrate to allow conformal wear over the long term. Custom integrated circuits (IC) are developed in 180 nm CMOS technology and used in both types of sensors for signal acquisition, digitization, and wireless communication. A novel lightweight DL model is trained using multi-modal sensory data. The inference of the DL model is performed on a low-power microcontroller in the central node. The DL model achieves a high detection sensitivity of 0.81 and a specificity of 0.88. The developed wearable sensors are ready for clinical experiments and hold great promise in improving the quality of life of patients with PD. The proposed design methodologies can be used in wearable medical devices for the monitoring and treatment of a wide range of neurodegenerative diseases.
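The sensitivity (0.81) and specificity (0.88) quoted above follow the standard confusion-matrix definitions, which can be stated as two one-line functions (the FoG-specific comments are our gloss, not the authors'):

```python
# Confusion-matrix metrics as used for the FoG detector above.

def sensitivity(tp, fn):
    """True-positive rate: fraction of FoG episodes correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of normal gait correctly passed."""
    return tn / (tn + fp)
```

For an alerting device the trade-off matters: high sensitivity limits missed FoG episodes (falls), while high specificity limits false alarms that would desensitize the patient.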
