Abstract

With rapid industrialization and technological advancement, innovative engineering solutions that are cost-effective, faster, and easier to implement are essential. One area of concern is the rising number of accidents caused by gas leaks in coal mines, chemical industries, home appliances, and similar settings. In this paper we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, and therefore evade normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in many real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two sensing devices: a 7-semiconductor gas sensor array and a thermal camera. We apply the early fusion method of multimodal AI: the network architecture consists of a feature extraction module for each modality, whose outputs are combined through a merge layer followed by a dense layer that produces a single output identifying the gas. The fused model achieved a testing accuracy of 96%, compared with individual model accuracies of 82% (gas sensor data using an LSTM) and 93% (thermal image data using a CNN). These results demonstrate that fusing multiple sensors and modalities outperforms any single sensor.
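As a rough illustration of this architecture, the sketch below builds a CNN branch for the thermal images and an LSTM branch for the 7-sensor sequences, concatenates their features in a merge layer, and classifies the four gases with a dense output layer. It assumes TensorFlow/Keras; the image resolution, sequence length, layer widths, and names such as fused_model are illustrative assumptions, not details given in the paper.

    from tensorflow.keras import layers, Model

    NUM_CLASSES = 4            # four gas classes (per the abstract)
    NUM_SENSORS = 7            # 7-semiconductor gas sensor array
    SEQ_LEN = 50               # assumed time steps per sensor window
    IMG_SHAPE = (120, 160, 1)  # assumed grayscale thermal frame size

    # CNN branch: feature extraction from a thermal image
    img_in = layers.Input(shape=IMG_SHAPE, name="thermal_image")
    x = layers.Conv2D(32, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    img_feat = layers.Flatten()(x)

    # LSTM branch: feature extraction from the gas sensor sequence
    seq_in = layers.Input(shape=(SEQ_LEN, NUM_SENSORS), name="gas_sensors")
    seq_feat = layers.LSTM(64)(seq_in)

    # Early fusion: merge both feature vectors, then a dense classification head
    merged = layers.Concatenate()([img_feat, seq_feat])
    output = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

    fused_model = Model(inputs=[img_in, seq_in], outputs=output)
    fused_model.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])

Training would then pass both modalities together, e.g. fused_model.fit([thermal_images, sensor_sequences], labels), so the classifier always sees a paired thermal frame and sensor window for each sample.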

Highlights

  • Engineering innovation refers to solving social and industrial problems through the use of innovative engineering technologies and approaches

  • In our proposed framework, we employ early fusion of features extracted by a Long Short-Term Memory (LSTM) model from the gas sensor data and by a Convolutional Neural Network (CNN) model from the thermal image data

  • The CNN architecture extracts features from the thermal images, whereas the LSTM framework extracts features from the sequences of gas sensor measurements (a sketch of both branches follows this list)
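To make these two per-modality extractors concrete, the sketch below (again assuming TensorFlow/Keras, with illustrative shapes, layer widths, and hypothetical names such as lstm_baseline) gives each branch its own softmax head so it can be trained alone, which corresponds to the single-modality LSTM and CNN models whose accuracies are reported in the abstract; dropping the heads exposes the feature vectors that the early-fusion model concatenates.

    from tensorflow.keras import layers, Model

    NUM_CLASSES = 4
    SEQ_LEN, NUM_SENSORS = 50, 7     # assumed window length x 7 sensors
    IMG_SHAPE = (120, 160, 1)        # assumed grayscale thermal frame size

    # LSTM over the gas sensor sequences; seq_feat is the extracted feature vector
    seq_in = layers.Input(shape=(SEQ_LEN, NUM_SENSORS))
    seq_feat = layers.LSTM(64)(seq_in)
    lstm_baseline = Model(seq_in,
                          layers.Dense(NUM_CLASSES, activation="softmax")(seq_feat))

    # CNN over the thermal images; img_feat is the extracted feature vector
    img_in = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(32, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    img_feat = layers.GlobalAveragePooling2D()(x)
    cnn_baseline = Model(img_in,
                         layers.Dense(NUM_CLASSES, activation="softmax")(img_feat))

    # Feature-extractor views of the same branches, reused for early fusion
    lstm_features = Model(seq_in, seq_feat)
    cnn_features = Model(img_in, img_feat)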

Summary

Introduction

Engineering innovation refers to solving social and industrial problems through the use of innovative engineering technologies and approaches. This paper presents an AI-based methodology that employs Deep Learning (DL) frameworks to fuse multimodal data from multiple sources in order to detect and classify gases. The proposed method can detect a particular gas in a mixed-gas environment. It does not require a manual operator and is more robust because it incorporates measurements from multiple gas sensors and thermal imaging cameras. An innovative multimodal AI-based framework that fuses the two modalities for robust and more reliable gas detection is proposed, and early fusion of the outputs of the CNN and LSTM deep learning architectures is demonstrated for detecting and identifying leaked gases.

Theoretical Background
Methodologies for Multimodal Data Fusion
Convolutional Neural Network
Recurrent Neural Network
Framework for System Design and Experimentation
Gas Sensors
Thermal Camera
Data Collection and Preprocessing
Data Preprocessing
Feature Extraction from Thermal Images Using CNN
Feature Extraction from Gas Sensor Measurements Using LSTM
Multimodal Fusion of Image and Sequence Data
Results and Discussion
Conclusions