Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/26998
Title: Gas detection and identification using multimodal artificial intelligence based sensor fusion
Authors: Narkhede, P
Walambe, R
Mandaokar, S
Chandel, P
Kotecha, K
Ghinea, G
Keywords: convolutional neural network;early fusion;gas detection;long-short term memory;multimodal data
Issue Date: 9-Jan-2021
Publisher: MDPI
Citation: Narkhede, P. et al. (2021) 'Gas detection and identification using multimodal artificial intelligence based sensor fusion', Applied System Innovation, 4 (1), 3, pp. 1 - 14. doi: 10.3390/asi4010003.
Abstract: Copyright © 2021 by the authors. With rapid industrialization and technological advancements, innovative engineering technologies which are cost-effective, faster, and easier to implement are essential. One such area of concern is the rising number of accidents happening due to gas leaks at coal mines, chemical industries, home appliances, etc. In this paper, we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging our normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in several real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a 7-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied. The network architecture consists of a feature extraction module for each individual modality, which is then fused using a merged layer followed by a dense layer, which provides a single output for identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (based on gas sensor data using LSTM) and 93% (based on thermal image data using a CNN model). Results demonstrate that the fusion of multiple sensors and modalities outperforms the outcome of a single sensor.
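The early-fusion pipeline the abstract describes (per-modality feature extraction, a merged layer, then a dense classification layer) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the linear feature extractors stand in for the paper's LSTM (gas sensor) and CNN (thermal image) branches, and the sequence length, feature sizes, and weights are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 4       # four gas classes, as in the paper
SENSOR_CHANNELS = 7   # 7-semiconductor gas sensor array
SEQ_LEN = 10          # hypothetical number of time steps per sensor sample
IMG_FEATURES = 64     # hypothetical flattened thermal-image feature size
BRANCH_DIM = 16       # hypothetical per-branch feature dimension

def extract_sensor_features(seq, w):
    """Stand-in for the LSTM branch: mean-pool over time, then a linear map."""
    pooled = seq.mean(axis=0)            # (SENSOR_CHANNELS,)
    return np.tanh(pooled @ w)           # (BRANCH_DIM,)

def extract_image_features(img_vec, w):
    """Stand-in for the CNN branch: a single linear map with ReLU."""
    return np.maximum(img_vec @ w, 0.0)  # (BRANCH_DIM,)

def early_fusion_predict(seq, img_vec, params):
    """Fuse both feature vectors (merged layer), then classify (dense layer)."""
    w_s, w_i, w_out = params
    fused = np.concatenate([extract_sensor_features(seq, w_s),
                            extract_image_features(img_vec, w_i)])
    logits = fused @ w_out               # one output per gas class
    e = np.exp(logits - logits.max())
    return e / e.sum()                   # softmax probabilities

# Random placeholder weights and one synthetic sample of each modality.
params = (rng.normal(size=(SENSOR_CHANNELS, BRANCH_DIM)),
          rng.normal(size=(IMG_FEATURES, BRANCH_DIM)),
          rng.normal(size=(2 * BRANCH_DIM, NUM_CLASSES)))
seq = rng.normal(size=(SEQ_LEN, SENSOR_CHANNELS))
img = rng.normal(size=(IMG_FEATURES,))
probs = early_fusion_predict(seq, img, params)
```

The key design point is that fusion happens at the feature level: both modalities are reduced to fixed-size vectors, concatenated, and only then passed through the shared classifier, so the final decision can weigh evidence from both sensors jointly.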
Description: Data Availability Statement: The data presented in this study are available on request from the corresponding authors.
URI: https://bura.brunel.ac.uk/handle/2438/26998
DOI: https://doi.org/10.3390/asi4010003
Other Identifiers: ORCID iDs: Parag Narkhede https://orcid.org/0000-0003-1836-4438; Rahee Walambe https://orcid.org/0000-0003-1745-5231; Ketan Kotecha https://orcid.org/0000-0003-2653-3780; George Ghinea https://orcid.org/0000-0003-2578-5580.
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Size: 3.5 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.