Abstract

With advances in circuit design and sensing technology, the simultaneous acquisition of data from a large number of Internet of Things (IoT) sensors to enable more accurate inferences has become mainstream. In this work, we propose a novel convolutional neural network (CNN) model for the fusion of multimodal and multiresolution data obtained from several sensors. The proposed model enables the fusion of multiresolution sensor data without resorting to padding/resampling to correct for differences in sampling frequency, even when carrying out temporal inferences such as high-resolution event detection. The performance of the proposed model is evaluated for sleep apnea event detection by fusing three different sensor signals obtained from the St. Vincent's University Hospital / University College Dublin sleep apnea database. The proposed model is generalizable, as demonstrated by incremental performance improvements proportional to the number of sensors used for fusion. A selective dropout technique is used to prevent overfitting of the model to any specific high-resolution input and to increase the robustness of fusion to signal corruption from any sensor source. A fusion model with electrocardiogram (ECG), peripheral oxygen saturation (SpO2), and abdominal movement signals achieved an accuracy of 99.72% and a sensitivity of 98.98%. The energy per classification of the proposed fusion model was estimated to be approximately 5.61 μJ for on-chip implementation. The feasibility of pruning to reduce the complexity of the fusion models was also studied.
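As a concrete illustration of the fusion approach described above, the following is a minimal PyTorch-style sketch, assuming two signals sampled over the same 30 s epoch at different rates (e.g., ECG at 128 Hz and SpO2 at 8 Hz): each signal passes through its own 1D-CNN branch at its native sampling rate, and adaptive pooling brings both branches to a common feature length before concatenation, so no padding or resampling of the raw signals is needed. All layer sizes, channel counts, and sampling rates here are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class MultiresolutionFusionCNN(nn.Module):
    """Hypothetical two-branch fusion model: each branch consumes one
    signal at its native sampling rate; features are fused after pooling."""

    def __init__(self, num_classes=2, feature_len=32):
        super().__init__()
        # Branch for a high-rate signal (e.g., ECG at 128 Hz -> 3840 samples/epoch).
        self.ecg_branch = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(feature_len),  # fixed-length branch features
        )
        # Branch for a low-rate signal (e.g., SpO2 at 8 Hz -> 240 samples/epoch).
        self.spo2_branch = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(feature_len),
        )
        self.classifier = nn.Linear(2 * 16 * feature_len, num_classes)

    def forward(self, ecg, spo2):
        # ecg: (batch, 1, 3840) and spo2: (batch, 1, 240); neither input
        # is padded or resampled to match the other.
        f1 = self.ecg_branch(ecg).flatten(1)
        f2 = self.spo2_branch(spo2).flatten(1)
        return self.classifier(torch.cat([f1, f2], dim=1))

model = MultiresolutionFusionCNN()
logits = model(torch.randn(4, 1, 3840), torch.randn(4, 1, 240))  # (4, 2)
```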

Highlights

  • Fusion of data obtained from multiple sensors can improve detection performance compared to using data from a single sensor

  • We propose a 1D-convolutional neural network (CNN) model for the fusion of multimodal and multiresolution signals to capture temporal information, such as event detection, without resorting to resampling of the individual signals

  • We present an experimental study in which selective dropout of the features obtained from the sensor with the higher sampling rate is used to prevent the fusion model from overfitting to those features (a minimal sketch follows this list)
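A minimal sketch of the signal-based selective dropout idea: during training, the feature vector from the higher-sampling-rate branch is zeroed out per sample with some probability, so the classifier cannot rely on that sensor alone. The function name `selective_dropout` and the probability value are illustrative assumptions, not the authors' implementation.

```python
import torch

def selective_dropout(high_rate_features, p=0.2, training=True):
    """Randomly drop the entire high-rate feature vector for each sample."""
    if not training or p == 0.0:
        return high_rate_features
    # One keep/drop decision per sample in the batch (shape: batch x 1).
    keep = (torch.rand(high_rate_features.shape[0], 1,
                       device=high_rate_features.device) > p).float()
    # Inverted-dropout scaling keeps the expected activation unchanged.
    return high_rate_features * keep / (1.0 - p)
```

In a two-branch fusion model, this would be applied to the flattened features of the higher-sampling-rate branch just before concatenation.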


Summary

INTRODUCTION

Fusion of data obtained from multiple sensors can improve detection performance compared to using data from a single sensor. In earlier approaches, the data were resampled/padded so that the input to the DCNN would be uniform, leading to models that do not address issues such as overfitting to the signals with the larger sampling frequency. Traditional time-series feature-based fusion algorithms are likewise not suitable for multiresolution fusion tasks. This leads to the contributions of this work: 1) the development of a 1D-CNN based fusion framework for the data-driven fusion of multisensor, multimodal data at different sampling frequencies for temporal inferences, without resorting to padding or resampling, using a signal-based selective dropout method that ensures the fusion model does not overfit to the signals with higher sampling frequencies (or to features from branches with more samples in the input), which is applicable to biomedical Internet of Things (IoT) sensors. Consider an event classification task on multisensor time-series data obtained at different sampling frequencies, as in the sketch below.
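A back-of-the-envelope check of why branch-wise pooling removes the need for resampling, assuming illustrative polysomnography settings (128 Hz ECG, 8 Hz SpO2, 30 s epochs) rather than the paper's exact configuration:

```python
# Sample counts per 30 s epoch at each (assumed) sampling rate.
epoch_s = 30
rates_hz = {"ECG": 128, "SpO2": 8}
samples = {name: rate * epoch_s for name, rate in rates_hz.items()}
print(samples)  # {'ECG': 3840, 'SpO2': 240}

# Pooling the ECG branch by a factor of 16 aligns both branches at
# 240 feature steps (one per 0.125 s) without resampling either signal.
print(samples["ECG"] // 16, samples["SpO2"])  # 240 240
```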

Data Driven Fusion Approach
Signal Based Selective Dropout
APPLICATION TO SLEEP APNEA DETECTION
Dataset and Models
RESULTS
Performance Evaluation With Noisy Data
Generalization to Multiple Sensors
Fusion Algorithm With Abdominal Signal
CONCLUSION