Abstract

Multi-sensor fusion refers to methods for combining information from several (possibly heterogeneous) sensors, so that one sensor can compensate for the weaknesses of others, or so that the overall accuracy or reliability of a decision-making process is improved. The combined use of several sensors has been so successful that many authors have proposed variants of fusion methods, to the point that it is now hard to tell which of them is best for a given set of sensors and a given application context. To address the issue of choosing an adequate fusion method, we recently proposed a data-driven, machine-learning approach able to predict the best merging strategy. This approach builds a meta-data set with the statistical signatures extracted from the data sets of a particular domain, from which we train a prediction model. However, that work was restricted to the recognition of human activities. In this paper, we extend our previous work to other, very different contexts, such as gas detection and grammatical facial expression identification, in order to test its generality; the required extensions of the method are also presented. Our experimental results show that the extended model predicts the best fusion method well for a given data set, allowing us to claim a broad generality for our sensor fusion approach.

Highlights

  • As fixed and wearable sensors become increasingly pervasive in settings such as healthcare, where reliability and accuracy are critical, several sensors are frequently used in combination to increase overall performance

  • Our contribution is that we proposed and experimentally validated a completely new approach for finding the best fusion method, in which we construct a meta-data set where each row corresponds to the statistical signature of one source data set, and we train a supervised classifier with this meta-data set to predict the best fusion method (a sketch follows this list)

  • To the best of our knowledge, this approach has never been proposed before, and for good reason: when we map a whole source data set to only one row of the meta-data set, in order to train a classifier, we need many rows—so, many source data sets; in our experiments, this is on the order of hundreds
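
The following is a minimal, hypothetical sketch of the meta-learning idea described in these highlights: each source data set is reduced to one row of summary statistics (its signature), labelled with the fusion method that performed best on it, and a standard classifier is trained on the resulting meta-data set. The function names, the particular statistics, and the random stand-in data sets are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

def statistical_signature(X, y):
    """Map one source data set (features X, labels y) to a single meta-row."""
    return np.array([
        X.shape[0],                          # number of instances
        X.shape[1],                          # number of features
        len(np.unique(y)),                   # number of classes
        np.mean(X), np.std(X),               # global location and spread
        np.mean(stats.skew(X, axis=0)),      # average per-feature skewness
        np.mean(stats.kurtosis(X, axis=0)),  # average per-feature kurtosis
    ])

# Stand-ins for the hundreds of source data sets mentioned above; in practice
# the label of each meta-row would be the fusion method that won a
# cross-validated comparison on that data set.
rng = np.random.default_rng(0)
source_datasets = [(rng.normal(size=(100, 8)), rng.integers(0, 3, 100))
                   for _ in range(200)]
meta_X = np.vstack([statistical_signature(X, y) for X, y in source_datasets])
meta_y = rng.choice(["voting", "stacking", "adaboost"], size=len(source_datasets))

# Train the meta-classifier on the meta-data set.
meta_model = RandomForestClassifier(random_state=0).fit(meta_X, meta_y)

# For a new data set, compute its signature and predict a fusion method.
X_new, y_new = rng.normal(size=(150, 8)), rng.integers(0, 3, 150)
print(meta_model.predict(statistical_signature(X_new, y_new).reshape(1, -1)))
```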


Introduction

As fixed and wearable sensors become increasingly pervasive in settings such as healthcare, where reliability and accuracy are critical, several sensors are frequently used in combination to increase overall performance. Sensor combination aims to reach better performance than a single sensor [2] because it can improve the signal-to-noise ratio, decrease uncertainty and ambiguity, and increase reliability, robustness, resolution, accuracy, and other desirable properties [3]. We restrict our attention to scenarios where the sensors are digital or their signal is converted to digital data, so that the raw numeric stream can be further processed by algorithms, such as feature extraction. In data-driven methods, the features extracted from the raw sensor data are fed to decision-making algorithms, such as classifiers [6]. Even in the restricted context of digital information integration for decision processes, many fusion methods have been developed, such as Multi-view stacking [7], AdaBoost [8], and Voting [9], to mention a few.
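
As an illustration of how two of the fusion strategies named above differ, the following sketch assumes each sensor contributes its own block of feature columns; the data are synthetic and the per-view models are arbitrary choices, so this is only a schematic comparison, not the paper's experimental setup. A full multi-view stacking implementation would use out-of-fold predictions for the meta-learner to avoid leakage, which is omitted here for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                    # 3 sensors x 4 features each
y = (X[:, 0] + X[:, 4] + X[:, 8] > 0).astype(int)
views = [slice(0, 4), slice(4, 8), slice(8, 12)]  # feature columns per sensor
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One base model per sensor view.
per_view = [RandomForestClassifier(random_state=0).fit(X_tr[:, v], y_tr)
            for v in views]

# Voting: the final label is the majority vote of the per-view predictions.
votes = np.stack([m.predict(X_te[:, v]) for m, v in zip(per_view, views)])
voted = (votes.mean(axis=0) > 0.5).astype(int)

# Multi-view stacking: a meta-learner combines the per-view class probabilities.
meta_tr = np.hstack([m.predict_proba(X_tr[:, v]) for m, v in zip(per_view, views)])
meta_te = np.hstack([m.predict_proba(X_te[:, v]) for m, v in zip(per_view, views)])
stacker = LogisticRegression().fit(meta_tr, y_tr)

print("voting accuracy:  ", (voted == y_te).mean())
print("stacking accuracy:", stacker.score(meta_te, y_te))
```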
