Abstract
This paper compares the three levels of data fusion with the goal of determining the optimal level for multi-sensor human activity data. Using a data processing pipeline, gyroscope and accelerometer data were fused at the sensor level, feature level, and decision level. At each level of fusion, four different techniques were applied with varying degrees of success. The analysis was performed on four publicly available human activity datasets, using four well-known machine learning classifiers to validate the results. Decision-level fusion (Acc = 0.7443 ± 0.0850) outperformed the other two levels in accuracy, sensor level (Acc = 0.5934 ± 0.1110) and feature level (Acc = 0.6742 ± 0.0053), but its processing time and the computational power required for training and classification were far greater than is practical for a HAR system. The Kalman filter appears to be the more efficient method, since it exhibited both good accuracy (Acc = 0.7536 ± 0.1566) and short processing time (time = 61.71 ms ± 63.85), properties that play a large role in real-time applications using wearable devices. The results of this study also serve as baseline information in the HAR literature against which future data fusion methods can be compared.