Abstract

Gait activity classification from single-modality data, e.g. acquired by separate vision, pressure, sound, or inertial measurements, can be improved by complementary multi-modality fusion, which captures a larger set of distinctive gait activity features. We demonstrate feature-level sensor fusion of spatio-temporal data obtained from a set of 116 collaborative floor sensors, which sample the ground reaction force in space and time, and from ambulatory inertial sensors at three positions on the human body. Principal Component Analysis and Canonical Correlation Analysis are used for automatic feature extraction. Fusion at the feature level balances the otherwise disproportionate numbers of inputs from the two modalities, while reducing the overall number of inputs for classification without substantially degrading the information content. Improved classification is achieved with K-Nearest Neighbor and kernel Support Vector Machine classifiers, yielding F-scores of 0.95 and 0.94, respectively.
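To make the pipeline described above concrete, the following is a minimal sketch of feature-level fusion with PCA and CCA followed by KNN and kernel SVM classification. All data shapes, component counts, and classifier settings are illustrative assumptions, not the paper's actual configuration; only the 116 floor-sensor channels and three body-worn inertial sensors come from the abstract.

```python
# Sketch: feature-level fusion of two gait modalities (assumed dimensions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 600                                   # number of gait samples (placeholder)
X_floor = rng.normal(size=(n, 116))       # 116 floor-sensor GRF channels
X_imu = rng.normal(size=(n, 18))          # 3 inertial sensors x 6 axes (assumed)
y = rng.integers(0, 4, size=n)            # gait activity labels (assumed 4 classes)

# Step 1: PCA reduces each modality independently, so the many floor-sensor
# inputs no longer dominate the far fewer inertial inputs.
Zf = PCA(n_components=10).fit_transform(X_floor)
Zi = PCA(n_components=10).fit_transform(X_imu)

# Step 2: CCA projects both views onto maximally correlated components;
# concatenating the projections gives the fused feature vector.
cca = CCA(n_components=5).fit(Zf, Zi)
Cf, Ci = cca.transform(Zf, Zi)
X_fused = np.hstack([Cf, Ci])

# Step 3: classify the fused features with KNN and an RBF-kernel SVM.
Xtr, Xte, ytr, yte = train_test_split(X_fused, y, random_state=0)
for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf")):
    clf.fit(Xtr, ytr)
    print(type(clf).__name__, f1_score(yte, clf.predict(Xte), average="weighted"))
```

Reducing each modality with PCA before CCA keeps the two input blocks comparable in size, which mirrors the balancing role that feature-level fusion plays in the abstract.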
