Abstract

In locomotion tasks such as walking or stair ascent, leg joints produce mechanical energy with task-specific kinematic and kinetic patterns. Consequently, locomotion assistive devices should be active and adapt to the task being executed, and these tasks should be detected early enough to guarantee smooth transitions by the device controller. Wearable vision sensors can predict upcoming tasks by detecting locomotion affordances in the environment. We implemented such a vision-based terrain detection system for flat ground, steps, and ramps, using a depth camera mounted on the user’s chest and a machine learning classifier. A validation study was conducted with eight participants moving through indoor and outdoor paths that combined a rich set of terrains, under two clearance conditions: clear, and occluded by another walker moving ahead. Our method can predict locomotion modes up to three steps in front of the user and can estimate the geometrical features of the terrain (i.e., step height for stairs and slope inclination for ramps and ground). Our system achieved more than 95% accuracy for all locomotion modes on the first upcoming step in the clear-path condition. The paper further reports how these results degrade for subsequent steps ahead of the user, or with partial occlusion.
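To make the classification idea concrete, the following is a minimal illustrative sketch, not the paper's actual method: it labels a forward elevation profile (such as one extracted from a chest-mounted depth image along the walking direction) as flat ground, stairs, or a ramp, and returns the corresponding geometric feature (step height or slope). The function name, sampling spacing, and thresholds are all assumptions chosen for demonstration.

```python
# Toy terrain classifier on a 1-D elevation profile ahead of the user.
# All names and thresholds are hypothetical, for illustration only.

def classify_terrain(profile, dx=0.05, step_thresh=0.08, slope_thresh=0.06):
    """Label a forward elevation profile as 'flat', 'stairs', or 'ramp'.

    profile: heights (m) sampled every dx metres in front of the user.
    step_thresh: minimum abrupt height change (m) to call a step edge.
    slope_thresh: minimum rise-over-run ratio to call a ramp.
    Returns (label, geometric feature): step height for stairs,
    inclination ratio for ramps and flat ground.
    """
    diffs = [b - a for a, b in zip(profile, profile[1:])]
    max_jump = max(abs(d) for d in diffs)
    overall_slope = (profile[-1] - profile[0]) / (dx * (len(profile) - 1))
    if max_jump >= step_thresh:
        # Step height estimated as the largest discrete rise between samples.
        return "stairs", max_jump
    if abs(overall_slope) >= slope_thresh:
        # Gradual rise with no abrupt edge: treat as a ramp.
        return "ramp", overall_slope
    return "flat", overall_slope

# Example: a 16 cm step edge a few samples ahead of the user.
print(classify_terrain([0.0, 0.0, 0.0, 0.16, 0.16, 0.16]))  # → ('stairs', 0.16)
```

A learned classifier as used in the paper would replace these hand-set thresholds with decision boundaries fitted to labelled depth data, but the input/output contract (terrain label plus geometric feature) is the same.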
