Abstract
Recently, gait has attracted attention as a practical biometric for devices that naturally sense walking patterns. In the present study, we explored the feasibility of using a multimodal smart insole for identity recognition. We used sensor insoles that we designed and implemented to collect kinetic and kinematic data from 59 participants who walked outdoors. We then evaluated the performance of four neural network architectures: a baseline convolutional neural network (CNN), a CNN with a multi-stage feature extractor, a CNN with an extreme learning machine classifier using sensor-level fusion, and a CNN with an extreme learning machine classifier using feature-level fusion. The networks were trained on segmented insole data with 0%, 50%, and 70% segmentation overlap, respectively. For 70% segmentation overlap and both-side data, the four networks achieved mean accuracies of 72.8% ± 0.038, 80.9% ± 0.036, 80.1% ± 0.021, and 93.3% ± 0.009, respectively. The results suggest that multimodal sensor-enabled footwear could serve biometric purposes in the next generation of body sensor networks.
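The segmentation step described above (fixed-length windows with 0%, 50%, or 70% overlap between consecutive windows) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and window length are assumptions.

```python
def segment(signal, window, overlap):
    """Split a 1-D sample sequence into fixed-length windows.

    `overlap` is the fraction of samples shared by consecutive
    windows (0.0, 0.5, or 0.7 for the paper's three settings);
    the stride between window starts is window * (1 - overlap).
    """
    step = max(1, int(window * (1 - overlap)))
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

# Example: 10 samples, window of 4, 50% overlap -> stride of 2
windows = segment(list(range(10)), window=4, overlap=0.5)
# windows[0] == [0, 1, 2, 3], windows[1] == [2, 3, 4, 5], ...
```

Higher overlap yields more (partially redundant) training segments from the same recording, which is one common way to enlarge the effective training set for gait models.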
Highlights
Personal wearable devices play diverse roles in daily life, enabling communication, entertainment, sports activity tracking, and vital signs monitoring
Because human-body-generated signals are unique to individuals and can be collected by wearables, they are extensively studied as candidate biometric traits for wearable devices
Motivated by the expectation that sensor footwear will become prevalent in the foreseeable future, we explore the feasibility of person recognition based on data acquired from a multimodal sensor insole that we developed, with intelligent processing by a 1D convolutional neural network (CNN)
Summary
Personal wearable devices play diverse roles in daily life, enabling communication, entertainment, sports activity tracking, and vital signs monitoring. As they handle personal data, security is a primary concern. Most successful studies that captured continuous gait information in natural settings relied on inertial sensors integrated into mobile phones. The performance of mobile-phone-based gait recognition is limited by the small number and restricted types of available sensors and by the lack of a fixed sensor location and alignment relative to the body and joint axes [17, 19, 20]. Multiple modalities complement each other and provide richer information about gait patterns, yielding better overall recognition performance than a single modality, as elaborated in the survey [3].