Abstract

Wearable strain sensors that detect joint/muscle strain changes have become prevalent at human–machine interfaces for full-body motion monitoring. However, most wearable devices cannot be customized to match the sensor characteristics with the specific deformation ranges of individual joints/muscles, resulting in suboptimal performance. A wearable strain sensor design that achieves user-designated working windows without sacrificing high sensitivity, accompanied by real-time data processing, is therefore highly desirable. Herein, wearable Ti3C2Tx MXene sensor modules are fabricated with in-sensor machine learning (ML) models, functioning either via wireless streaming or edge computing, for full-body motion classification and avatar reconstruction. Through topographic design of the piezoresistive nanolayers, the wearable strain sensor modules exhibit ultrahigh sensitivities within working windows that cover all joint deformation ranges. By integrating the wearable sensors with an ML chip, an edge sensor module is fabricated, enabling in-sensor reconstruction of high-precision avatar animations that mimic continuous full-body motions with an average avatar determination error of 3.5 cm, without additional computing devices.

