Abstract
Improving the efficiency of communication for deaf and hard-of-hearing people by processing sign language with artificial intelligence is an important task, both socially and technologically. One way to address this problem is the marker method, which is comparatively cheap and accessible. The method is based on registering electromyographic (EMG) muscle signals with bracelets worn on the arm. To improve the quality of recognition of gestures recorded by the marker method, we propose a modification of the method: duplication of EMG sensors combined with a few-shot machine learning approach. We experimentally study how the quality of sign language processing can be improved by duplicating EMG sensors as well as by reducing the volume of data required to train machine learning models. For the latter, we compare several few-shot learning techniques. Our experiments show that few-shot neural networks trained on 56k samples achieve better results than a random forest trained on 160k samples. Using a minimal number of sensors in combination with few-shot signal processing techniques makes it possible to organize quick and cost-effective interaction with people with hearing and speech disabilities.
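Since the abstract compares few-shot neural networks with a random-forest baseline on EMG gesture windows, the sketch below illustrates how such a comparison could be set up. It is not the authors' published code: the channel count, window length, encoder architecture, and the prototypical-network (nearest-prototype) formulation are assumptions made for illustration only.

```python
# Illustrative sketch only: EMG window shape, encoder, and baseline are assumed,
# not taken from the paper.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

N_CHANNELS = 16    # assumed: two 8-channel bracelets (duplicated sensors)
WINDOW = 200       # assumed number of EMG samples per gesture window


class EmgEncoder(nn.Module):
    """Maps a raw EMG window (channels x time) to an embedding vector."""
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, x):
        return self.net(x)


def prototype_predict(encoder, support_x, support_y, query_x):
    """Few-shot classification: assign each query window to the class whose
    prototype (mean embedding of its few support examples) is nearest."""
    with torch.no_grad():
        s_emb = encoder(support_x)
        q_emb = encoder(query_x)
    classes = torch.unique(support_y)
    protos = torch.stack([s_emb[support_y == c].mean(0) for c in classes])
    dists = torch.cdist(q_emb, protos)      # Euclidean distance to each prototype
    return classes[dists.argmin(dim=1)]


def rf_baseline(train_x, train_y, test_x):
    """Non-few-shot reference: a random forest on flattened EMG windows,
    which typically needs far more labeled samples."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(train_x.reshape(len(train_x), -1), train_y)
    return rf.predict(test_x.reshape(len(test_x), -1))
```

In this kind of setup, the few-shot classifier needs only a small labeled support set per gesture at inference time, whereas the random-forest baseline is trained on the full dataset, which is the trade-off the abstract's 56k-versus-160k comparison refers to.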