Abstract

Implementing automatic sign language translation software is an important part of the social inclusion of people with hearing impairment. Social inclusion of people with hearing disabilities is an acute problem that must be addressed in the context of advancing information technologies and of legislative initiatives that guarantee the rights of people with disabilities and their equal opportunities. This substantiates the relevance of research on assistive technologies, specifically software tools, as part of the process of including people with severe hearing impairment in society. The subject of the research is methods of automated sign language translation based on intelligent technologies. The purpose of the work is to develop and study sign language automation methods that improve the quality of life of people with hearing impairments, in accordance with the Sustainable Development Goals of Ukraine (the "Reduced Inequality" goal). The main tasks of the research are the development and testing of methods for converting sign language into text and text into sign language, as well as for automating translation from one sign language to another using modern intelligent technologies. Neural network modeling and 3D animation methods were used to solve these problems. The following results were obtained: the main problems and tasks of social inclusion for people with hearing impairments were identified; a comparative analysis of modern methods and software platforms for automatic sign language translation was carried out; and a system was proposed and investigated that combines the SL-to-Text method, the Text-to-SL method using 3D animation to generate sign language concepts, a method for generating a 3D-animated gesture from video recordings, and a method implementing the Sign Language1-to-Sign Language2 (SL1-to-SL2) technology. For gesture recognition, a convolutional neural network model is used, trained on imported and system-generated datasets of video gestures. The trained model shows high recognition accuracy (98.52%). The 3D model used to display gestures on screen was created and processed in the Unity 3D environment. The project structure, including the executable and auxiliary files used to build the 3D animation that generates sign language concepts, consists of: event handler files; files describing display results, which carry information about the positions of the tracked body points; and files that store the characteristics of the materials assigned to particular body-mapping points. Conclusions: the proposed methods of automated translation have practical significance, which is confirmed by the demo versions of the software applications "Sign Language to Text" and "Text to Sign Language". Promising directions for further research are the improvement of SL1-to-SL2 methods, the creation of open datasets of video gestures, and engaging scientists and developers to populate dictionaries with concepts of various sign languages.
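The abstract does not specify the network architecture, so the following is only a minimal illustrative sketch of a convolutional classifier for short video-gesture clips, written in Keras; the clip shape, layer sizes, and NUM_CLASSES are hypothetical placeholders, not the authors' model.

```python
# Sketch of a gesture-classification CNN over fixed-size frame
# sequences. All shapes and layer sizes are hypothetical; the
# paper's actual architecture is not given in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 50                   # hypothetical number of gesture concepts
FRAMES, H, W, C = 16, 64, 64, 3    # hypothetical clip shape (frames, height, width, channels)

model = models.Sequential([
    layers.Input(shape=(FRAMES, H, W, C)),
    # 3D convolutions capture both hand shape within a frame and
    # motion across frames of the gesture clip.
    layers.Conv3D(32, kernel_size=(3, 3, 3), activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),
    layers.Conv3D(64, kernel_size=(3, 3, 3), activation="relu"),
    layers.MaxPooling3D(pool_size=(2, 2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per gesture class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Likewise, the Text-to-SL step can be pictured as a dictionary lookup that maps normalized words to 3D-animation clip identifiers, which a Unity scene would then play in order. GESTURE_DICT, the clip names, and the fingerspelling fallback below are hypothetical illustrations of this lookup, not the project's actual data.

```python
# Sketch of the Text-to-SL lookup: words map to animation clip IDs;
# unknown words fall back to letter-by-letter fingerspelling.
GESTURE_DICT = {
    "hello": "clip_hello",
    "thank": "clip_thank",
    "you": "clip_you",
}

def text_to_gesture_sequence(text: str) -> list[str]:
    """Convert input text into an ordered list of gesture clip IDs."""
    clips = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in GESTURE_DICT:
            clips.append(GESTURE_DICT[word])
        else:
            # Fallback: fingerspell unknown words one letter at a time.
            clips.extend(f"letter_{ch}" for ch in word if ch.isalpha())
    return clips

print(text_to_gesture_sequence("Hello, thank you!"))
# ['clip_hello', 'clip_thank', 'clip_you']
```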
