In this study, we built an automatic sign language translation system that enables hearing- and speech-impaired persons to communicate with hearing people. According to the Statistics Department of the Taiwan Ministry of Health and Welfare, there are 119,682 hearing-impaired persons and 14,831 persons with voice or language dysfunction in Taiwan; together they account for 11.7% of the population with physical and mental disabilities. However, only 488 people hold a qualified sign language interpretation skill certificate, which underscores the need for automatic sign language translation systems. The system collects 11 signals from each hand separately, the curvature of the five fingers plus 3-axis gyroscope and 3-axis accelerometer readings, for a total of 22 signals. The signals are acquired by two sensor types, flex sensors and a GY-521 six-axis module, through an Arduino MEGA 2560 single-board computer, and are then uploaded to a server via an ESP-01S Wi-Fi module. When the server receives the 22 signals, a PHP program converts them into an RGB picture, which is compared against a model trained with TensorFlow; the recognition result is stored in a database. A mobile app then retrieves the stored result, displays it on the screen of the mobile device, and reads it aloud. The training set comprises 25 sign language gestures with 100 training gesture pictures each, and the sign language recognition model is a Convolutional Neural Network (CNN). The trained model was further evaluated with 10 people who were not represented in the training database. So far, the recognition rate is about 84.4%, and the system response time is about 2.243 seconds.
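The abstract does not specify how the 22 signals are mapped to an RGB picture. The Python sketch below shows one plausible encoding, assuming each signal is min-max scaled to 0-255 and drawn as a vertical bar; the calibration ranges (a 10-bit Arduino ADC for the flex sensors, 16-bit raw registers for the GY-521), the image size, and the bar layout are illustrative assumptions, not the authors' method.

```python
import numpy as np
from PIL import Image

# Assumed raw ranges per signal group; the abstract does not publish
# the actual calibration values used by the PHP conversion program.
FLEX_RANGE = (0.0, 1023.0)       # Arduino 10-bit ADC
IMU_RANGE = (-32768.0, 32767.0)  # GY-521 raw gyro/accelerometer

def to_rgb_picture(signals, width=22, height=22):
    """Encode 22 sensor readings as a small RGB image.

    `signals` is ordered [5 flex, 3 gyro, 3 accel] for the left hand,
    then the same 11 for the right hand. Each signal is min-max scaled
    to 0-255 and drawn as one vertical bar.
    """
    scaled = np.empty(22, dtype=np.uint8)
    for i, value in enumerate(signals):
        lo, hi = FLEX_RANGE if i % 11 < 5 else IMU_RANGE
        scaled[i] = np.clip(255.0 * (value - lo) / (hi - lo), 0, 255)

    pixels = np.zeros((height, width, 3), dtype=np.uint8)
    for col, level in enumerate(scaled):
        # Grey bar whose brightness tracks the signal; a real encoding
        # could instead map signal groups to separate R/G/B channels.
        pixels[:, col, :] = level
    return Image.fromarray(pixels, mode="RGB")

# Example: 22 mid-range readings produce a uniform mid-grey picture.
demo = [512] * 5 + [0] * 6 + [512] * 5 + [0] * 6
to_rgb_picture(demo).save("gesture.png")
```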
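Likewise, the abstract names a CNN trained with TensorFlow on 25 gestures but does not report the architecture. A minimal Keras sketch of a 25-class classifier of that general shape might look as follows; the layer configuration and the input shape (matching the illustrative encoder above) are assumptions.

```python
import tensorflow as tf

NUM_CLASSES = 25           # 25 sign language gestures
INPUT_SHAPE = (22, 22, 3)  # assumed size of the encoded RGB picture

# Small CNN classifier; the study's actual layers are not reported.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=INPUT_SHAPE),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would read the 25 x 100 gesture pictures, e.g. with
# tf.keras.utils.image_dataset_from_directory; the dataset itself
# is not publicly specified in the abstract.
```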