Abstract

Deaf communication has often been treated as a set of manual movements, while the importance of facial expression in this population's communicative manifestations has largely been ignored. For this reason, a group of researchers in phonoaudiology (speech-language pathology), physiotherapy, and engineering combined their knowledge to perform image processing and classification of the facial expressions used in Colombian Sign Language (LSC). The objective was to establish the processing of facial expression images as a complement to manual movements for the interpretation of Colombian Sign Language. This qualitative, descriptive study with a non-experimental design was carried out in four phases. In the first phase, data were collected through recordings of deaf people acting as linguistic models while producing the facial expressions corresponding to the vocabulary of the clinical setting. In the second phase, the images were processed to identify the characteristic patterns of each sign. In the third phase, two Deep Learning techniques were used to classify the captured gestures. In the fourth phase, the accuracy of the techniques applied to the images was validated. For the classification process, six facial expressions corresponding to the words pain, inflammation, fracture, irritable bowel, dizziness, and diabetes were analyzed. Two Deep Learning techniques were validated: the Single Shot MultiBox Detector (SSD) achieved a precision of 94.2%, compared with the Convolutional Neural Network (CNN) technique, which reached an accuracy of 89.05%. The development of technologies of this nature makes it possible to analyze facial expression in deaf communication as a distinctive feature of interaction with others.
Artificial vision algorithms based on Deep Learning show a high level of efficiency in classifying facial expressions, an important factor in building tools that facilitate communication between deaf and hearing people.
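The abstract does not include implementation details, but the CNN pipeline it describes (image in, one of six expression labels out) can be illustrated with a minimal NumPy forward pass: one convolution, a ReLU, and a dense softmax layer over the six vocabulary classes. This is a hedged sketch under assumed shapes and random weights, not the authors' model; the label names are taken from the abstract, everything else (image size, kernel size, layer widths) is illustrative.

```python
import numpy as np

# Six expression classes named in the abstract; all shapes below are assumptions.
LABELS = ["pain", "inflammation", "fracture",
          "irritable_bowel", "dizziness", "diabetes"]

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    """Numerically stable softmax over the class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, kernel, weights, bias):
    """Conv -> ReLU -> flatten -> dense -> softmax, the basic CNN pattern."""
    feat = np.maximum(conv2d(image, kernel), 0.0)      # ReLU activation
    logits = feat.ravel() @ weights + bias             # dense layer
    return softmax(logits)

rng = np.random.default_rng(0)
image = rng.random((16, 16))                           # stand-in for a face crop
kernel = rng.standard_normal((3, 3))                   # one learned filter
weights = rng.standard_normal((14 * 14, len(LABELS))) * 0.01
bias = np.zeros(len(LABELS))

probs = classify(image, kernel, weights, bias)
print(LABELS[int(np.argmax(probs))], float(probs.max()))
```

A trained system would learn `kernel`, `weights`, and `bias` from the recorded expression data rather than drawing them at random; the forward structure, however, is the same.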
