Communication is difficult for people who are unable to speak, and the condition is not always permanent: muteness and deafness can arise from several different causes, such as physiological injury, illness, medication side effects, psychological trauma, developmental disorders, or neurological disorders. People affected by these conditions typically use sign language to communicate, but not everyone is acquainted with it. This project focuses on developing a solution that ordinary people can use to understand people with mutism. Hand sign recognition systems have evolved significantly with advances in machine learning and deep learning techniques. These systems are designed to recognize gestures or signs, making them useful in applications such as sign language interpretation, human-computer interaction, and virtual reality. This paper gives an overview of the current state of hand sign recognition systems, highlighting the techniques used, the challenges involved, and possible future directions. The focus of the paper is on deep learning-based methods and their impact on the accuracy and usability of hand sign recognition in real-time applications [1].