Many people worldwide live with severe hearing loss, yet technology is rarely used to empower diversity and inclusion for them in the education system. This study aims to provide an understanding of the needs of this community and a possible solution to the problem using current advances in artificial intelligence. In particular, we present a solution in the form of a real-time sign language detection web application. First, the structural and functional aspects of the application are described, followed by its software architecture requirements. The technological side of the application is then fully specified by detailing the significant implementations, in each programming language, that lie behind the software diagrams and architecture. The study combines up-to-date technologies with a carefully planned and developed image detection application that is scalable, portable, maintainable, and effective in empowering diversity in education. The result is delivered as a full-stack web application with detection quality ranging between 85% and 95%. It consists of a two-part backend: one part, built with Python, TensorFlow, and Jupyter Notebooks, creates and compiles the models; the other, built on Node.js, powers the front end developed with React and TensorFlow.js. This study presents a fully developed web-based application with the potential to significantly improve diversity in education.
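
To make the browser-side part of this architecture concrete, the sketch below shows a minimal real-time detection loop with TensorFlow.js, as it might run inside the React front end. The model URL, label list, output layout, and score threshold are illustrative assumptions, not the application's actual configuration.

    // Minimal sketch of browser-side sign detection with TensorFlow.js (TypeScript).
    // Model path, labels, and threshold are hypothetical placeholders.
    import * as tf from '@tensorflow/tfjs';

    const MODEL_URL = '/models/sign_detector/model.json'; // hypothetical exported graph model
    const LABELS = ['hello', 'thanks', 'yes', 'no'];       // hypothetical sign vocabulary
    const SCORE_THRESHOLD = 0.85;                          // illustrative cut-off within the reported 85%-95% range

    async function runDetection(video: HTMLVideoElement): Promise<void> {
      const model = await tf.loadGraphModel(MODEL_URL);

      const detect = async (): Promise<void> => {
        // Grab the current webcam frame and add a batch dimension.
        const frame = tf.tidy(() => tf.browser.fromPixels(video).expandDims(0));

        // Assume the exported model returns [scores, classes]; real outputs depend on the export.
        const outputs = (await model.executeAsync(frame)) as tf.Tensor[];
        const scores = await outputs[0].data();
        const classes = await outputs[1].data();

        for (let i = 0; i < scores.length; i++) {
          if (scores[i] >= SCORE_THRESHOLD) {
            console.log(`Detected "${LABELS[Math.round(classes[i])]}" (score ${scores[i].toFixed(2)})`);
          }
        }

        tf.dispose([frame, ...outputs]);
        requestAnimationFrame(detect); // loop to keep detection running in real time
      };

      requestAnimationFrame(detect);
    }

In this sketch, model training and export would happen in the Python/TensorFlow/Jupyter part of the backend, while the Node.js side serves the exported model files and the React application that calls a loop like the one above.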