Augmentative and alternative communication (AAC) systems play a crucial role in supporting individuals with severe communication disabilities by providing accessible means of expression and engagement. However, many conventional AAC devices rely on manual input or basic predictive functions, which can limit communication efficiency and responsiveness. The application of machine learning (ML) to AAC offers new opportunities to enhance these systems, enabling them to provide faster, more accurate, and contextually relevant communication assistance. Advances in ML, particularly in predictive text, speech recognition, and gesture interpretation, allow AAC systems to adapt more intuitively to user needs, predicting intent based on usage patterns and multimodal data, such as voice and gestures. Current research highlights the potential of ML to address key gaps in AAC technology by creating more responsive, personalized systems that align with individual user behaviours. This study proposes a novel ML framework designed to integrate these capabilities, promising improvements in communication speed, user autonomy, and accuracy. By addressing the challenges and limitations of traditional AAC devices, this research aims to advance accessible communication solutions that empower users and improve quality of life.
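To make the predictive-text idea concrete, the sketch below shows one minimal way an AAC system could rank likely next words from a single user's past utterances using simple bigram counts over usage history. This is only an illustration of usage-pattern prediction in general; the utterance history, function names, and counting scheme are hypothetical and do not represent the framework proposed in this study.

```python
from collections import Counter, defaultdict

# Illustrative sketch only: a minimal bigram-based next-word predictor built
# from a user's prior utterances, standing in for the kind of usage-pattern
# prediction described above. The history and helper names are hypothetical,
# not part of the proposed ML framework.

def build_bigram_model(utterances):
    """Count which word tends to follow which in the user's past messages."""
    model = defaultdict(Counter)
    for utterance in utterances:
        words = utterance.lower().split()
        for prev_word, next_word in zip(words, words[1:]):
            model[prev_word][next_word] += 1
    return model

def suggest_next_words(model, current_word, k=3):
    """Return the k most frequent continuations of the current word."""
    return [word for word, _ in model[current_word.lower()].most_common(k)]

if __name__ == "__main__":
    # Hypothetical usage history for a single AAC user.
    history = [
        "I want water",
        "I want to go outside",
        "I want to rest",
        "please turn on the light",
    ]
    model = build_bigram_model(history)
    print(suggest_next_words(model, "want"))  # e.g. ['to', 'water']
```

In practice, such counts would be one small component among richer signals (speech, gesture, context), but even this simple personalization shows how suggestions can adapt to an individual user's communication patterns.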