Abstract

Speech interaction systems are in high demand for quick, hands-free interactions. Conventional speech interaction systems (SISs) are trained on the user's voice, while most modern systems learn from interaction experience over time. Because speech is a natural mode of human-computer natural interaction (HCNI) with the world, SIS design must yield a computer interface that can receive spoken information and act appropriately upon it. Despite significant advances in SISs in recent years, a large number of problems must still be solved before SISs can be applied successfully in practice and be comfortably accepted by users. Among these, devising efficient models is the primary and most important step in deploying speech recognition in hands-free applications. Meanwhile, brain–computer interfaces (BCIs) allow users to control applications through brain activity. The work presented in this paper describes an improved SIS implementation that integrates a BCI, associating the brain signals for a list of commands with each specific command as an identification criterion, so that a wheelchair can be controlled by spoken commands.
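The fusion idea at the heart of the abstract can be illustrated with a minimal sketch. The Python code below is an illustration only, not the authors' implementation: recognize-speech and classify-EEG front ends are stubbed out, and the command list, Hypothesis structure, and 0.6 confidence threshold are hypothetical placeholders standing in for the paper's speech and BCI components.

# Minimal illustrative sketch (not the paper's implementation) of fusing a
# spoken command with a BCI decision: the brain signal serves as an
# identification criterion that confirms or vetoes the recognized command.
# COMMANDS, Hypothesis, and the 0.6 threshold are hypothetical placeholders.

from dataclasses import dataclass
from typing import Optional

COMMANDS = ("forward", "backward", "left", "right", "stop")

@dataclass
class Hypothesis:
    command: str    # decoded command label, one of COMMANDS
    score: float    # decoder confidence in [0, 1]

def fuse(speech: Hypothesis, brain: Hypothesis,
         threshold: float = 0.6) -> Optional[str]:
    """Return a wheelchair command only when both modalities agree
    on the same label with sufficient confidence; otherwise None."""
    if speech.command != brain.command:
        return None                       # modalities disagree: reject
    if min(speech.score, brain.score) < threshold:
        return None                       # too uncertain: reject
    return speech.command                 # confirmed command

if __name__ == "__main__":
    # Stub outputs standing in for live speech and EEG decoding.
    spoken = Hypothesis("forward", 0.82)
    brain = Hypothesis("forward", 0.71)
    print(fuse(spoken, brain))            # prints: forward

Requiring agreement from both modalities trades some responsiveness for safety, which is a reasonable bias when the actuator is a wheelchair.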
