Abstract
This work presents the design and validation of a voice assistant for commanding robotic tasks in a residential environment, intended to support people who require isolation or assistance due to motor impairments. A database of 3600 audio recordings spanning 8 categories of words such as "paper", "glass", or "robot", which combine into commands like "carry paper" or "bring medicine", is preprocessed into a matrix of Mel-frequency coefficients and their derivatives; these serve as inputs to a convolutional neural network that achieves an accuracy of 96.9% in discriminating the categories. The command recognition tests involve recognizing groups of three words beginning with "robot", for example "robot bring glass", and allow 8 different actions to be identified per voice command, with an accuracy of 88.75%.
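The abstract outlines a pipeline of MFCC features plus their derivatives feeding a small CNN word classifier. The sketch below is a minimal illustration of that kind of pipeline, not the authors' implementation; the sample rate, number of coefficients, clip length, network architecture, and file name are assumptions for the example only.

```python
# Minimal sketch (assumed parameters, not the paper's exact pipeline):
# extract MFCCs and their first- and second-order derivatives from a
# fixed-length command-word clip and classify it with a small CNN
# over 8 word categories.
import numpy as np
import librosa
import tensorflow as tf

def mfcc_features(path, sr=16000, n_mfcc=13, duration=1.0):
    """Return a (n_mfcc, frames, 3) array: MFCCs, deltas, delta-deltas."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = librosa.util.fix_length(y, size=int(sr * duration))  # pad/trim to fixed length
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc)           # first derivative
    d2 = librosa.feature.delta(mfcc, order=2)  # second derivative
    return np.stack([mfcc, d1, d2], axis=-1)   # stack as 3 "channels"

def build_cnn(input_shape, n_classes=8):
    """Small CNN over the MFCC feature map; softmax over the word categories."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

if __name__ == "__main__":
    x = mfcc_features("robot.wav")  # hypothetical audio file
    model = build_cnn(x.shape)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
```

A full command such as "robot bring glass" would then be assembled by running the classifier on three consecutive word segments, as described in the abstract.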