Abstract

Voice control is an important way of operating mobile devices; however, using it remains a challenge for dysarthric patients. Many approaches, such as automatic speech recognition (ASR) systems, are currently used to help dysarthric patients control mobile devices, but the large computational power that ASR systems require increases implementation costs. To alleviate this problem, this study proposed a convolutional neural network (CNN) operating on phonetic posteriorgram (PPG) speech features to recognize speech commands, called CNN–PPG; a CNN model using Mel-frequency cepstral coefficients (CNN–MFCC) and an ASR-based system were used for comparison. The experimental results show that the CNN–PPG system achieved 93.49% accuracy, outperforming the CNN–MFCC (65.67%) and ASR-based (89.59%) systems. Additionally, the CNN–PPG model was smaller, with only 54% as many parameters as the ASR-based system, so the proposed system could reduce implementation costs for users. These findings suggest that the CNN–PPG system could augment a communication device to help dysarthric patients control mobile devices via speech commands in the future.
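
A minimal sketch may help picture the proposed classifier. The PyTorch model below classifies a phonetic posteriorgram (a time-by-phoneme matrix of posterior probabilities, treated here as a one-channel image) into one of a fixed set of commands; the input shape, layer sizes, and command count are illustrative assumptions, not the authors' published architecture.

    # Hypothetical sketch of a CNN-PPG style command classifier.
    # The PPG dimensions (100 frames x 40 phoneme classes) and the
    # number of commands (19) are assumptions for illustration.
    import torch
    import torch.nn as nn

    class PPGCommandCNN(nn.Module):
        def __init__(self, n_phones=40, n_commands=19):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),  # tolerates variable-length utterances
            )
            self.classifier = nn.Linear(32 * 4 * 4, n_commands)

        def forward(self, ppg):  # ppg: (batch, 1, time, n_phones)
            return self.classifier(self.features(ppg).flatten(1))

    model = PPGCommandCNN()
    logits = model(torch.randn(8, 1, 100, 40))  # -> (8, 19) command scores

Because such a classifier only has to separate a small command vocabulary rather than decode open-vocabulary speech, it can be far smaller than a full ASR system, which is consistent with the 54% parameter count reported above.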

Highlights

  • Rudzicz et al. [16,17] investigated acoustic models based on GMM–HMM, conditional random fields, support vector machines (SVMs), and artificial neural networks (ANNs) [17], and the results showed that ANNs provided higher accuracy than the other models.

  • Chen et al. [39] used a convolutional neural network (CNN) with Mel-frequency cepstral coefficient (MFCC) features to predict Mandarin tones from input speech, and the results showed that this approach provided higher accuracy than classical approaches (a minimal MFCC extraction sketch follows this list).

  • Che et al. [41] applied a similar CNN–MFCC concept to a partial discharge recognition task, and the results showed that MFCC features with a CNN may be a promising event recognition method for that application as well.
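
For readers unfamiliar with the MFCC front end used in the two studies above, the snippet below extracts MFCC features with librosa; the file name and parameter values are illustrative assumptions rather than settings taken from the cited work.

    # Hypothetical MFCC extraction; "command.wav" and the parameters
    # are placeholders, not values from the cited studies.
    import librosa

    y, sr = librosa.load("command.wav", sr=16000)       # mono waveform at 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
    print(mfcc.shape)

The resulting coefficients-by-frames matrix can then be fed to a CNN as a one-channel image, which is the common thread between the tone-prediction and partial-discharge studies cited above.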


Summary

Introduction

Dysarthric speech is often associated with aging as well as with medical conditions such as cerebral palsy (CP) and amyotrophic lateral sclerosis (ALS) [1]. It is a motor speech disorder caused by muscle weakness or lack of muscle control, and it often makes speech unclear, so patients cannot communicate well with people (or machines). Assistive communication devices offer one alternative, but communication using these devices is often slow and unnatural for dysarthric patients [6], which directly affects their communication performance. To overcome these issues, many studies [7] have proposed speech command recognition (SCR) systems that help patients control devices via their voice, such as automatic speech recognition (ASR) systems [8] and acoustic pattern recognition technologies [9].
