Abstract

Previous autonomous systems create a high-level abstract representation of speech using unsupervised models. Because these representations are typically learned by reconstructing the input, there is no assurance that they are robust to cues unrelated to disease, and pathology diagnosis cannot usually be performed reliably from unsupervised representations. In this research, we therefore perform pathological voice recognition with deep convolutional neural networks (DCNNs). Although DCNNs have many acknowledged benefits, selecting the best structure for them can be challenging. To address this limitation, this work examines the use of the whale optimization algorithm (WOA) to automatically choose the best architecture for DCNNs. To this end, three innovations to the canonical WOA are proposed. First, an encoding scheme based on Internet Protocol addresses (IPA) is devised to simplify encoding the DCNN layers as whale vectors. Second, an enfeebled layer occupying particular dimensions of the whale vector is introduced so that variable-length DCNNs can be produced. Third, during the learning process, large datasets are split into smaller ones that are then evaluated randomly. The performance of the proposed model is assessed on pathological audio signals recorded from patients. A thorough evaluation was conducted using five measures, namely F1-score, sensitivity, specificity, accuracy, and precision, together with ROC and precision-recall curves. The proposed model correctly classifies up to 95.77% of the two classes of disordered speech signals and outperforms the second-best algorithm, VLNSGA-II, by 1.02% in accuracy.

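To make the first two ideas concrete, the following is a minimal, illustrative sketch of how an IP-address-style encoding and an enfeebled (disabled) layer could map a fixed-length whale position vector to a variable-length DCNN. The value ranges, layer types, hyperparameter interpretations, and function names are assumptions for demonstration only and are not taken from the paper.

```python
# Illustrative sketch only: ranges, layer types, and decoding rules are assumed,
# not the paper's actual IPA encoding.

from dataclasses import dataclass
from typing import List

# Each dimension is treated like one field of an IPv4 address (0-255).
# Sub-ranges of that "byte" select the layer type and its hyperparameter.
CONV_RANGE = range(0, 128)         # convolutional layer
POOL_RANGE = range(128, 192)       # pooling layer
FC_RANGE = range(192, 224)         # fully connected layer
ENFEEBLED_RANGE = range(224, 256)  # enfeebled (disabled) layer -> skipped

@dataclass
class Layer:
    kind: str
    param: int  # e.g. number of filters or units (assumed interpretation)

def decode_whale_vector(position: List[float]) -> List[Layer]:
    """Map a fixed-length whale position vector to a variable-length DCNN.

    Each dimension is clipped to [0, 255] like an IPv4 field; dimensions that
    fall in the enfeebled range are dropped, which is what makes the decoded
    architecture variable-length.
    """
    layers: List[Layer] = []
    for x in position:
        byte = int(max(0, min(255, round(x))))
        if byte in CONV_RANGE:
            layers.append(Layer("conv", param=8 * (byte % 8 + 1)))   # 8..64 filters
        elif byte in POOL_RANGE:
            layers.append(Layer("pool", param=2))                    # 2x2 pooling
        elif byte in FC_RANGE:
            layers.append(Layer("fc", param=32 * (byte % 4 + 1)))    # 32..128 units
        else:
            continue  # enfeebled layer: contributes nothing to the network
    return layers

# Example: a 6-dimensional whale decodes to a 4-layer DCNN because two
# dimensions land in the enfeebled range and are skipped.
if __name__ == "__main__":
    whale = [17.2, 240.0, 130.9, 60.5, 250.1, 200.3]
    for layer in decode_whale_vector(whale):
        print(layer)
```

Under this reading, the whale vector keeps a fixed dimensionality (as the WOA position updates require), while the enfeebled range lets the search effectively shorten or lengthen the decoded network.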