Abstract

Language identification (LID) is the task of determining and classifying the language of a recognized spoken utterance from given content and datasets. Carrying out LID typically requires processing the data to extract the features the task needs. Feature extraction is a mature step: standard LID features have long been derived using mel-frequency cepstral coefficients, shifted delta cepstra, Gaussian mixture models and i-vector-based frameworks. Nevertheless, the learning process that operates on the extracted features still needs to be improved, or rather optimised, so that all the knowledge embedded in those features can be exploited. Classification and regression analysis can benefit tremendously from the extreme learning machine (ELM), a particularly effective learning model for training a single-hidden-layer neural network. However, because the weights between the input and hidden layers are assigned at random, the ELM's learning process is not fully optimised. In this study, the ELM is employed as the learning model for LID on top of the standard extracted features. In addition, this study proposes a new optimised genetic algorithm (OGA) with three different selection criteria (i.e., roulette wheel, K-tournament and random) to select appropriate initial weights and biases for the input-to-hidden layer of the ELM, thereby minimising the classification error and improving the overall performance of the ELM for LID. Results show the excellent performance of the proposed OGA–ELM: the roulette-wheel, K-tournament and random selection criteria achieved the highest accuracies of 99.50%, 100% and 99.38%, respectively.
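To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of an ELM whose input weights and biases are chosen by a simple genetic algorithm with roulette-wheel selection (one of the three criteria named above). The toy dataset, population size, mutation scheme, and all hyperparameters here are illustrative assumptions; the paper's OGA, its K-tournament and random criteria, and its LID features are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit_predict(W, b, X_tr, y_tr, X_te):
    """Train ELM output weights by least squares; return test-set scores."""
    H_tr = np.tanh(X_tr @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)  # output weights
    return np.tanh(X_te @ W + b) @ beta

def fitness(W, b, X_tr, y_tr, X_te, y_te):
    """Classification accuracy of an ELM initialised with (W, b)."""
    pred = elm_fit_predict(W, b, X_tr, y_tr, X_te).argmax(axis=1)
    return (pred == y_te.argmax(axis=1)).mean()

def roulette_select(pop, fits, rng):
    """Roulette-wheel selection: sampling probability ~ fitness."""
    p = np.asarray(fits) / np.sum(fits)
    idx = rng.choice(len(pop), size=len(pop), p=p)
    return [pop[i] for i in idx]

# Toy two-class data standing in for LID feature vectors (illustrative).
X = rng.normal(size=(200, 4))
y = np.eye(2)[(X[:, 0] + X[:, 1] > 0).astype(int)]     # one-hot labels
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

# Population of candidate (input weights, biases) pairs for the ELM.
n_hidden = 20
pop = [(rng.normal(size=(4, n_hidden)), rng.normal(size=n_hidden))
       for _ in range(10)]

best = pop[0]
for gen in range(5):
    fits = [fitness(W, b, X_tr, y_tr, X_te, y_te) for W, b in pop]
    best = pop[int(np.argmax(fits))]
    parents = roulette_select(pop, fits, rng)
    # Mutation: small Gaussian perturbation of each selected parent.
    pop = [(W + 0.1 * rng.normal(size=W.shape),
            b + 0.1 * rng.normal(size=b.shape)) for W, b in parents]

best_acc = fitness(best[0], best[1], X_tr, y_tr, X_te, y_te)
```

The key design point the sketch illustrates: the GA searches only over the randomly initialised input-layer parameters, while the output weights are still solved in closed form by least squares, preserving the ELM's fast training.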
