Abstract

Spoken language identification (LID) is the task of determining and classifying a natural language from given speech content and a data set. The process begins by extracting useful features from the data, a mature step for which standard LID features have been developed using MFCC, SDC, GMM and the i-vector-based framework. Nevertheless, the learning process still requires optimisation so that the knowledge embedded in the extracted features is captured comprehensively. The extreme learning machine (ELM) is an effective learning model for classification and regression that trains a single-hidden-layer neural network. However, its learning process is not fully effective (i.e. optimised) because the weights between the input and hidden layers are selected randomly. This study employs ELM as the LID learning model on top of the standard extracted features. The enhanced self-adjusting extreme learning machine (ESA–ELM), one of the ELM optimisation techniques, is taken as the benchmark and is improved by replacing its optimisation approach (EATLBO) with an alternative, particle swarm optimisation (PSO), to achieve higher performance. The improved model is named particle swarm optimisation–extreme learning machine (PSO–ELM). On the same benchmark data set of eight languages, PSO–ELM LID achieved an accuracy of 98.75%, outperforming ESA–ELM LID, which achieved 96.25%.
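To make the idea concrete, the sketch below shows a minimal ELM (random input weights, analytic output weights via the pseudoinverse) and a basic PSO loop that searches over those input weights instead of leaving them random. This is an illustrative reconstruction of the general PSO–ELM scheme, not the authors' implementation; all hyperparameters (swarm size, inertia, acceleration coefficients, hidden-layer width) are assumptions for demonstration.

```python
import numpy as np

def elm_fit(X, Y, W, b):
    """Given fixed input weights W and biases b, solve the output
    weights analytically with the Moore-Penrose pseudoinverse."""
    H = np.tanh(X @ W + b)           # hidden-layer activations
    return np.linalg.pinv(H) @ Y     # least-squares output weights

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def pso_elm(X, Y, n_hidden=10, n_particles=20, n_iter=30, seed=0):
    """Optimise the ELM input weights/biases with a basic PSO
    (illustrative settings, not the paper's)."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden   # flattened W plus b

    def fitness(p):
        W = p[:-n_hidden].reshape(X.shape[1], n_hidden)
        b = p[-n_hidden:]
        beta = elm_fit(X, Y, W, b)
        return np.mean((elm_predict(X, W, b, beta) - Y) ** 2)

    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    gbest_f = pbest_f.min()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # standard velocity update: inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        if pbest_f.min() < gbest_f:
            gbest = pbest[pbest_f.argmin()].copy()
            gbest_f = pbest_f.min()

    W = gbest[:-n_hidden].reshape(X.shape[1], n_hidden)
    b = gbest[-n_hidden:]
    return W, b, elm_fit(X, Y, W, b)
```

In a full LID pipeline, `X` would hold the extracted utterance features (e.g. MFCC/SDC-derived vectors) and `Y` one-hot language labels; the toy usage below just demonstrates that the training loop runs.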
