Abstract

This paper investigates two end-to-end approaches to the identification of spoken language from webcast sources. Long short-term memory (LSTM) and self-attention architectures are adopted and compared against a deep convolutional network baseline. The study focuses on the performance of spoken language identification (LID) on variable-length utterances. The experimental evaluation uses two datasets: data for five languages collected from webcasts (Webcast-5) and data for ten Chinese dialects from IFLYTEK (IFLYTEK-10). The end-to-end LID systems are trained on five kinds of acoustic features: Mel-frequency cepstral coefficients (MFCCs), shifted delta cepstral coefficients (SDC), perceptual linear prediction (PLP), log Mel-scale filter bank energies (Fbank), and spectrogram energies. The best model using a single feature set achieves an accuracy of 79.6% and a Cavg of 10.87%.
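To make the variable-length setting concrete, the following is a minimal sketch (not the authors' implementation) of an LSTM-based LID classifier in PyTorch. It handles utterances of differing frame counts with packed sequences; the 39-dimensional MFCC input, layer sizes, and the five-language output are illustrative assumptions based on the abstract.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class LstmLid(nn.Module):
    """Sketch of an LSTM language-identification model over MFCC frames."""
    def __init__(self, n_features=39, hidden=256, n_languages=5):
        super().__init__()
        # Hyperparameters here are assumptions, not values from the paper.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden, n_languages)

    def forward(self, feats, lengths):
        # feats: (batch, max_frames, n_features), zero-padded.
        # Packing uses the true frame counts so padding does not
        # influence the final hidden state.
        packed = pack_padded_sequence(feats, lengths.cpu(),
                                      batch_first=True, enforce_sorted=False)
        _, (h_n, _) = self.lstm(packed)
        # Last layer's final hidden state summarizes the whole utterance.
        return self.classifier(h_n[-1])  # logits over candidate languages

# Usage: a batch of two utterances with different frame counts.
model = LstmLid()
feats = torch.randn(2, 300, 39)    # zero-padded MFCC sequences
lengths = torch.tensor([300, 180]) # actual frames per utterance
logits = model(feats, lengths)     # shape: (2, 5)
```

Summarizing the utterance with the final hidden state is one common choice for variable-length classification; a self-attention variant would instead pool over all frame representations.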
