Abstract

This paper addresses the problem of recognizing a native language in a mixed voice environment. G-Cocktail aids applications in identifying commands given in Gujarati, even from a mixed voice stream. G-Cocktail has two phases: in the first, it filters the voices and extracts features; in the second, it trains and classifies on the dataset. The trained model then recognizes new voice signals. The main challenge in training for a native language is the small size of the available datasets. The model uses single-word inputs along with phrase benchmark datasets from Microsoft and the Linguistic Data Consortium for Indian Languages (LDC-IL). To overcome the overfitting caused by the small dataset, the CatBoost algorithm is used and the classification model is fine-tuned. Mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively make up a mel-frequency cepstrum; they are derived from a type of cepstral representation of the audio clip (a nonlinear "spectrum-of-a-spectrum"). MFCCs work well for human voices, but noise in the recording makes them less effective; to avoid this shortcoming, the voices are filtered first and the MFCCs are computed afterwards. Only the most relevant features are retained, making the representation more robust. Alongside the MFCC features, the pitch of the voices is added, as pitch can vary with region, the speaker's mood, age, and knowledge of the language. A voice print of each whole sound file is constructed and fed as features to the classification model. A 70%/30% training/testing split is used, and the model is compared against algorithms such as K-means, Naïve Bayes, and LightGBM. On the given dataset, the results show that G-Cocktail using CatBoost outperformed the others on all parameters under the tested scenario.
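A minimal sketch of the pipeline described above, assuming librosa, SciPy, scikit-learn, and the catboost package as stand-ins for the authors' actual toolchain. The band-pass cut-offs, the pYIN pitch tracker, the mean/std summary statistics, and the CatBoost hyperparameters are illustrative assumptions, not values taken from the paper.

    import numpy as np
    import librosa
    from scipy.signal import butter, sosfilt
    from catboost import CatBoostClassifier
    from sklearn.model_selection import train_test_split

    def extract_features(path, sr=16000, n_mfcc=13):
        """Filter a clip, then build a fixed-length MFCC + pitch voice print."""
        y, sr = librosa.load(path, sr=sr)
        # Band-pass filter the speech band before computing MFCCs
        # (cut-offs are assumed; the abstract does not specify them).
        sos = butter(4, [80, 7900], btype="bandpass", fs=sr, output="sos")
        y = sosfilt(sos, y)
        # MFCCs summarised over time (mean and std per coefficient).
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        # Pitch via librosa's pYIN fundamental-frequency tracker.
        f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                fmax=librosa.note_to_hz("C7"), sr=sr)
        f0 = f0[~np.isnan(f0)]
        pitch = [f0.mean(), f0.std()] if f0.size else [0.0, 0.0]
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), pitch])

    if __name__ == "__main__":
        # Synthetic stand-in for voice prints; in practice X would stack
        # extract_features(...) over the Gujarati command clips.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2 * 13 + 2))
        y = rng.integers(0, 2, size=200)
        # 70%/30% training/testing split, as stated in the abstract.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0)
        # CatBoost with mild regularisation against over-fitting on small data.
        model = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.1,
                                   l2_leaf_reg=5, verbose=False)
        model.fit(X_tr, y_tr, eval_set=(X_te, y_te))
        print("test accuracy:", model.score(X_te, y_te))

Swapping the synthetic X for real voice prints is the only change needed to run this over the LDC-IL or Microsoft clips; CatBoost's l2_leaf_reg and a modest tree depth are the usual levers against over-fitting on a small dataset.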

Highlights

  • The pandemic in 2020 increased internet usage and, with it, the use of native languages

  • The current paper focuses on separating Gujarati voices from a mixed signal



Introduction

The pandemic in 2020 increased internet usage and, with it, the use of native languages online. An expression spoken in a natural language is a constantly shifting acoustic signal, which can be segmented into 'acoustic-phonetic parts' (APSs) demarcated by distinct variations in the time and frequency domains [1]. The signal, once demarcated into APSs, can be associated with linguistic abstractions such as allophones, phonemes, and morphophonemes. A phoneme is a sound element that differentiates one word from another within a language; allophones are variant realizations of the same phoneme that do not cause a significant difference in the expression. The correspondence between strings of one or more APSs and allophones, however, may never be one-to-one.
