Abstract

When people type in modern messaging apps, their phones try to predict the next word they want to enter. Word prediction itself is not a new topic: previous researchers have applied Natural Language Processing (NLP) word prediction algorithms in many studies. In this article, a project implementing the Skip-gram model and the Continuous Bag of Words (CBOW) model is created to determine which of these two mirror-image word prediction models is the better choice. By comparing efficiency metrics in the experimental results, including total training time, total effective words, training time per epoch, and effective words per epoch, the project found that the CBOW model was more efficient than the Skip-gram model, suggesting that CBOW deserves more consideration for this kind of task.
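To make the mirror-image relationship concrete, the following sketch (not part of the original study) shows how the two models generate training examples from the same sentence: CBOW builds one (context → target) example per center word, while Skip-gram builds one (center → context word) example per pair, which partly explains why CBOW tends to train faster per epoch. The function names and window size are illustrative assumptions.

```python
def cbow_pairs(tokens, window=2):
    """CBOW: predict the center word from its surrounding context window."""
    pairs = []
    for i, target in enumerate(tokens):
        # Gather up to `window` words on each side of the center word.
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        if context:
            pairs.append((tuple(context), target))
    return pairs

def skipgram_pairs(tokens, window=2):
    """Skip-gram: predict each context word from the center word."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + 1 + window)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

tokens = "the cat sat on the mat".split()
# CBOW emits one example per position; Skip-gram emits one per
# (center, context) pair, so it produces more examples per epoch.
print(len(cbow_pairs(tokens)), len(skipgram_pairs(tokens)))
```

In a full implementation each example would then be fed to a shallow neural network that learns the word embeddings; the sketch only covers the example-generation step that distinguishes the two architectures.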
