Abstract

In the era of Large Language Models, current Natural Language Processing (NLP) methods still leave room for improvement in verifiability and consistency. Classical NLP approaches are also computationally expensive, with high demands on power consumption, computing resources, and storage. A more computationally efficient alternative is categorical quantum mechanics, which combines grammatical structure with the meanings of individual words to derive the meaning of a sentence. Because both quantum theory and natural language use vector spaces to describe states, such representations map naturally onto quantum hardware, and QNLP models can achieve up to a quadratic speedup over classical direct-calculation methods. In recent years, there has been significant progress in using quantum features such as superposition and entanglement to represent linguistic meaning on quantum hardware. Earlier work has already demonstrated QNLP’s potential quantum advantage: speeding up search, improving accuracy on classification tasks, and providing an exponentially large quantum state space in which complex linguistic structures can be embedded efficiently. In this work, a QNLP model is used to determine whether two sentences relate to the same topic. Compared with a classical tensor-network-based model, our QNLP model improves training accuracy by up to 45% and validation accuracy by up to 35%. The convergence of the QNLP model is also studied while varying, first, the problem size; second, the parametrized quantum circuits used for training the model; and last, the noise model of the backend quantum simulator. The experimental results show that strongly entangled ansatz designs yield the fastest model convergence.
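To illustrate what a parametrized quantum circuit with a strongly entangling ansatz looks like in practice, here is a minimal sketch using PennyLane. This is an assumption for illustration only: the abstract does not specify the authors’ toolchain, feature encoding, qubit count, or circuit depth, so the AngleEmbedding feature map and all sizes below are hypothetical placeholders, not the paper’s actual pipeline.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4   # hypothetical problem size: one qubit per encoded sentence-pair feature
n_layers = 2   # hypothetical ansatz depth

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(weights, features):
    # Encode classically pre-processed sentence-pair features as rotation angles.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Strongly entangling ansatz: layers of single-qubit rotations
    # followed by a ring of CNOTs coupling all qubits.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Pauli-Z expectation on a readout qubit gives a score in [-1, 1]
    # that a training loop could map to "same topic" / "different topic".
    return qml.expval(qml.PauliZ(0))

shape = qml.StronglyEntanglingLayers.shape(n_layers, n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape)
features = np.random.uniform(0, np.pi, size=n_qubits)
print(classifier(weights, features))
```

The ring of two-qubit gates in each layer is what makes such an ansatz “strongly entangling”: every qubit is coupled to its neighbors each layer, which is consistent with the abstract’s observation that more entangling designs converged fastest.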
