Abstract

This study concerns the fate of entanglement for initially separable states in a quantum neural network (QNN) model that is locally in contact with data environments. The duration of entanglement becomes extremely important when entanglement is regarded as a valuable resource. We therefore investigate how various initial states affect the creation or decay of entanglement in the presence of information reservoirs. In particular, the central spin model is examined as a quantum version of a neural network, taking inspiration from biological models. Our model consists of a central spin system with two nodes, each coupled to an independent spin bath. Numerical results clearly show that different initial states have a profound effect on the fate of the entanglement, and that the entanglement lifetime can be tuned by regulating the reservoir states. These results can be applied in realistic communication networks to improve the generation or distribution of entanglement.

Highlights

  • In recent studies in the field of artificial intelligence, machine learning and artificial neural networks (ANN) in particular have become popular

  • When classical learning rules address dynamics in a statistical information environment, they handle them in the form of probability density functions [5]

  • We have connected the quantum neural network (QNN) unit, which we treat as an open quantum system, to two reservoirs via the conventional flip-flop Hamiltonian
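The flip-flop (XY exchange) coupling named in the last highlight can be illustrated with a minimal numerical sketch. The snippet below, which assumes illustrative coupling strengths `J` and `g` and shrinks each "bath" to a single spin for brevity, builds the flip-flop Hamiltonian for two central nodes each attached to one bath spin, evolves a separable initial state, and tracks the Wootters concurrence between the two nodes:

```python
import numpy as np

# Single-spin ladder operators and identity
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                  # sigma_-
I2 = np.eye(2, dtype=complex)

def embed(mat, site, n=4):
    """Lift a single-spin operator acting on `site` into the n-spin Hilbert space."""
    ops = [I2] * n
    ops[site] = mat
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

def flipflop(a, b):
    """Flip-flop (excitation-exchange) coupling between spins a and b."""
    return embed(sp, a) @ embed(sm, b) + embed(sm, a) @ embed(sp, b)

# Sites 0 and 1 are the two central nodes; sites 2 and 3 stand in for their
# independent baths (one spin each -- a drastic, purely illustrative reduction).
J, g = 1.0, 0.5   # node-node and node-bath couplings (assumed values)
H = J * flipflop(0, 1) + g * flipflop(0, 2) + g * flipflop(1, 3)

# Separable initial state: node 0 excited, all other spins down.
psi0 = np.zeros(16, dtype=complex)
psi0[0b1000] = 1.0

def evolve(psi, t):
    """Exact unitary evolution exp(-iHt)|psi> via diagonalization of H."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi))

def nodes_rho(psi):
    """Reduced density matrix of the two nodes (bath spins traced out)."""
    rho = np.outer(psi, psi.conj()).reshape([2] * 8)
    return np.einsum('abcdefcd->abef', rho).reshape(4, 4)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    sy2 = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ sy2 @ rho.conj() @ sy2).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

for t in (0.0, 0.5, 1.0):
    print(f"t={t:.1f}  node-node concurrence = {concurrence(nodes_rho(evolve(psi0, t))):.3f}")
```

Starting from this separable state, the exchange interaction transfers the excitation between the nodes and their baths, so the node-node concurrence starts at zero and then oscillates, which is the kind of initial-state-dependent entanglement dynamics the study examines.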


Introduction

In recent studies in the field of artificial intelligence, machine learning and artificial neural networks (ANN) in particular have become popular. ANN, a field of computer science from its beginnings, provides computers with the ability to learn without explicit programming [1,2,3]. Neural networks, which are interconnected computing structures based on binary McCulloch-Pitts neurons, are inspired by biological foundations [2]. Hebb's learning rule, which rests on a biological and neurophysiological basis, aims to achieve the best learning by adjusting the weights of the relevant units [4]. When classical learning rules address dynamics in a statistical information environment, they handle them in the form of probability density functions [5]. The formulations and constraints of learning laws are based on the relationships between the global and local information environments of each processing element.
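The two classical building blocks named above can be sketched in a few lines. The example below, a minimal illustration with an assumed learning rate and threshold, implements a binary McCulloch-Pitts neuron and Hebb's rule, under which a weight grows in proportion to the joint activity of its input and the neuron's output:

```python
import numpy as np

def mcculloch_pitts(x, w, threshold=0.5):
    """Binary McCulloch-Pitts neuron: fires (1) iff the weighted input sum reaches the threshold."""
    return 1 if np.dot(w, x) >= threshold else 0

def hebb_update(w, x, y, eta=0.1):
    """Hebb's rule: strengthen each weight in proportion to input x and output y activity."""
    return w + eta * y * np.asarray(x, dtype=float)

# Repeated presentation of a pattern (with the neuron active, y = 1)
# strengthens exactly the weights of the pattern's active inputs.
w = np.zeros(3)
pattern = [1, 0, 1]
for _ in range(10):
    w = hebb_update(w, pattern, y=1)

print(w)                              # weights grew only where the pattern was active
print(mcculloch_pitts(pattern, w))    # the trained neuron now fires for its pattern
```

Note how the update is purely local, depending only on the activities on either side of each connection; this locality is the classical counterpart of the local node-environment couplings considered in the QNN model.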

