Abstract

Distributing entanglement over long distances is one of the central tasks in quantum networks. An important problem, especially for near-term quantum networks, is to develop optimal entanglement distribution protocols that take into account the limitations of current and near-term hardware, such as quantum memories with limited coherence time. We address this problem by initiating the study of quantum network protocols for entanglement distribution using the theory of decision processes, such that optimal protocols (referred to as policies in the context of decision processes) can be found using dynamic programming or reinforcement learning algorithms. As a first step, in this work we focus exclusively on the elementary link level. We start by defining a quantum decision process for elementary links, along with figures of merit for evaluating policies. We then provide two algorithms for determining policies, one of which we prove to be optimal (with respect to fidelity and success probability) among all policies. Then we show that the previously-studied memory-cutoff protocol can be phrased as a policy within our decision process framework, allowing us to obtain several new fundamental results about it. The conceptual developments and results of this work pave the way for the systematic study of the fundamental limitations of near-term quantum networks, and the requirements for physically realizing them.

Highlights

  • Once the heralding procedure succeeds, the nodes store their quantum systems in their local quantum memories

  • We rearrange the sum with respect to the set {h^t ∈ {0, 1}^(2t−1) : X_e^(t)(h^t) = 1} so that the sum runs over the possible values of the memory time m, which in general depends on the policy π

  • By observing that v_t^π depends only on the policy from time t onwards, i.e., on π^(t), the maximization over all policies can be carried out one time step at a time via backward recursion
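The backward recursion in the last highlight can be illustrated with a toy finite-horizon decision process for a single elementary link. Everything below, the exponential fidelity-decay model, the success probability, the two actions (keep the link or discard and retry), is an illustrative assumption for the sketch, not the paper's exact model.

```python
import math

P_SUCC = 0.3     # probability a link-generation attempt succeeds (assumed)
DECAY = 0.1      # memory decay rate per time step (assumed)
T = 10           # finite horizon

def fidelity(age):
    """Toy fidelity of a link that has been held in memory for `age` steps."""
    return 0.5 + 0.5 * math.exp(-DECAY * age)

def backward_induction():
    """Dynamic programming over states: age >= 0 (link held) or -1 (no link).

    V[t][age] is the best expected end-of-horizon figure of merit:
    the fidelity if a link is present at time T, and zero otherwise.
    """
    ages = list(range(-1, T + 1))
    V = {T: {a: (fidelity(a) if a >= 0 else 0.0) for a in ages}}
    policy = {}
    for t in range(T - 1, -1, -1):
        V[t], policy[t] = {}, {}
        for a in ages:
            # Value of discarding (or having no link) and retrying generation:
            retry = P_SUCC * V[t + 1][0] + (1 - P_SUCC) * V[t + 1][-1]
            if a < 0:
                V[t][a], policy[t][a] = retry, "retry"
                continue
            keep = V[t + 1][min(a + 1, T)]   # hold the link one more step
            if keep >= retry:
                V[t][a], policy[t][a] = keep, "keep"
            else:
                V[t][a], policy[t][a] = retry, "retry"
    return V, policy

V, policy = backward_induction()
print(V[0][-1])        # value of starting with no link
print(policy[9][0])    # best action for a fresh link one step before the end
```

This is standard finite-horizon backward induction (the Bellman recursion); the paper's policies are instead optimized within its own elementary-link decision-process model, with fidelity and success probability as the figures of merit.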


Summary

Introduction

The quantum internet [1–5] is one of the frontiers of quantum information science. It has the potential to revolutionize the way we communicate, and it will allow for tasks that are not possible using the current, classical internet alone, such as quantum teleportation [6–8], quantum key distribution [9–12], quantum clock synchronization [13–16], distributed quantum computation [17], and distributed quantum metrology and sensing [18–23]. Several software tools for simulating quantum networks have been released in order to probe these questions [44–50]. Beyond simulation, it is of interest to develop a formal and systematic theoretical framework for entanglement distribution protocols in near-term quantum networks that allows us to address these questions in full generality.

In addition to being natural, another advantage of the approach taken in this work is that optimal protocols can be discovered using reinforcement learning algorithms. This is because decision processes form the theoretical foundation for reinforcement learning [52] and artificial intelligence [53]. (See [54] for related work on machine learning for quantum communication.) A further advantage of our approach is that, even though reinforcement learning techniques cannot always be applied efficiently to large-scale problems, decision processes provide a systematic framework for combining optimal small-scale protocols in order to create large-scale protocols; see [55] for similar ideas. This work represents the starting point towards this ultimate goal.
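To make the connection to reinforcement learning concrete, here is a minimal tabular Q-learning sketch on a toy version of the elementary-link problem. The environment (exponential fidelity decay, a per-step reward equal to the current link fidelity, the success probability, and the discount factor) is entirely assumed for illustration; it is not the paper's model, only an instance of the kind of decision process on which such algorithms operate.

```python
import math
import random

P_SUCC, GAMMA_DECAY, DISCOUNT = 0.5, 0.3, 0.9
MAX_AGE = 10                       # ages above this are lumped together

def fidelity(age):
    """Toy fidelity of a link held in memory for `age` steps."""
    return 0.5 + 0.5 * math.exp(-GAMMA_DECAY * age)

def step(age, action, rng):
    """Environment transition: returns (next_age, reward). age == -1: no link."""
    if age < 0 or action == "retry":
        new = 0 if rng.random() < P_SUCC else -1
        return new, 0.0            # no usable link while regenerating
    return min(age + 1, MAX_AGE), fidelity(age)

def q_learning(episodes=10_000, horizon=30, alpha=0.1, eps=0.2, seed=1):
    rng = random.Random(seed)
    Q = {(a, act): 0.0
         for a in range(-1, MAX_AGE + 1) for act in ("keep", "retry")}
    for _ in range(episodes):
        age = -1
        for _ in range(horizon):
            # Epsilon-greedy action selection:
            if rng.random() < eps:
                act = rng.choice(("keep", "retry"))
            else:
                act = max(("keep", "retry"), key=lambda x: Q[(age, x)])
            nxt, r = step(age, act, rng)
            best_next = max(Q[(nxt, "keep")], Q[(nxt, "retry")])
            Q[(age, act)] += alpha * (r + DISCOUNT * best_next - Q[(age, act)])
            age = nxt
    return Q

Q = q_learning()
# Under slow decay, a fresh link should be worth keeping:
print(Q[(0, "keep")] > Q[(0, "retry")])
```

The learned policy here is read off greedily from the Q-table; for a problem this small, dynamic programming would of course find the optimal policy directly, which is the route the paper's exact results take.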

The entanglement distribution task
Quantum decision process for elementary links
Quantum state of an elementary link
Figures of merit
Examples of policies
Policy optimization
Backward recursion
Forward recursion
The memory-cutoff policy
Expected quantum state
Short-term behavior
Expected fidelity
Summary and outlook
Other figures of merit
Waiting time
Success rate
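As a preview of the memory-cutoff policy listed in the outline: once an elementary link is heralded, it is held in memory for at most a cutoff number of time steps and then discarded and regenerated. A quantity such as the long-run fraction of time the link is active can be estimated by simulation. The parameters and bookkeeping conventions below (e.g. that the heralding step itself does not count as active) are assumptions for this sketch, not the paper's definitions.

```python
import random

def active_fraction(p, t_star, steps, seed=0):
    """Monte Carlo estimate of the fraction of time steps with an active link.

    p      : success probability of each link-generation attempt (assumed)
    t_star : memory cutoff, in time steps (assumed)
    """
    rng = random.Random(seed)
    age = -1            # -1: no link; otherwise steps the link has been held
    active = 0
    for _ in range(steps):
        if age < 0:
            if rng.random() < p:
                age = 0          # heralded success: link established
        else:
            active += 1
            age += 1
            if age >= t_star:
                age = -1         # cutoff reached: discard and regenerate
    return active / steps

# Under this bookkeeping, renewal-reward reasoning gives the long-run
# active fraction t_star / (1/p + t_star); the simulation should agree.
frac = active_fraction(p=0.5, t_star=4, steps=200_000)
print(frac)
```

Here the simulation serves only as a sanity check on the analytic cycle argument; the paper derives exact expressions for quantities like the expected quantum state, waiting time, and success rate of this policy within its decision-process framework.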
