Abstract

With their constantly increasing peak performance and memory capacity, modern supercomputers offer new perspectives for numerical studies of open many-body quantum systems. These systems are often modeled by Markovian quantum master equations describing the evolution of the system density operators. In this paper, we address master equations of the Lindblad form, which are a popular theoretical tool in quantum optics, cavity quantum electrodynamics, and optomechanics. By using the generalized Gell–Mann matrices as a basis, any Lindblad equation can be transformed into a system of ordinary differential equations (ODEs) with real coefficients. Recently, we presented an implementation of this transformation whose computational complexity scales differently for dense and sparse Lindbladians. However, infeasible memory costs remain a serious obstacle on the way to large models. Here, we present a parallel cluster-based implementation of the algorithm and demonstrate that it allows us to integrate large sparse and dense random Lindbladian models by using 25 nodes with 64 GB of RAM per node.
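
As a concrete illustration of the basis expansion mentioned in the abstract, the sketch below constructs the generalized Gell–Mann matrices for an N-dimensional Hilbert space and computes the real expansion coefficients of a density matrix. This is a minimal NumPy example under the normalization Tr(F_i F_j) = 2 δ_ij; the helper names gell_mann_basis and expand are placeholders, not taken from the paper's HPC implementation.

```python
import numpy as np

def gell_mann_basis(N):
    """Generalized Gell-Mann matrices for dimension N: the N**2 - 1 traceless
    Hermitian generators of SU(N), normalized so that Tr(F_i F_j) = 2 delta_ij."""
    basis = []
    # Symmetric off-diagonal generators |j><k| + |k><j|, j < k
    for j in range(N):
        for k in range(j + 1, N):
            F = np.zeros((N, N), dtype=complex)
            F[j, k] = F[k, j] = 1.0
            basis.append(F)
    # Antisymmetric off-diagonal generators -i(|j><k| - |k><j|), j < k
    for j in range(N):
        for k in range(j + 1, N):
            F = np.zeros((N, N), dtype=complex)
            F[j, k] = -1.0j
            F[k, j] = 1.0j
            basis.append(F)
    # Diagonal generators sqrt(2/(l(l+1))) * (sum_{m<l} |m><m| - l |l><l|)
    for l in range(1, N):
        F = np.zeros((N, N), dtype=complex)
        F[:l, :l] = np.eye(l)
        F[l, l] = -l
        basis.append(np.sqrt(2.0 / (l * (l + 1))) * F)
    return basis

def expand(rho, basis):
    """Real expansion coefficients v_j = Tr(F_j rho) of a density matrix rho."""
    return np.array([np.real(np.trace(F @ rho)) for F in basis])
```

For N = 2 this reduces to the three Pauli matrices, and in general it yields the N² − 1 real coefficients that replace the complex density matrix in the ODE formulation.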

Highlights

  • High-performance computation technologies are becoming more and more important for the modeling of complex quantum systems, both as a tool of theoretical research [1,2] and a means to explore possible technological applications [3,4]

  • We present a parallel version of the algorithm for modeling the evolution of open quantum systems described by a master equation of the Gorini–Kossakowski–Sudarshan–Lindblad (GKSL) type

  • The algorithm first transforms the equation into a system of real-valued ordinary differential equations (ODEs) and then integrates the obtained ODE system forward in time (see the sketch after this list)
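
The following sketch illustrates both steps named above: projecting the GKSL generator onto the Gell–Mann basis (reusing the hypothetical gell_mann_basis and expand helpers from the earlier sketch) to obtain a real linear ODE system dV/dt = M V + b, and integrating that system forward in time with scipy.integrate.solve_ivp. It is a small-scale dense NumPy reference, not the parallel cluster algorithm described in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lindblad_rhs(rho, H, Ls):
    """GKSL generator acting on an operator rho (hbar = 1)."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

def real_ode_system(H, Ls, basis):
    """Project the GKSL generator onto the Gell-Mann basis.

    With rho = I/N + (1/2) * sum_j v_j F_j and v_j = Tr(F_j rho), the master
    equation becomes the real linear system dv/dt = M v + b."""
    N = H.shape[0]
    K = len(basis)                      # K = N**2 - 1
    M = np.zeros((K, K))
    b = np.zeros(K)
    for j, Fj in enumerate(basis):
        image = lindblad_rhs(0.5 * Fj, H, Ls)   # action on the j-th basis direction
        for i, Fi in enumerate(basis):
            M[i, j] = np.real(np.trace(Fi @ image))
    image0 = lindblad_rhs(np.eye(N, dtype=complex) / N, H, Ls)  # action on the traceful part
    for i, Fi in enumerate(basis):
        b[i] = np.real(np.trace(Fi @ image0))
    return M, b

def integrate(M, b, v0, t_span, t_eval=None):
    """Integrate dv/dt = M v + b forward in time; returns times and coefficient vectors."""
    sol = solve_ivp(lambda t, v: M @ v + b, t_span, v0, t_eval=t_eval)
    return sol.t, sol.y
```

Precomputing M and b once reduces every subsequent time step to real matrix–vector products; a trajectory then starts from v0 = expand(rho0, basis).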

Introduction

High-performance computation technologies are becoming more and more important for the modeling of complex quantum systems, both as a tool of theoretical research [1,2] and a means to explore possible technological applications [3,4]. Numerical simulations of open systems require more memory and longer computation times than coherent models of the same size. This is a strong motivation for developing new algorithms and implementations that can exploit the capabilities of modern supercomputers. In this respect, it is necessary to go to large model dimensions N in order to capture universal asymptotic properties, including scaling relations. After the master equation is rewritten in the Gell–Mann basis, the evolution is carried by a real coefficient vector V(t), which can be converted back into the density operator ρ(t) at any time.
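
A minimal sketch of this back-conversion, assuming the normalization Tr(F_i F_j) = 2 δ_ij used in the sketches above; reconstruct_rho is a hypothetical helper name, not taken from the paper.

```python
import numpy as np

def reconstruct_rho(v, basis):
    """Rebuild the density operator from the real coefficient vector V(t):
    rho = I/N + (1/2) * sum_j v_j F_j, assuming Tr(F_i F_j) = 2 delta_ij."""
    N = basis[0].shape[0]
    rho = np.eye(N, dtype=complex) / N
    for vj, Fj in zip(v, basis):
        rho += 0.5 * vj * Fj
    return rho
```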

Models
Algorithm
Initialization
Finalization
Computation of Expansion Coefficients h_j and l_j for H and L
Algorithm Performance
Data Preparation Step
The ODE Integration Step
Discussion