Abstract

The quantum approximate optimization algorithm (QAOA) frequently becomes trapped in local minima, and parameter optimization is costly within its complex, non-convex energy landscape. To address these issues, we consider a warm-start method that exploits a defining property of transition states: each has a unique negative-curvature direction, and descending along it leads to lower local minima. Our results indicate that, aided by an enhanced pre-training structure modeled on the AlphaZero AI, the initialization of the new optimizer generalizes significantly better across a variety of test sets. We train on 2-SAT instances with clause densities between α ≈ 2.6 and α ≈ 2.89 and transfer to harder test sets: the average residual energy density under transfer learning consistently remains below 0.01, and the transfer success probability reaches 98% even on hard instances with α ≈ 3.7. Pre-training by ensemble learning also markedly improves search efficiency, requiring only simple interpolation of a few transition points to transfer to globally optimal solutions at higher sample clause densities.
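The saddle-escape mechanism named above, descending from a transition state along its unique negative-curvature direction, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the generic `energy(params)` callable stands in for the QAOA expectation value, the finite-difference derivatives and the toy two-dimensional landscape are assumptions made for self-containedness, and all function names are hypothetical.

```python
import numpy as np

def grad_fd(f, x, eps=1e-5):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (f(xp) - f(xm)) / (2.0 * eps)
    return g

def hessian_fd(f, x, eps=1e-4):
    """Central finite-difference Hessian of a scalar function f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += eps; xpp[j] += eps
            xpm = x.copy(); xpm[i] += eps; xpm[j] -= eps
            xmp = x.copy(); xmp[i] -= eps; xmp[j] += eps
            xmm = x.copy(); xmm[i] -= eps; xmm[j] -= eps
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * eps**2)
    return H

def descend_from_transition_state(energy, ts_params, step=0.1, lr=0.05, n_iters=1000):
    """Warm start from an index-1 saddle (a "transition state"): displace
    along its unique negative-curvature direction on both sides, run
    gradient descent from each displaced point, and keep the lower of
    the two adjacent minima."""
    H = hessian_fd(energy, ts_params)
    evals, evecs = np.linalg.eigh(H)
    v = evecs[:, np.argmin(evals)]  # eigenvector of the most negative eigenvalue
    minima = []
    for sign in (+1.0, -1.0):
        x = ts_params + sign * step * v
        for _ in range(n_iters):
            x = x - lr * grad_fd(energy, x)
        minima.append(x)
    return min(minima, key=energy)

if __name__ == "__main__":
    # Toy stand-in landscape: saddle at the origin with a single
    # negative-curvature direction (x); minima at (+1, 0) and (-1, 0).
    energy = lambda p: (p[0] ** 2 - 1.0) ** 2 + p[1] ** 2
    best = descend_from_transition_state(energy, np.array([0.0, 0.0]))
    print("minimum found:", best, "energy:", energy(best))
```

On an actual QAOA landscape the gradient and Hessian would come from circuit evaluations (e.g., via parameter-shift rules) rather than finite differences, but the saddle-escape logic is the same.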
