Abstract

Recent progress in the design and optimization of neural-network quantum states (NQSs) has made them an effective method to investigate ground-state properties of quantum many-body systems. In contrast to the standard approach of training a separate NQS from scratch at every point of the phase diagram, we demonstrate that the optimization of an NQS at a highly expressive point of the phase diagram (i.e., close to a phase transition) yields features that can be reused to accurately describe a wide region across the transition. We demonstrate the feasibility of our approach on different systems in one and two dimensions by initially pretraining an NQS at a given point of the phase diagram, followed by fine-tuning only the output layer for all other points. Notably, the computational cost of the fine-tuning step is very low compared to the pretraining stage. We argue that the reduced cost of this paradigm has significant potential to advance the exploration of strongly correlated systems using NQSs, mirroring the success of fine-tuning in machine learning and natural language processing.

Published by the American Physical Society 2024
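The pretrain-then-fine-tune workflow described above can be sketched schematically. The snippet below is a minimal illustration, not the paper's actual architecture or training procedure: a tiny feed-forward network stands in for the NQS, its hidden weights are treated as frozen "pretrained" features, and only the output-layer weights are updated at a new point. All names (`W_hidden`, `w_out`) and the toy least-squares objective (used here in place of variational energy minimization) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an NQS: spin configurations -> hidden features -> amplitude.
n_spins, n_hidden, n_samples = 8, 16, 200

# Hypothetical "pretrained" hidden weights (in the paper, obtained by
# optimizing near a phase transition); here just fixed random values.
W_hidden = rng.normal(size=(n_hidden, n_spins)) / np.sqrt(n_spins)

def features(sigma):
    """Frozen feature map: hidden activations of spin configurations."""
    return np.tanh(sigma @ W_hidden.T)

# Spin configurations (+/-1) and a toy regression target standing in for
# the optimization objective at a different point of the phase diagram.
sigma = rng.choice([-1.0, 1.0], size=(n_samples, n_spins))
target = np.sin(sigma.sum(axis=1) / n_spins)

phi = features(sigma)          # computed once; hidden layer stays frozen
w_out = np.zeros(n_hidden)     # only these weights are fine-tuned

def loss(w):
    return np.mean((phi @ w - target) ** 2)

loss_before = loss(w_out)
for _ in range(500):           # plain gradient descent on the output layer
    grad = 2.0 * phi.T @ (phi @ w_out - target) / n_samples
    w_out -= 0.1 * grad
loss_after = loss(w_out)
```

Because the frozen features are computed once, each fine-tuning step touches only the small output-layer weight vector, which is why this stage is cheap compared to pretraining the full network.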

