Abstract
Unraveling the pairing mechanism of a superconductor from limited, indirect experimental data is always a difficult task. It is common, but sometimes dubious, to explain the data with a theoretical model containing tuning parameters. In this work, we propose that machine learning can infer the pairing mechanism from observables such as superconducting gap functions. For superconductivity within the Migdal–Eliashberg theory, we perform supervised learning between superconducting gap functions and electron–boson spectral functions. For simple spectral functions, the neural network easily captures the correspondence and predicts perfectly. For complex spectral functions, an autoencoder is used to reduce the complexity of the spectral functions so that it is compatible with that of the gap functions. After this complexity-reduction process, the relevant information of the spectral function is extracted and good performance is restored. Our proposed method extracts the relevant information from data and can be applied to general function-to-function mappings with asymmetric complexities, whether in physics or other fields.
Highlights
The mechanism of superconductivity has been one of the hottest topics for more than a century, since the first experimental evidence of superconductivity in mercury was observed by H. Kamerlingh Onnes
We propose that machine learning can infer the pairing mechanism from observables such as superconducting gap functions
For superconductivity within the Migdal–Eliashberg theory, we perform supervised learning between superconducting gap functions and electron–boson spectral functions
Summary
The mechanism of superconductivity has been one of the hottest topics for more than a century, since the first experimental evidence of superconductivity in mercury was observed by H. Kamerlingh Onnes. Our goal is to infer the electron–boson spectral function (EBSF) from the gap function using machine learning techniques, and this strategy may be extended to other mechanisms, including unconventional superconductivity. By using the AE-smoothed EBSFs as the new labels and the AE-transformed gap functions as the new inputs, the performance is greatly improved, which indicates that the new EBSFs preserve the essential information relevant to the new gap functions, so that a one-to-one correspondence holds between them. This approach provides a complexity-reduction process that can generally extract the key information in the data relevant to the input functions, and it can be used to improve supervised learning (SL) tasks in physics and other fields. The results are similar to those for the gap functions, so only results for the gap functions are presented unless otherwise specified