Abstract

We study machine learning of phenomenologically relevant properties of string compactifications, which arise in the context of heterotic line bundle models. Both supervised and unsupervised learning are considered. We find that, for a fixed compactification manifold, relatively small neural networks are capable of distinguishing consistent line bundle models with the correct gauge group and the correct chiral asymmetry from random models without these properties. The same distinction can also be achieved in the context of unsupervised learning, using an autoencoder. Learning nontopological properties, specifically the number of Higgs multiplets, turns out to be more difficult, but is possible using sizeable networks and feature-enhanced datasets.
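The unsupervised side of the study uses an autoencoder: models are compressed to a low-dimensional latent space and the reconstruction error separates consistent from random models. A minimal sketch of that idea, with all sizes hypothetical (25 input features from a flattened 5×5 matrix of line bundle charges, a 2-dimensional latent space, untrained random weights), not taken from the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 25 input features (a flattened 5x5 integer matrix
# of line bundle charges), compressed to a 2-dimensional latent space.
n_in, n_latent = 25, 2
W_enc = rng.normal(0, 0.1, (n_latent, n_in))
W_dec = rng.normal(0, 0.1, (n_in, n_latent))

def autoencode(x):
    z = np.tanh(W_enc @ x)   # latent representation of the model
    x_hat = W_dec @ z        # attempted reconstruction of the input
    return z, x_hat

# One toy "model": random integer charges in a small range.
x = rng.integers(-3, 4, n_in).astype(float)
z, x_hat = autoencode(x)
loss = np.mean((x - x_hat) ** 2)  # reconstruction error as a score
```

After training on consistent models only, one would expect low reconstruction error on similar models and higher error on random ones, which is how an autoencoder can achieve the distinction without labels.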

Highlights

  • Techniques from machine learning have recently been introduced into string theory [1,2,3,4,5] and have been explored for a variety of different machine learning architectures and string theory settings

  • We study machine learning of phenomenologically relevant properties of string compactifications, which arise in the context of heterotic line bundle models

  • For a fixed compactification manifold, relatively small neural networks are capable of distinguishing consistent line bundle models with the correct gauge group and the correct chiral asymmetry from random models without these properties


Summary

INTRODUCTION

Techniques from machine learning have recently been introduced into string theory [1,2,3,4,5] and have been explored for a variety of machine learning architectures and string theory settings (for reviews, see Refs. [6,7] and references therein). The main question we address in this paper is whether machine learning can distinguish string vacua that lead to phenomenologically attractive models from those that do not. We address this question in the context of heterotic line bundle models [11,12,13], a class of models that is conceptually relatively simple and for which sizeable sets of phenomenologically promising models are known. This means that training sets for machine learning can readily be constructed.
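As a rough illustration of the supervised setup, the sketch below encodes a line bundle model as a flat integer vector and passes it through a small two-layer network with a sigmoid output for the binary viable/non-viable decision. All specifics are assumptions for illustration: the 5×5 charge matrix (five line bundles on a manifold with five Kähler parameters), the hidden width, and the random weights stand in for the paper's actual dataset and trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the input data: a line bundle model given as a sum of
# 5 line bundles, each specified by 5 integer first Chern classes.
# (These sizes are illustrative, not the paper's dataset.)
def encode(charge_matrix):
    return charge_matrix.reshape(-1).astype(float)  # 25 input features

class TinyClassifier:
    """A 'relatively small' feedforward network: 25 -> 16 -> 1."""

    def __init__(self, n_in=25, n_hidden=16):
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, n_hidden)
        self.b2 = 0.0

    def forward(self, x):
        h = np.tanh(self.W1 @ x + self.b1)
        # Sigmoid output: probability that the model is viable.
        return 1.0 / (1.0 + np.exp(-(self.W2 @ h + self.b2)))

net = TinyClassifier()
x = encode(rng.integers(-3, 4, (5, 5)))
p = net.forward(x)  # untrained, so this is just a well-formed probability
```

Training such a network on labelled examples (consistent models with the correct gauge group and chiral asymmetry versus random ones) is the supervised task described above; in practice one would use a standard framework rather than hand-rolled numpy.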

HETEROTIC LINE BUNDLE MODELS
DATASETS
LEARNING STANDARD MODELS WITH SUPERVISED LEARNING
AUTOENCODING STANDARD MODELS
LEARNING ABOUT HIGGS MULTIPLETS
CONCLUSION

