Abstract

We discuss deep learning inference of the neutron star equation of state (EoS) using real observational data for neutron star masses and radii. We make a quantitative comparison between conventional polynomial regression and the neural network approach for the EoS parametrization. To incorporate observational uncertainties into our deep learning method, we augment the training data with noise fluctuations matching those uncertainties. The deduced EoSs can accommodate a weak first-order phase transition, and we construct a histogram of the likely first-order transition regions. We also find that our observational data augmentation has the byproduct of taming overfitting. To quantify the performance gain from data augmentation, we set up a toy model as the simplest inference problem, recovering a double-peaked function, and monitor the validation loss. We conclude that data augmentation can be a useful technique for evading overfitting without tuning the neural network architecture, e.g., by inserting dropout layers.
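The data-augmentation idea described above can be illustrated with a minimal sketch: each mass–radius data set is replicated many times with Gaussian noise added at the scale of the observational uncertainties, so the network sees the uncertainty directly in its training inputs. The star count, uncertainty values, and the `augment` helper below are illustrative assumptions, not the paper's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mock observation: 14 stars with mass [M_sun] and radius [km].
masses = rng.uniform(1.0, 2.0, size=14)
radii = rng.uniform(10.0, 14.0, size=14)

# Illustrative 1-sigma observational uncertainties (assumed values).
sigma_m, sigma_r = 0.1, 0.5

def augment(masses, radii, n_copies=100):
    """Replicate one M-R data set with Gaussian noise at the observational scale."""
    m_aug = masses + rng.normal(0.0, sigma_m, size=(n_copies, masses.size))
    r_aug = radii + rng.normal(0.0, sigma_r, size=(n_copies, radii.size))
    # One training input per noisy copy, shape (n_copies, n_stars, 2).
    return np.stack([m_aug, r_aug], axis=-1)

batch = augment(masses, radii)
```

All noisy copies share the same regression target (the underlying EoS parameters), which is what discourages the network from fitting the noise itself.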

Highlights

  • Towards a model-independent equation of state (EoS) determination, one should solve quantum chromodynamics (QCD), the first-principles theory of the strong interaction

  • The intermediate density region around 2–10 n0, which is relevant for the neutron star structure, still lacks trustworthy QCD predictions

  • Some of these neutron star quantities are connected through the universal relations that are insensitive to the EoS details


Summary

Supervised learning for the EoS inference problem

We explicitly define our problem and summarize the basic strategy of our approach with supervised machine learning. The concrete setup is explained in each subsection, where the strategy is adjusted to the goal of that subsection. In the present study we want to constrain the EoS from the stellar observables. The EoS and the observables are nontrivially linked by the Tolman–Oppenheimer–Volkoff (TOV) equation, which provides a means to compute the neutron star structure from an EoS input. Constraining the EoS from the observables is the inverse of solving the TOV equation, and this inverse problem is complicated by the nature of the observations.
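The forward direction of this mapping, from an EoS to a mass and radius, can be sketched as follows. This is a minimal TOV integrator assuming a simple Γ = 2 polytropic EoS in geometrized units (G = c = M_sun = 1), not the sound-speed parametrization used in the paper; `K`, `GAMMA`, and the central density below are standard textbook test values, chosen only for illustration.

```python
import numpy as np

# Geometrized units G = c = M_sun = 1; one length unit is about 1.477 km.
K, GAMMA = 100.0, 2.0  # toy polytropic EoS: P = K * rho**GAMMA (assumed)

def eos_rho(P):
    """Invert the polytrope to get rest-mass density from pressure."""
    return (max(P, 0.0) / K) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    """Right-hand side of the TOV structure equations for y = (P, m)."""
    P, m = y
    rho = eos_rho(P)
    eps = rho + P / (GAMMA - 1.0)  # total energy density for the polytrope
    dPdr = -(eps + P) * (m + 4 * np.pi * r**3 * P) / (r * (r - 2 * m))
    dmdr = 4 * np.pi * r**2 * eps
    return np.array([dPdr, dmdr])

def solve_tov(rho_c, dr=1e-3):
    """Integrate from the center outward until the pressure vanishes."""
    y = np.array([K * rho_c**GAMMA, 0.0])  # (central pressure, enclosed mass)
    r = dr  # start slightly off-center to avoid division by zero
    while y[0] > 1e-12:
        # One classical 4th-order Runge-Kutta step.
        k1 = tov_rhs(r, y)
        k2 = tov_rhs(r + dr / 2, y + dr / 2 * k1)
        k3 = tov_rhs(r + dr / 2, y + dr / 2 * k2)
        k4 = tov_rhs(r + dr, y + dr * k3)
        y = y + dr / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += dr
    return y[1], r * 1.477  # mass in M_sun, radius in km
```

Repeating this for a range of central densities traces out an M–R curve; the inference problem discussed in the text is the inverse map, from observed (M, R) points back to the EoS.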

TOV mapping between the EoS and the M -R relation
General strategy for the machine learning implementation
Random EoS generation with parametrization by the speed of sound
Neural network design: a general introduction
Loss function and training the neural network
Extensive methodology study and the performance tests
Mock data generation and training
Learning curves with the observational data augmentation
Typical examples — EoS reconstruction from the neural network
Error correlations
Conventional polynomial regression
Comparison between the neural network and the polynomial regression
EoS estimation from the real observational data
Compilation of the neutron star data
Training data generation with observational uncertainty
Two ways for uncertainty quantification
The most likely neutron star EoS
Possible EoSs with a weak first-order phase transition
More on the performance test: taming the overfitting
A toy model
Numerical tests
Findings
Summary and outlooks
