Abstract

Context. The proximity to the Eddington luminosity has been invoked as the cause of several observed effects in massive stars. Computationally, if the luminosity carried by radiation exceeds the local Eddington luminosity in the low-density envelopes of massive stars, it can lead to numerical difficulties that inhibit further computation of stellar models. This problem is exacerbated by the fact that very few massive stars are observed beyond the Humphreys-Davidson limit, the same region of the Hertzsprung-Russell diagram where the aforementioned numerical issues relating to the Eddington luminosity occur in stellar models.

Aims. One-dimensional stellar evolution codes have to use pragmatic solutions to evolve massive stars through this computationally difficult phase. In this work, we quantify the impact of these solutions on the evolutionary properties of massive stars.

Methods. We used the stellar evolution code MESA with commonly used input parameters for massive stellar models to compute the evolution of stars in the initial mass range of 10–110 M⊙ at one-tenth of solar metallicity.

Results. We find that numerical difficulties in stellar models with initial masses greater than or equal to 30 M⊙ cause these models to fail before the end of core helium burning. Recomputing these models with the same physical inputs but three different pragmatic solutions for treating the numerical instability, we find that the maximum radial expansion achieved by the stars can vary by up to 2000 R⊙, while the remnant mass can vary by up to 14 M⊙ between the sets. These differences can have implications for studies such as binary population synthesis.
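For reference, a minimal sketch of the quantity underlying the problem described above, using the standard local definition (the paper's own formulation may differ in detail): the local Eddington luminosity and the corresponding Eddington factor, written in terms of the local opacity κ and the mass M enclosed within the shell under consideration,

\[
L_{\mathrm{Edd}} = \frac{4\pi G M c}{\kappa}, \qquad
\Gamma_{\mathrm{Edd}} \equiv \frac{L_{\mathrm{rad}}}{L_{\mathrm{Edd}}} = \frac{\kappa\, L_{\mathrm{rad}}}{4\pi G M c}.
\]

The numerical difficulties arise in low-density envelope layers where \(\Gamma_{\mathrm{Edd}} \gtrsim 1\), i.e. where the radiative luminosity locally exceeds the Eddington luminosity.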
