A recent comparison by Merritt of simulated and observed Milky Way–mass galaxies has identified a significant tension in the outskirts (r > 20 kpc) of the stellar halos of simulated versus observed galaxies. Using observations from the Dragonfly telescope and simulated galaxies from the Illustris-TNG100 project, Merritt found that the outskirts of stellar halos in simulated galaxies have surface densities 1–2 dex higher than those of observed galaxies. In this paper, we compare two suites of 15 simulated Milky Way–like galaxies, each drawn from the same initial conditions and simulated with the same hydrodynamical code, but with two different models for feedback from supernovae. We find that the McMaster Unbiased Galaxy Simulations (MUGS), which use an older “delayed-cooling” model for feedback, also produce too much stellar mass in the outskirts of the halo, with median surface densities well above observational constraints. The MUGS2 simulations, which instead use a new, physically motivated “superbubble” model for stellar feedback, have 1–2 dex lower outer stellar halo masses and surface densities. The MUGS2 simulations generally match both the median surface density profile and the scatter in surface density profiles seen in observed stellar halos. We conclude that there is no “missing outskirts” problem in cosmological simulations, provided that supernova feedback is modeled in a way that allows it to efficiently regulate star formation in the low-mass progenitor environments of stellar halo outskirts.