Abstract

With the advent of integral field units (IFUs), surveys can now measure metallicities across the discs of nearby galaxies at scales ≲100 pc. At such small scales, many of these regions contain too few stars to fully sample all possible stellar masses and evolutionary states, leading to stochastic fluctuations in the ionizing continuum. The impact of these fluctuations on the line diagnostics used to infer galaxy metallicities is poorly understood. In this paper, we quantify this impact for six of the most commonly used diagnostics. We generate stochastic stellar populations for galaxy patches with star formation rates varying over a factor of 1000, compute the nebular emission that results when these stars ionize gas at a wide range of densities and metallicities, and determine how much inferred metallicities vary with fluctuations in the driving stellar spectrum. We find that metallicities derived from diagnostics that measure multiple ionization states of their target elements (e.g. electron temperature methods) are weakly affected (variation <0.1 dex), but that larger fluctuations (∼0.4 dex) occur for diagnostics that depend on a single ionization state. Scatter in the inferred metallicity is generally largest at low star formation rate and metallicity, and is larger for more sensitive observations than for shallower ones. The main cause of the fluctuations is stochastic variation in the ionization state of the nebula in response to the absence of Wolf–Rayet stars, which dominate the production of ≳2−3 Ryd photons. Our results quantify the trade-off between line brightness and diagnostic accuracy, and can be used to optimise observing strategies for future IFU campaigns.
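The core stochastic effect described above can be illustrated with a minimal sketch: small star-forming patches drawn from the same initial mass function (IMF) show much larger realization-to-realization scatter in their ionizing output than large patches, because the rare massive stars that dominate the ionizing continuum are undersampled. The IMF slope, mass limits, massive-star threshold, and power-law ionizing-rate scaling below are illustrative assumptions, not the stellar models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_imf(m_total, alpha=2.35, m_min=0.1, m_max=120.0):
    """Draw stellar masses from a toy Salpeter-like power-law IMF
    (slope alpha is an assumption) until the mass budget is spent."""
    masses = []
    total = 0.0
    # Inverse-transform sampling for p(m) ~ m^-alpha.
    u_pow = 1.0 - alpha
    a, b = m_min**u_pow, m_max**u_pow
    while total < m_total:
        u = rng.random()
        m = (a + u * (b - a)) ** (1.0 / u_pow)
        masses.append(m)
        total += m
    return np.array(masses)

def ionizing_rate(masses):
    """Toy ionizing-photon rate: only massive stars contribute,
    with a steep mass dependence (illustrative, not a stellar model)."""
    massive = masses[masses > 20.0]
    return np.sum(massive ** 4.0)

# Scatter in ionizing output across many realizations, for a small
# patch (100 Msun formed) versus a large one (1e5 Msun formed).
for m_total in (1e2, 1e5):
    q = np.array([ionizing_rate(sample_imf(m_total)) for _ in range(200)])
    # Dex scatter ~ std of log10 (floor guards against zero-Q draws).
    logq = np.log10(np.maximum(q, 1.0))
    print(f"M_total = {m_total:.0e} Msun: scatter = {logq.std():.2f} dex")
```

Running this shows the small patch fluctuating by orders of magnitude (many realizations contain no massive star at all), while the large patch converges toward a well-sampled, nearly deterministic ionizing luminosity, which is the regime in which the diagnostics behave classically.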
