We used a one-zone chemical evolution model to address the question of how many masses and metallicities are required in grids of massive stellar models to ensure reliable galactic chemical evolution predictions. We adopted a set of yields that includes seven masses between 13 and 30 Msun, 15 metallicities between 0 and 0.03 in mass fraction, and two different remnant mass prescriptions. We ran several simulations in which we sampled subsets of these stellar models to explore the impact of different grid resolutions. Stellar yields from low- and intermediate-mass stars and from Type Ia supernovae were also included in our simulations, but at a fixed grid resolution. We compared our results with the stellar abundances observed in the Milky Way for O, Na, Mg, Si, Ca, Ti, and Mn. Our results suggest that the range of metallicity considered is more important than the number of metallicities within that range, which affects our numerical predictions by only about 0.1 dex. We found that our predictions at [Fe/H] < -2 are very sensitive to the metallicity range and to the mass sampling used at the lowest metallicity included in the set of yields. Variations between results can reach 0.8 dex, regardless of the remnant mass prescription. At higher [Fe/H], we found that the required number of masses depends on the element of interest and on the remnant mass prescription. With a monotonic remnant mass prescription in which every model explodes as a core-collapse supernova, the mass resolution induces variations of 0.2 dex on average. But with a remnant mass prescription that includes islands of non-explodability, the mass resolution can cause variations of about 0.2 to 0.7 dex, depending on the choice of metallicity range. With such a prescription, explosive or non-explosive models can be missed when too few masses are sampled, leading to over- or underestimates of the mass ejected by massive stars.
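To make the last point concrete, the sketch below (a minimal illustration, not the actual code used in this work) shows how a coarse mass grid can skip a non-explodable mass window and thereby overestimate the IMF-weighted ejecta. It assumes a Salpeter IMF and linear yield interpolation in mass; all yield values and the location of the non-explodable island are invented for illustration and are not taken from the yield set used here.

```python
# A minimal sketch (not the paper's actual pipeline) of how coarse mass
# sampling can miss an island of non-explodability. Yields are tabulated on
# a 7-mass grid at a single metallicity; two of the models fail to explode
# and eject nothing. A 3-mass subsample skips the island, interpolates
# across it, and overestimates the IMF-weighted ejecta. All yield values
# below are invented.
import numpy as np

def trapz(f, x):
    """Trapezoidal integral of f sampled at points x."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def imf(m, alpha=2.35):
    """Salpeter IMF (unnormalized): dN/dm proportional to m**(-alpha)."""
    return m ** (-alpha)

def imf_weighted_ejecta(grid_m, grid_y, m_lo=13.0, m_hi=30.0, n=2000):
    """IMF-averaged ejected mass per star, with yields linearly
    interpolated in mass between the tabulated grid points."""
    m = np.linspace(m_lo, m_hi, n)
    y = np.interp(m, grid_m, grid_y)
    w = imf(m)
    return trapz(y * w, m) / trapz(w, m)

# Hypothetical oxygen yields (Msun) at one metallicity; the 22 and 25 Msun
# models collapse directly to a black hole (zero ejecta).
m_full = np.array([13., 15., 18., 20., 22., 25., 30.])
y_full = np.array([0.5, 0.8, 1.4, 1.9, 0.0, 0.0, 4.8])

# Coarse subsample (13, 20, 30 Msun) that misses the non-explodable island.
idx = [0, 3, 6]
m_coarse, y_coarse = m_full[idx], y_full[idx]

full = imf_weighted_ejecta(m_full, y_full)
coarse = imf_weighted_ejecta(m_coarse, y_coarse)
print(f"full grid:   {full:.3f} Msun per star")
print(f"coarse grid: {coarse:.3f} Msun per star")
print(f"offset:      {np.log10(coarse / full):+.2f} dex")
```

In this toy setup the coarse grid interpolates yields straight across the 22 to 25 Msun window where the full grid ejects nothing, so its IMF-weighted ejecta come out several tenths of a dex too high, the same kind of over- or underestimate described above. Which direction the bias goes depends on whether the skipped models were explosive or non-explosive.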