Abstract

We discuss the statistical foundations of morphological star–galaxy separation. We show that many of the star–galaxy separation metrics in common use today (e.g., by the Sloan Digital Sky Survey or SExtractor) are closely related both to each other and to the model odds ratio derived in a Bayesian framework by Sebok. While the scaling of these algorithms with the noise properties of the sources varies, these differences do not strongly differentiate their performance. We construct a model of the performance of a star–galaxy separator in a realistic survey to understand the impact of observational signal-to-noise ratio (S/N) (or, equivalently, 5σ limiting depth) and seeing on classification performance. The model quantitatively demonstrates that, assuming realistic densities and angular sizes of stars and galaxies, 10% worse seeing can be compensated for by approximately 0.4 mag deeper data to achieve the same star–galaxy classification performance. We discuss how to probabilistically combine multiple measurements, whether of the same type (e.g., subsequent exposures), of differing types (e.g., multiple bandpasses), or of differing methodologies (e.g., morphological and color-based classification). These methods are increasingly important for observations at faint magnitudes, where the rapidly rising number density of small galaxies makes star–galaxy classification a challenging problem. However, because of the significant role that the S/N plays in resolving small galaxies, surveys with large-aperture telescopes, such as LSST, will continue to see improving star–galaxy separation as they push to these fainter magnitudes.
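The probabilistic combination of measurements described above can be illustrated with a short sketch. Under the assumption that the measurements are conditionally independent given the true class, the combined posterior odds are the prior odds multiplied by the per-measurement likelihood ratios; the function name and interface below are illustrative, not from the paper itself:

```python
import math

def combine_star_probability(likelihood_ratios, prior_star=0.5):
    """Combine independent measurements into one posterior star probability.

    Each entry of `likelihood_ratios` is P(data_i | star) / P(data_i | galaxy)
    for one measurement: a repeated exposure, a different bandpass, or a
    color-based classifier alongside a morphological one. Assuming conditional
    independence, the posterior odds are the prior odds times the product of
    the likelihood ratios.
    """
    # Accumulate in log space so a long series of exposures cannot overflow.
    log_odds = math.log(prior_star / (1.0 - prior_star))
    log_odds += sum(math.log(lr) for lr in likelihood_ratios)
    # Convert log odds back to a probability via the logistic function.
    return 1.0 / (1.0 + math.exp(-log_odds))
```

For example, two exposures that each favor "star" by a factor of 4, with an even prior, combine to a posterior of 16/17 ≈ 0.94, while a single uninformative measurement (ratio 1) leaves the prior unchanged.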
