Abstract

Interactions between galaxies leave distinguishable imprints in the form of tidal features, which hold important clues about their mass assembly. Unfortunately, these structures are difficult to detect because of their low surface brightness, so deep observations are needed. Upcoming surveys promise several orders of magnitude increase in depth and sky coverage, for which automated methods for tidal feature detection will become mandatory. We test the ability of a convolutional neural network to reproduce human visual classifications of tidal detections, using ∼6000 simulated images classified by professional astronomers as the training set. The mock Subaru Hyper Suprime-Cam (HSC) images include variations in redshift, projection angle, and limiting surface brightness (μ_lim = 26–35 mag arcsec⁻²). We obtain satisfactory results, with accuracy, precision, and recall values of Acc = 0.84, P = 0.72, and R = 0.85 for the test sample. While accuracy and precision are roughly constant across surface brightness levels, the recall (completeness) depends significantly on image depth. The recovery rate also depends strongly on the type of tidal feature: we recover all images showing shell features and 87 per cent of those showing tidal streams, but these fractions fall below 75 per cent for mergers, tidal tails, and bridges. When applied to real HSC images, the performance of the model worsens significantly. We speculate that this is due to the limited realism of the simulations, and take it as a warning against applying deep learning models to a different data domain without prior testing on the actual data.
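
For reference, the quoted Acc, P, and R values follow the standard definitions from a binary confusion matrix, with recall playing the role of completeness as noted in the abstract. Below is a minimal sketch of these definitions in plain NumPy; it is illustrative only and not the authors' code, and the helper name binary_metrics and its toy labels are hypothetical.

```python
# Illustrative sketch: how accuracy, precision, and recall relate to a
# binary confusion matrix (TP, FP, TN, FN). Labels: 1 = tidal feature.
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)    # detections that are real features
    fp = np.sum(y_pred & ~y_true)   # false alarms
    tn = np.sum(~y_pred & ~y_true)  # correctly rejected images
    fn = np.sum(~y_pred & y_true)   # missed features
    acc = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)      # purity of the detections
    recall = tp / (tp + fn)         # completeness of the detections
    return acc, precision, recall

# Toy example with made-up labels, not data from the paper:
acc, p, r = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(f"Acc = {acc:.2f}, P = {p:.2f}, R = {r:.2f}")
```

Because precision and recall condition only on the positive class, a depth-dependent drop in recall with roughly constant precision, as reported here, indicates the model misses faint features at shallow depths rather than producing more false alarms.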
