Abstract

In the upcoming decades, large facilities, such as the Square Kilometre Array (SKA), will provide resolved observations of the kinematics of millions of galaxies. In order to assist in the timely exploitation of these vast data sets, we explore the use of a self-supervised, physics-aware neural network capable of Bayesian kinematic modelling of galaxies. We demonstrate the network’s ability to model the kinematics of cold gas in galaxies with an emphasis on recovering physical parameters and accompanying modelling errors. The model is able to recover rotation curves, inclinations and disc scale lengths for both CO and H i data which match well with those found in the literature. The model is also able to provide modelling errors over learned parameters, thanks to the application of quasi-Bayesian Monte Carlo dropout. This work shows the promising use of machine learning, and in particular, self-supervised neural networks, in the context of kinematically modelling galaxies. This work represents the first steps in applying such models for kinematic fitting, and we propose that variants of our model would seem especially suitable for enabling emission-line science from upcoming surveys with e.g. the SKA, allowing fast exploitation of these large data sets.
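The Monte Carlo dropout idea mentioned above can be sketched in a few lines: dropout is left active at test time, and repeated stochastic forward passes yield a distribution over predictions whose spread serves as a modelling error. The toy single-layer "network" below is purely illustrative (the weights `W`, input `x` and dropout rate are hypothetical, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy layer: `W` stands in for trained network weights.
W = rng.normal(size=8)
x = rng.normal(size=8)
p_drop = 0.2                                 # dropout probability

def mc_forward(x):
    """One stochastic forward pass with dropout left ON at test time."""
    keep = rng.random(x.size) > p_drop                  # Bernoulli keep-mask
    return float((x * keep / (1.0 - p_drop)) @ W)       # inverted-dropout scaling

# Many stochastic passes approximate a posterior predictive distribution:
samples = np.array([mc_forward(x) for _ in range(1000)])
pred, err = samples.mean(), samples.std()    # prediction and its modelling error
```

With inverted-dropout scaling, the sample mean approximates the deterministic full-network output while the sample standard deviation quantifies the uncertainty, which is the quasi-Bayesian interpretation used in the text.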

Highlights

  • In studying galaxy evolution, astronomers often use the atomic hydrogen (H I) 21-cm line to trace the outermost regions of galactic discs (e.g. Warren, Jerjen & Koribalski 2004; Begum, Chengalur & Karachentsev 2005; Sancisi et al. 2008; Heald et al. 2011; Koribalski et al. 2018)

  • In order to assist in the timely exploitation of these vast data sets, we explore the use of a self-supervised, physics-aware neural network capable of Bayesian kinematic modelling of galaxies

  • This work represents the first steps in applying such models for kinematic fitting and we propose that variants of our model would seem especially suitable for enabling emission-line science from upcoming surveys with e.g. the Square Kilometre Array (SKA), allowing fast exploitation of these large data sets


Summary

INTRODUCTION

Astronomers often use the atomic hydrogen (H I) 21-cm line to trace the outermost regions of galactic discs (e.g. Warren, Jerjen & Koribalski 2004; Begum, Chengalur & Karachentsev 2005; Sancisi et al. 2008; Heald et al. 2011; Koribalski et al. 2018). While machine learning (ML) methods have seen widespread success (e.g. Breiman 2001; Krizhevsky, Sutskever & Hinton 2012), they are often highlighted for their slow training times (Lim, Loh & Shih 2000) and, in some cases, reluctance to generalize to unseen data sets (Dinh et al. 2017; Kawaguchi, Pack Kaelbling & Bengio 2017). These qualities are unsuitable for the survey tasks proposed for the SKA, and we are required to look at alternative methods that retain the benefits of ML without the drawbacks associated with standard ML practice. Such an approach may exist in the form of self-supervised learning (Liu et al. 2020), whereby models train themselves without the need for an isolated training set.
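The self-supervised principle above can be illustrated with a deliberately simplified one-parameter sketch: the model's free parameter is adjusted to reconstruct the observation itself, so no separate labelled training set is involved. The arctan rotation-curve form, the synthetic data, and the single fitted amplitude here are all illustrative assumptions, far simpler than the paper's encoder-decoder network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observation": a noisy arctan rotation curve (a common
# parametrisation; the paper's model and data are far richer).
r = np.linspace(0.1, 10.0, 50)                        # sampled radii
f = (2.0 / np.pi) * np.arctan(r / 2.0)                # fixed curve shape
v_obs = 200.0 * f + rng.normal(scale=5.0, size=r.size)

# Self-supervised fit: minimise the reconstruction error between the
# model and the observation itself -- no external training labels.
v_max = 50.0                                          # initial guess (km/s)
lr = 0.2                                              # gradient-descent step
for _ in range(500):
    resid = v_max * f - v_obs                         # reconstruction residual
    v_max -= lr * np.mean(2.0 * resid * f)            # MSE gradient step

# v_max should now sit close to the generating value of 200 km/s.
```

The same logic scales up to the network case: the loss compares the model's reconstruction of each data cube with the cube itself, so every new observation effectively supplies its own training signal.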

Input data
Model aim
The encoder subnets
The decoder subnet
Model training procedure
Model testing procedure
Monte Carlo dropout
Input–output
The effect of resolution
Fill factor
H I examples
CO examples
Testing speed
Caveats
CONCLUSIONS
