Gravitational waves from the coalescences of black holes and neutron stars afford us the unique opportunity to determine the sources' properties, such as their masses and spins, with unprecedented accuracy. To do so, however, theoretical models of the emitted signal that are i) extremely accurate and ii) computationally highly efficient are necessary. The inclusion of more detailed physics, such as higher-order multipoles and relativistic spin-induced orbital precession, increases the complexity and hence the computational cost of waveform models, which presents a severe bottleneck for the parameter inference problem. A popular method to generate waveforms more efficiently is to build a fast surrogate model of a slower one. In this paper, we show that traditional surrogate modelling methods combined with artificial neural networks can be used to build a computationally efficient yet still accurate emulation of multipolar time-domain waveform models of precessing binary black holes. We apply this method to the state-of-the-art waveform model SEOBNRv4PHM and find significant computational improvements: on a traditional CPU, the typical generation of a single waveform using our neural network surrogate SEOBNN_v4PHM_4dq2 takes 18 ms for a binary black hole with a total mass of $44 M_{\odot}$ when generated from 20 Hz. Compared to SEOBNRv4PHM itself, this amounts to an improvement in computational efficiency of two orders of magnitude. Utilising additional GPU acceleration, we find that this speed-up can be increased further by generating batches of waveforms simultaneously. Even without GPU acceleration, this dramatic decrease in waveform generation cost can reduce the inference timescale from weeks to hours.
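To illustrate the batched-evaluation speed-up mentioned above, the following is a minimal sketch, not the actual SEOBNN_v4PHM_4dq2 architecture: a toy dense network standing in for a trained surrogate that maps binary parameters to reduced-basis waveform coefficients. All layer sizes, parameter counts, and names here are illustrative assumptions; the point is only that one vectorised call over a batch of parameter sets amortises the per-waveform cost relative to a loop of single evaluations.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained surrogate: two dense layers mapping
# 4 intrinsic parameters -> 256 reduced-basis coefficients. The real
# surrogate model is far more elaborate; the sizes are illustrative only.
W1 = rng.standard_normal((4, 128))
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 256))
b2 = np.zeros(256)

def surrogate(params):
    """Evaluate the toy surrogate on a (batch, 4) parameter array."""
    h = np.tanh(params @ W1 + b1)   # hidden layer
    return h @ W2 + b2              # reduced-basis coefficients

# Compare one-waveform-at-a-time evaluation with a single batched call.
params = rng.uniform(size=(1024, 4))

t0 = time.perf_counter()
single = np.stack([surrogate(p[None, :])[0] for p in params])
t_single = time.perf_counter() - t0

t0 = time.perf_counter()
batched = surrogate(params)
t_batched = time.perf_counter() - t0

print(f"looped: {t_single:.4f} s, batched: {t_batched:.4f} s")
```

The batched call produces identical outputs to the loop but replaces 1024 small matrix products with one large one, which is the same amortisation that GPU batch inference exploits at a much larger scale.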