Abstract
In recent years, deep learning has solved seemingly intractable problems, raising hopes of finding (approximate) solutions to problems currently considered unsolvable. Earthquake prediction - a recognized moonshot challenge - is an obvious candidate for exploration with deep learning. Although encouraging results have been obtained recently, deep neural networks (DNNs) may sometimes create the illusion that the patterns hidden in data are complex when this is not necessarily the case. We investigate the results of DeVries et al. [Nature, vol. 560, 2018], who defined a DNN of 6 hidden layers with 50 nodes each, with an input layer of 12 stress features, to predict aftershock patterns in space. The performance of their DNN was assessed with ROC analysis, yielding AUC = 0.85. We first show that a simple artificial neural network (ANN) with one hidden layer achieves similar performance, suggesting that aftershock patterns are not necessarily highly abstract objects. Guided by first principles, we then bypass the elastic stress change tensor computation altogether, taking advantage of the tensorial nature of neural networks. AUC = 0.85 is again reached with an ANN, now using only two geometric and kinematic features. Not only does deep learning appear "excessive" in the present case; the simpler ANN also streamlines aftershock forecasting, limits model bias, and provides better insight into aftershock physics and possible model improvements. Complexification is a controversial trend across all of science, and first principles should be applied wherever possible to gain physical interpretations of neural networks.
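The architecture comparison described above can be sketched as follows. This is a minimal illustration, not the authors' code: the data here are synthetic stand-ins (a simple distance-decay label on two hypothetical geometric/kinematic features, padded with noise columns to mimic a larger stress-feature input), so the AUC values it prints are illustrative only, not the paper's AUC = 0.85.

```python
# Minimal sketch (NOT the authors' pipeline): contrast a deep network
# of the shape described in the abstract (6 hidden layers x 50 nodes,
# 12 input features) with a one-hidden-layer ANN on only 2 features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Two synthetic "geometric/kinematic" features; the binary label
# (aftershock / no aftershock stand-in) decays with the first feature.
X2 = rng.uniform(0, 1, size=(n, 2))
p = 1.0 / (1.0 + np.exp(8.0 * (X2[:, 0] - 0.4)))
y = (rng.uniform(size=n) < p).astype(int)

# Pad with 10 noise columns to mimic a 12-feature stress-based input.
X12 = np.hstack([X2, rng.normal(size=(n, 10))])

# Deep network: 6 hidden layers of 50 nodes, trained on 12 features.
deep = MLPClassifier(hidden_layer_sizes=(50,) * 6, max_iter=2000,
                     random_state=0).fit(X12[:1500], y[:1500])
# Shallow network: a single small hidden layer, trained on 2 features.
shallow = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X2[:1500], y[:1500])

auc_deep = roc_auc_score(y[1500:], deep.predict_proba(X12[1500:])[:, 1])
auc_shallow = roc_auc_score(y[1500:], shallow.predict_proba(X2[1500:])[:, 1])
print(f"deep AUC = {auc_deep:.2f}, shallow AUC = {auc_shallow:.2f}")
```

On data of this kind, the shallow two-feature model matches the deep model's discrimination, which is the qualitative point of the abstract: depth is not required when the underlying pattern is simple.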