Surrogate models (also referred to as metamodels) are recognized as powerful, data-driven, predictive tools for the approximation (emulation) of storm surge. For this application, they are developed using synthetic storm simulations, for example by leveraging databases created during regional flood studies. Once calibrated, they can efficiently emulate the expected storm surge for new storms. These surge predictions pertain to different locations (nodes) of interest and, for this study, to time-series estimation. The metamodel approximation in this setting needs to capture both the spatio-temporal variability and the variability across the storm features (i.e., the storm parametric description). Two different approaches are examined to address this multi-faceted variability using Gaussian processes as the emulation technique. The first approach addresses the spatio-temporal variability through the metamodel output, leveraging the fact that predictions always need to be established for the same nodes and time-series instances. This allows the storm surge for each time instance and node to be considered as a separate output, leaving the storm features as the only component determining the surrogate model input. To improve the computational efficiency of the metamodel calibration and predictions, principal component analysis is adopted to address the high dimensionality of the output. The second approach considers all three aforementioned sources of variability as part of the metamodel input, establishing a surrogate model that simultaneously predicts the storm surge (scalar output) across space, time, and storm features. The number of available data points across these three inputs becomes extremely high in this case (the product of the number of nodes, time instances, and storms), and to maintain computational tractability, a separable covariance function is considered for the Gaussian process. Computational efficiency and accuracy are compared across the two approaches utilizing the Coastal Hazards System–North Atlantic (CHS-NA) database. These comparisons show that both approaches perform well, with an average root mean squared error of around 5 cm and an average correlation coefficient of over 96% across the majority of the geographic domain. The computational efficiency for establishing predictions is also similar (less than a second per storm), while for the calibration, the computational demand of the approach that addresses spatio-temporal variability through the metamodel input is higher, unless an approximate formulation is adopted. Additionally, the use of distance to landfall (instead of time to landfall) is examined for describing the time-series variability of the surge. It is shown that this distance-based definition can offer a better correlation of the response at specific nodes across storms with different forward speeds but similar remaining features (size, intensity, and track), something that can ultimately improve the accuracy of a surrogate model calibrated using such a database description.
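A minimal sketch may help fix ideas for the first approach: the flattened surge output (nodes × time instances) is compressed with principal component analysis, and one Gaussian process over the storm features is fitted per retained component. All sizes, data, and kernel choices below are illustrative assumptions; this is not the study's implementation, and the arrays are random placeholders rather than CHS-NA records.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
n_storms, n_features = 200, 5       # storms and their parametric description
n_nodes, n_times = 50, 30           # output grid: save nodes x time instances

# Placeholder training data; a real study would use a simulated surge database.
X = rng.uniform(size=(n_storms, n_features))        # storm features (GP input)
Y = rng.normal(size=(n_storms, n_nodes * n_times))  # flattened surge output

# Compress the high-dimensional output: each storm becomes a few PC scores.
# (10 components here for speed; in practice one would retain enough
# components to explain, e.g., 99% of the output variance.)
pca = PCA(n_components=10)
Z = pca.fit_transform(Y)

# One independent GP over the storm features per retained component.
gps = [
    GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=np.ones(n_features)),
        normalize_y=True,
    ).fit(X, Z[:, j])
    for j in range(Z.shape[1])
]

def predict_surge(x_new):
    """Emulate the surge time series at every node for a new storm x_new."""
    z_hat = np.column_stack([gp.predict(np.atleast_2d(x_new)) for gp in gps])
    return pca.inverse_transform(z_hat).reshape(n_nodes, n_times)

surge = predict_surge(rng.uniform(size=n_features))  # (n_nodes, n_times)
```

Treating each node and time instance as a separate output is what makes this compression possible: the Gaussian processes see only the low-dimensional component scores, and the full surge field is recovered by the inverse transform.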
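The tractability of the second approach rests on the separable covariance: when the training points lie on a full storm × node × time grid and the kernel factorizes per input block, the covariance matrix becomes a Kronecker product, so the usual O(N³) Gaussian-process solve reduces to eigendecompositions of the small per-factor matrices. The sketch below illustrates that identity; the RBF kernel forms, hyperparameters, and synthetic sizes are assumptions for illustration, not details from the study.

```python
import numpy as np

def rbf(a, b, ell):
    """Squared-exponential kernel matrix between stacked inputs."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(0)
n_s, n_x, n_t = 20, 15, 25                  # storms, nodes, time instances
S = rng.uniform(size=(n_s, 3))              # storm features
Xn = rng.uniform(size=(n_x, 2))             # node coordinates
T = np.linspace(0.0, 1.0, n_t)[:, None]     # time (or distance) to landfall

# Separable kernel: one small covariance factor per input block.
Ks, Kx, Kt = rbf(S, S, 0.5), rbf(Xn, Xn, 0.3), rbf(T, T, 0.2)
y = rng.normal(size=n_s * n_x * n_t)        # surge over the full grid
noise = 1e-2                                # assumed nugget / noise variance

# Eigendecompose each factor instead of the full N x N covariance,
# N = n_s * n_x * n_t (7500 here; prohibitive to form at realistic scales).
eigs, vecs = zip(*(np.linalg.eigh(K) for K in (Ks, Kx, Kt)))
lam = np.einsum('i,j,k->ijk', *eigs).ravel()  # eigenvalues of Ks x Kx x Kt

def kron_mv(mats, v):
    """Multiply (A kron B kron C) @ v without forming the Kronecker product."""
    for M in mats:
        v = (M @ v.reshape(M.shape[1], -1)).T.ravel()
    return v

# alpha = (K + noise * I)^{-1} y, solved in the Kronecker eigenbasis.
alpha = kron_mv([V.T for V in vecs], y)
alpha /= lam + noise
alpha = kron_mv(list(vecs), alpha)
```

Even at these toy sizes, forming the full covariance would require N² ≈ 5.6 × 10⁷ entries; the factorized solve touches only three matrices of at most 25 × 25, which is the essence of the tractability argument.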
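The distance-to-landfall idea can likewise be illustrated with a toy re-indexing: two synthetic storms that differ only in forward speed produce time series that are imperfectly correlated on the time axis but align almost exactly once interpolated onto a shared distance-to-landfall grid. The constant forward speed and Gaussian-shaped response below are simplifying assumptions for illustration only.

```python
import numpy as np

def to_distance_grid(t_hours, surge, forward_speed, d_grid):
    """Interpolate a surge series from time to landfall onto a common
    distance-to-landfall grid, assuming a constant forward speed (a real
    track would integrate the speed along the track instead)."""
    d = forward_speed * t_hours            # signed along-track distance (km)
    return np.interp(d_grid, d, surge)

d_grid = np.linspace(-150.0, 150.0, 61)    # shared distance grid (km)
t = np.linspace(-12.0, 12.0, 49)           # hours relative to landfall

# Two synthetic storms, identical except for forward speed (30 vs 15 km/h):
fast = to_distance_grid(t, np.exp(-(0.30 * t) ** 2), 30.0, d_grid)
slow = to_distance_grid(t, np.exp(-(0.15 * t) ** 2), 15.0, d_grid)

time_corr = np.corrcoef(np.exp(-(0.30 * t) ** 2), np.exp(-(0.15 * t) ** 2))[0, 1]
dist_corr = np.corrcoef(fast, slow)[0, 1]
print(f"time-axis corr {time_corr:.3f} vs distance-axis corr {dist_corr:.3f}")
```

On the shared distance axis the two responses nearly coincide, mirroring the improved cross-storm correlation that motivates calibrating the surrogate on a distance-based database description.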