The joint application of a metaheuristic algorithm and a Bayesian statistics approach for uncertainty and stability assessment of nonlinear magnetotelluric data

Abstract. In this paper, we have developed three algorithms in MATLAB to interpret one-dimensional magnetotelluric (MT) data: a hybrid of weighted particle swarm optimization (wPSO) and the gravitational search algorithm (GSA), known as wPSOGSA; GSA alone; and PSO alone. We apply them to corrupted and non-corrupted synthetic data, as well as to two examples of MT field data over different geological terrains: (i) a geothermally rich area on the island of Milos, Greece, and (ii) southern Scotland, where a significantly high electrical-conductivity anomaly occurs beneath the crust and upper mantle, extending from the Midland Valley across the Southern Uplands into northern England. Even though many models provide a good fit within a large predefined search space, specific models do not fit well. We therefore used a Bayesian statistical technique to construct and assess the posterior probability density function (PDF) rather than picking the global model with the lowest misfit error. The study uses a 68.27 % confidence interval to select the region where the PDF is most concentrated and to estimate a mean model that is more accurate and closer to the true model. For illustration, correlation matrices show a significant relationship among layer parameters. The findings indicate that wPSOGSA is less sensitive to model parameters and produces more stable and reliable results with the least model uncertainty, compatible with existing borehole samples. Furthermore, the present methods resolve two additional geologically significant layers over the island of Milos, Greece: one highly conductive (less than 1.0 Ωm) and one resistive (300.0 Ωm), characterized by alluvium and volcanic deposits, respectively, as corroborated by borehole stratigraphy.
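To make the hybrid update concrete, the sketch below shows a wPSOGSA-style velocity rule in which each agent is accelerated both by a GSA gravitational term and by the PSO social pull toward the global best. The authors' implementation is in MATLAB; this is a minimal Python sketch on a toy quadratic misfit, and all constants (w, c1, c2, G0), the decay rate, and the misfit function are illustrative assumptions, not their settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def misfit(x):
    # Toy stand-in for the 1-D MT forward-modelling misfit (minimum at 3).
    return np.sum((x - 3.0) ** 2, axis=-1)

n_agents, n_dim, n_iter = 30, 2, 200
x = rng.uniform(-10.0, 10.0, (n_agents, n_dim))
v = np.zeros_like(x)
w, c1, c2, G0 = 0.7, 1.5, 1.5, 100.0           # assumed hyperparameters
gbest = x[np.argmin(misfit(x))].copy()

for t in range(n_iter):
    f = misfit(x)
    if f.min() < misfit(gbest):
        gbest = x[np.argmin(f)].copy()         # track the best model so far
    G = G0 * np.exp(-20.0 * t / n_iter)        # decaying gravitational constant
    m = (f.max() - f) / (f.max() - f.min() + 1e-12)
    M = m / (m.sum() + 1e-12)                  # normalized agent "masses"
    acc = np.zeros_like(x)
    for i in range(n_agents):
        diff = x - x[i]
        dist = np.linalg.norm(diff, axis=1)[:, None] + 1e-12
        acc[i] = np.sum(G * M[:, None] * diff / dist, axis=0)
    # Hybrid rule: inertia + GSA acceleration + PSO pull toward the global best.
    v = (w * v
         + c1 * rng.random((n_agents, 1)) * acc
         + c2 * rng.random((n_agents, 1)) * (gbest - x))
    x = x + v

print("best model found:", gbest, "misfit:", misfit(gbest))
```

In the paper's Bayesian step, the ensemble of models visited by such a run would then be used to build the posterior PDF rather than keeping only the single lowest-misfit model.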

Review article: Dynamical systems, algebraic topology and the climate sciences

Abstract. The definition of climate itself cannot be given without a proper understanding of the key ideas of the long-term behavior of a system, as provided by dynamical systems theory. Hence, it is not surprising that concepts and methods of this theory began percolating into the climate sciences as early as the 1960s. The major increase in public awareness of the socio-economic threats and opportunities of climate change has led more recently to two major developments in the climate sciences: (i) the Intergovernmental Panel on Climate Change's successive Assessment Reports and (ii) an increasing understanding of the interplay between natural climate variability and anthropogenically driven climate change. Both of these developments have benefited from remarkable technological advances in computing resources, regarding throughput as well as storage, and in observational capabilities, regarding both platforms and instruments. Starting with the early contributions of nonlinear dynamics to the climate sciences, we review here the more recent contributions of (a) the theory of non-autonomous and random dynamical systems to an understanding of the interplay between natural variability and anthropogenic climate change and (b) the role of algebraic topology in shedding additional light on this interplay. The review is thus a trip leading from the applications of classical bifurcation theory to multiple possible climates to the tipping points associated with transitions from one type of climatic behavior to another in the presence of time-dependent forcing, deterministic as well as stochastic.
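A tipping point under time-dependent forcing can be illustrated with a toy non-autonomous system (an assumption of this note, not an example from the review itself): a double-well model dx/dt = x - x^3 + F(t), whose lower equilibrium branch disappears in a saddle-node bifurcation once the slowly drifting forcing F(t) crosses the fold at F = 2/(3*sqrt(3)).

```python
import numpy as np

# Toy bifurcation-induced tipping: integrate dx/dt = x - x**3 + F(t)
# with a slow deterministic drift in F. The state tracks the lower stable
# branch until the fold is crossed, then jumps to the upper branch.

dt, T = 0.01, 400.0
t = np.arange(0.0, T, dt)
F = -0.5 + 0.0025 * t          # slowly drifting forcing, from -0.5 to 0.5
x = np.empty_like(t)
x[0] = -1.0                    # start on the lower stable branch

for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (x[k] - x[k]**3 + F[k])

fold = 2.0 / (3.0 * np.sqrt(3.0))
k_tip = np.argmax(x > 0.0)     # first index after the jump
print(f"fold at F ≈ {fold:.3f}; tipping observed near F ≈ {F[k_tip]:.3f}")
```

The observed tipping occurs slightly after the fold is crossed, a lag characteristic of slowly forced, non-autonomous systems.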

How far can the statistical error estimation problem be closed by collocated data?

Abstract. Accurate specification of the error statistics required for data assimilation remains an ongoing challenge, partly because their estimation is an underdetermined problem that requires statistical assumptions. Even with the common assumption that background and observation errors are uncorrelated, the problem remains underdetermined. A natural question arises: can the increasing number of overlapping observations and other datasets help to reduce the total number of statistical assumptions, or do they introduce more statistical unknowns? To answer this question, this paper provides a conceptual view of the statistical error estimation problem for multiple collocated datasets, including a generalized mathematical formulation, an illustrative demonstration with synthetic data, and guidelines for setting up and solving the problem. It is demonstrated that the required number of statistical assumptions increases linearly with the number of datasets, whereas the number of error statistics that can be estimated increases quadratically, so that an increasing number of error cross-statistics between datasets can be estimated once more than three datasets are available. The presented generalized estimation of full error covariance and cross-covariance matrices between datasets does not necessarily accumulate the uncertainties of the assumptions across the error estimates of multiple datasets.
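For the three-dataset case, classical triple collocation is the familiar special case of such a generalized formulation. The sketch below (an independent illustration, not the paper's code) recovers the three error variances from sample covariances, assuming additive, zero-mean, mutually uncorrelated errors and unit calibration.

```python
import numpy as np

# Triple collocation for N = 3 collocated datasets x_i = truth + e_i.
# With mutually uncorrelated errors, the off-diagonal covariances all equal
# the signal variance, so var(e_i) = C_ii - C_ij * C_ik / C_jk.

rng = np.random.default_rng(1)
n = 100_000
truth = rng.standard_normal(n)                  # unknown geophysical signal
sig = np.array([0.3, 0.5, 0.8])                 # true error std devs
obs = truth + sig[:, None] * rng.standard_normal((3, n))

C = np.cov(obs)                                 # 3x3 sample covariance
est = np.array([
    C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2],
    C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2],
    C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1],
])
print("true error variances:", sig**2)
print("estimated           :", est.round(4))
```

With N datasets there are N(N+1)/2 estimable covariances against N calibration-style assumptions, which is the quadratic-versus-linear counting argument summarized in the abstract.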

Review article: Scaling, dynamical regimes, and stratification. How long does weather last? How big is a cloud?

Abstract. Until the 1980s, scaling notions were restricted to self-similar homogeneous special cases. I review developments over the last decades, especially in multifractals and generalized scale invariance (GSI). The former is necessary for characterizing and modelling strongly intermittent scaling processes, while the GSI formalism extends scaling to strongly anisotropic (especially stratified) systems. Both of these generalizations are necessary for atmospheric applications. The theory and some of the now burgeoning empirical evidence in its favour are reviewed. Scaling can now be understood as a very general symmetry principle. It is needed to clarify and quantify the notion of dynamical regimes. In addition to the weather and climate, there is an intermediate “macroweather” regime, and at timescales beyond the climate regime (up to Milankovitch scales), there are macroclimate and megaclimate regimes. By objectively distinguishing weather from macroweather, scaling answers the question “how long does weather last?”. Dealing with anisotropic scaling systems – notably atmospheric stratification – requires new (non-Euclidean) definitions of the notion of scale itself. These are needed to answer the question “how big is a cloud?”. In anisotropic scaling systems, the morphologies of structures change systematically with scale even though there is no characteristic size. GSI shows that it is unwarranted to infer dynamical processes or mechanisms from morphology. Two “sticking points” preventing more widespread acceptance of the scaling paradigm are also discussed. The first is an often implicit, phenomenological “scalebound” thinking that postulates a priori the existence of new mechanisms or processes every factor of 2 or so in scale. The second obstacle is the reluctance to abandon isotropic theories of turbulence and accept that the atmosphere's scaling is anisotropic. Indeed, there currently appears to be no empirical evidence that the turbulence in any atmospheric field is isotropic. Most atmospheric scientists rely on general circulation models, and these are scaling – they inherit the symmetry from the (scaling) primitive equations upon which they are built. Therefore, the real consequence of ignoring wide-range scaling is that it blinds us to alternative scaling approaches to macroweather and climate – especially to new models for long-range forecasts and new scaling approaches to climate projections. Such stochastic alternatives are increasingly needed, notably to reduce uncertainties in climate projections to the year 2100.
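The structure-function route to scaling exponents can be shown in a few lines (a generic example, not the author's analysis): for a scaling process, S_q(Δt) = ⟨|X(t+Δt) − X(t)|^q⟩ ∝ Δt^ξ(q), and a log-log regression over lags recovers ξ(q). An ordinary random walk is monofractal with ξ(q) = qH and H = 1/2; multifractal (intermittent) processes show a nonlinear ξ(q).

```python
import numpy as np

# Estimate the structure-function exponent xi(q) of a Brownian-like walk,
# for which the monofractal prediction is xi(q) = q * H with H = 0.5.

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(2**16))      # toy scaling "signal"

lags = 2 ** np.arange(1, 10)
q = 2.0
S = np.array([np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags])
xi_q = np.polyfit(np.log(lags), np.log(S), 1)[0]   # slope in log-log space
print(f"estimated xi({q}) = {xi_q:.2f}  (monofractal prediction {q * 0.5:.2f})")
```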

Review article: Towards strongly coupled ensemble data assimilation with additional improvements from machine learning

Abstract. We assessed different coupled data assimilation strategies with a hierarchy of coupled models, ranging from a simple coupled Lorenz model to the state-of-the-art coupled general circulation model CFSv2 (Climate Forecast System version 2). With the coupled Lorenz model, we assessed the analysis accuracy of the strongly coupled ensemble Kalman filter (EnKF) and four-dimensional variational (4D-Var) methods with varying assimilation window lengths. The analysis accuracy of the strongly coupled EnKF with a short assimilation window is comparable to that of 4D-Var with a long assimilation window. For 4D-Var, the strongly coupled approach with the coupled model produces a more accurate ocean analysis than the Estimating the Circulation and Climate of the Ocean (ECCO)-like approach using the uncoupled ocean model. Experiments with a coupled quasi-geostrophic model show that the strongly coupled approach outperforms the weakly coupled and uncoupled approaches for both the full-rank EnKF and 4D-Var, with the strongly coupled EnKF and 4D-Var showing similar levels of accuracy, higher than that of other coupled data assimilation approaches such as outer-loop coupling. A strongly coupled EnKF software framework was developed and applied to the intermediate-complexity coupled model SPEEDY-NEMO and the state-of-the-art operational coupled model CFSv2. Experiments assimilating synthetic or real atmospheric observations into the ocean through the strongly coupled EnKF show that the strongly coupled approach improves the analysis of the atmosphere and upper ocean but degrades observation fits in the deep ocean, probably due to unreliable error correlations estimated by a small ensemble. The correlation-cutoff method was developed to reduce unreliable error correlations between physically irrelevant model states and observations. Experiments with the coupled Lorenz model demonstrate that a strongly coupled EnKF informed by the correlation-cutoff method produces more accurate coupled analyses than the weakly coupled and plain strongly coupled EnKF, regardless of the ensemble size. To extend the correlation-cutoff method to operational coupled models, a neural-network approach is proposed to systematically acquire the observation localization functions for all pairs of model states and observation types. Subsequent strongly coupled EnKF experiments with an intermediate-complexity coupled model show promising results with this approach.
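The essence of a strongly coupled analysis step is that an observation in one compartment updates the other through the ensemble cross-covariance. The sketch below is a generic perturbed-observation EnKF on a two-variable atmosphere-ocean toy state (numbers and names assumed, not the papers' configuration): a single atmospheric observation corrects the ocean. A correlation cutoff would simply zero such a cross-term wherever the estimated correlation is judged physically irrelevant.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens = 40

# Ensemble of [atmosphere, ocean] states with correlated background errors.
P_true = np.array([[1.0, 0.6],
                   [0.6, 0.5]])
X = np.linalg.cholesky(P_true) @ rng.standard_normal((2, n_ens))

H = np.array([[1.0, 0.0]])     # observe the atmospheric variable only
r = 0.2                        # observation-error variance
y = 1.0                        # synthetic atmospheric observation

A = X - X.mean(axis=1, keepdims=True)
Pf = A @ A.T / (n_ens - 1)                     # includes atmos-ocean cross-term
K = Pf @ H.T / (H @ Pf @ H.T + r)              # Kalman gain for a scalar obs
# The correlation-cutoff idea would set K[1] to zero here if the estimated
# cross-correlation with the ocean state were deemed physically irrelevant.
yp = y + np.sqrt(r) * rng.standard_normal(n_ens)   # perturbed observations
Xa = X + K @ (yp[None, :] - H @ X)

print("background mean:", X.mean(axis=1).round(3))
print("analysis mean  :", Xa.mean(axis=1).round(3))
print("ocean increment via cross-covariance:", (Xa - X).mean(axis=1)[1].round(3))
```

With a small ensemble, the sampled cross-term in Pf is noisy, which is precisely the failure mode in the deep ocean that motivates the cutoff.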

Data-driven methods to estimate the committor function in conceptual ocean models

Abstract. In recent years, several climate subsystems have been identified that may undergo a relatively rapid transition compared to the changes in their forcing. Such transitions are rare events in general, and simulating trajectories long enough to gather sufficient data to determine transition statistics would be too expensive. Instead, rare-event algorithms like TAMS (trajectory-adaptive multilevel sampling) encourage the transition while keeping track of the model statistics. However, this algorithm relies on a score function whose choice is crucial to ensure its efficiency. The optimal score function, called the committor function, is in practice very difficult to compute. In this paper, we compare different data-based methods (analog Markov chains, neural networks, reservoir computing, dynamical Galerkin approximation) for estimating the committor from trajectory data. We apply these methods to two models of the Atlantic Ocean circulation featuring very different dynamical behavior. We compare the methods in terms of two measures: how close the estimate is to the true committor and the computational time required. We find that all methods are able to extract information from the data in order to provide a good estimate of the committor. Analog Markov chains provide a very reliable estimate of the true committor in simple models but prove less robust when applied to systems with a more complex phase space. Neural-network methods clearly stand out by their relatively low testing time, and their training time scales more favorably with the complexity of the model than that of the other methods. In particular, feedforward neural networks consistently achieve the best performance when trained with enough data, making this method promising for committor estimation in sophisticated climate models.
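Since everything hinges on the committor, it helps to state its definition operationally: q(x) is the probability that a trajectory started at x reaches the target set B before the origin set A. The brute-force Monte Carlo estimator below (a toy overdamped double-well, assumed purely for illustration; it is not one of the paper's four data-driven methods, whose whole point is to avoid this expense) makes the definition concrete.

```python
import numpy as np

# Monte Carlo committor for dx = (x - x**3) dt + sqrt(2*eps) dW:
# q(x0) = P(reach B = {x >= 1} before A = {x <= -1} | start at x0).

rng = np.random.default_rng(5)

def committor_mc(x0, n_traj=1000, dt=0.01, eps=0.25):
    hits_b = 0
    for _ in range(n_traj):
        x = x0
        while -1.0 < x < 1.0:
            x += dt * (x - x**3) + np.sqrt(2 * eps * dt) * rng.standard_normal()
        hits_b += x >= 1.0
    return hits_b / n_traj

for x0 in (-0.5, 0.0, 0.5):
    print(f"q({x0:+.1f}) ≈ {committor_mc(x0):.2f}")
```

The cost of driving many trajectories to A or B for every query point is exactly what the analog Markov chain, neural-network, reservoir-computing, and dynamical Galerkin estimators are designed to sidestep by learning q from existing trajectory data.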
