Fully Discrete Continuous Data Assimilation Algorithms for Semilinear Parabolic Equations with Small Noisy Data
- Research Article
5
- 10.1007/s10596-022-10180-4
- Dec 5, 2022
- Computational Geosciences
Obtaining accurate high-resolution representations of model outputs is essential to describe the system dynamics. In general, however, only spatially and temporally coarse observations of the system states are available, and these observations may be corrupted by noise. Downscaling is a scheme in which one uses coarse-scale observations to reconstruct the high-resolution solution of the system states. Continuous Data Assimilation (CDA) is a recently introduced downscaling algorithm that constructs an increasingly accurate representation of the system states by continuously nudging the large scales toward the coarse observations. We introduce a Discrete Data Assimilation (DDA) algorithm as a downscaling algorithm based on CDA with discrete-in-time nudging. We then investigate the performance of the CDA and DDA algorithms for downscaling noisy observations of the Rayleigh-Bénard convection system in the chaotic regime. In this computational study, a set of noisy observations is generated by perturbing a reference solution with Gaussian noise and then downscaled; the downscaled fields are assessed using various error- and ensemble-based skill scores. The CDA solution converges toward the reference solution faster than the DDA solution, but at the cost of a higher asymptotic error. The numerical results also suggest a quadratic relationship between the ℓ2 error and the noise level for both CDA and DDA. The expected errors of DDA and CDA were found to depend cubically and quadratically, respectively, on the spatial resolution of the observations.
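The nudging idea behind CDA can be illustrated on a toy problem. The sketch below is an illustration rather than the paper's Rayleigh-Bénard setup: it evolves a 1D heat equation as the "truth" and nudges an assimilated copy, started from the wrong initial condition, toward piecewise-constant coarse observations. The gain `mu`, the grid, and the observation operator `interp_coarse` are all assumptions made for the example.

```python
import numpy as np

# Minimal sketch of continuous data assimilation (CDA) nudging on a 1D
# heat equation u_t = nu*u_xx + f, forward-Euler in time. The assimilated
# solution v starts from a wrong initial condition and is nudged toward
# coarse observations I_h(u) of the reference solution u.
# All names and parameters are illustrative, not from the paper.

nx, nt = 128, 20000
dx, dt = 1.0 / nx, 1e-5
nu, mu, stride = 1.0, 500.0, 8          # viscosity, nudging gain, obs spacing
x = np.linspace(0.0, 1.0, nx, endpoint=False)
f = np.sin(2 * np.pi * x)               # steady forcing

def laplacian(w):
    return (np.roll(w, -1) - 2 * w + np.roll(w, 1)) / dx**2

def interp_coarse(w):
    # I_h: piecewise-constant interpolation of every `stride`-th value
    return np.repeat(w[::stride], stride)

u = np.sin(4 * np.pi * x)               # reference ("truth")
v = np.zeros_like(u)                    # assimilated solution, wrong IC
for n in range(nt):
    u = u + dt * (nu * laplacian(u) + f)
    # nudge v toward the coarse observations of u
    v = v + dt * (nu * laplacian(v) + f - mu * interp_coarse(v - u))

print("relative l2 error:", np.linalg.norm(v - u) / np.linalg.norm(u))
```

Driving the error to near zero despite the wrong initial condition is the behavior the abstract's convergence statements quantify.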
- Research Article
6
- 10.2118/22301-pa
- Nov 1, 1991
- Journal of Petroleum Technology
Summary: This paper shows how some simple 3D graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir. Introduction: Reservoir geologists and engineers often deal with 3D data from simulation programs. These data may represent lithofacies or petrophysical values in 3D space; some data may also vary in time. Without an adaptable way to represent these numbers, it is difficult to understand the underlying physical phenomena in depth. Without visualization tools, one must imagine a complex 3D structure (e.g., the architecture of a reservoir) or try to understand a movement within this structure (e.g., the fluid flow). We describe a software environment that allows scientists to explore, manipulate, and visualize their data interactively and dynamically on a workstation. With this software, the user can view precomputed data from any angle, can sort through the volume of data, and can even see a film of the data if these data evolve in time. Our aim is not to make general-purpose graphics software for visualization of data from molecular chemistry, computational fluid dynamics, or oil reservoir engineering. Our concern is to provide reservoir-simulation, reservoir-geology, and basin-modeling scientists with attractive, standard 3D-graphics tools that meet their specific needs. Examples of the specific needs encountered are described below. To handle lithofacies and petrophysical values, we introduce the distinction between discrete and continuous scalars; i.e., no interpolation between data is allowed for discrete scalars (interpolating between clay and sandstone makes no sense), but interpolation is authorized for continuous scalars. This implies that the calculations and the graphics representation of isosurfaces or isocontours will differ for continuous and discrete data. In 3D basin modeling, the simulation of the deposition of sedimentary bodies may need time-varying grids (e.g., the number of layers may depend on time). A specific grid management is then designed so that an animated visualization of the deposition process is available on the workstation screen. An interpolation in time is introduced for reservoir-simulation data to avoid having to store all simulation steps and to allow visualization of a smooth movement of fluid flow. Visualization of wells, cutting planes defined by two vertical wells, and successions of cutting planes defined by successions of interactively chosen wells are also among the specific needs encountered in the scientific fields for which our visualization software is designed. Data To Be Visualized: Concept of Discrete and Continuous Data. We handle scalar data (lithofacies, pressure, saturation, etc.), not vector data. Data are computed on a grid that is described later. We distinguish two kinds of data: discrete and continuous. Discrete data are lithofacies values; continuous data include such scalars as pressure, saturation, and porosity.
Note that most general-purpose visualization packages handle only continuous data, such as oil saturation, which may take an infinity of values within some authorized range (say, between 0 and 1). Moreover, in such packages it is implicitly assumed that the numerical values to be visualized are point samples of a function that is continuous in space, which cannot be the case for lithofacies values. The set of lithofacies values the user deals with is finite, and there is no possible continuity. Lithofacies values actually are attributes rather than numerical values, although the simulation programs provide them to the visualization system as numerical values. Because discrete and continuous data are intrinsically different, their graphics representations do not have the same meaning. When continuous data are visualized with colors on a computer screen, each color represents a class of values, not just one, because the number of available colors is finite. Even though the colored picture on the screen may look smooth, an infinite range of numerical values is approximated by a finite set of colors. By contrast, the color representation for discrete values is not an approximation, because there is an exact correspondence between a color and a value.
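The discrete/continuous sampling rule described above can be made concrete with a small sketch. This is an illustration of the distinction, not the paper's software; the function name and the nearest-neighbour rule for discrete scalars are assumptions.

```python
import numpy as np

# Illustrative sketch of the discrete/continuous distinction: continuous
# scalars (e.g., saturation) may be interpolated between cell values,
# while discrete scalars (lithofacies codes) must never be blended.

def sample_scalar(values, pos, kind):
    """Sample a 1D row of cell values at fractional position `pos`."""
    i = int(np.floor(pos))
    t = pos - i
    if kind == "continuous":
        return (1 - t) * values[i] + t * values[i + 1]  # linear interpolation
    if kind == "discrete":
        return values[i] if t < 0.5 else values[i + 1]  # snap: no blending
    raise ValueError(kind)

saturation = np.array([0.10, 0.30, 0.80])   # continuous scalar
lithofacies = np.array([1, 1, 3])           # discrete codes (1=clay, 3=sand)
print(sample_scalar(saturation, 1.25, "continuous"))  # 0.425, blended
print(sample_scalar(lithofacies, 1.25, "discrete"))   # 1, never a fake "2"
```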
- Research Article
46
- 10.3390/e20100764
- Oct 5, 2018
- Entropy
Sample entropy (SE) has relative consistency for biologically derived discrete data with >500 data points. For certain populations, collecting this quantity of data is not feasible, and continuous data have been used instead. The effect of using continuous versus discrete data on SE is unknown, as are the relative effects of sampling rate and the input parameters m (comparison vector length) and r (tolerance). Eleven subjects walked for 10 minutes, and continuous joint angles (480 Hz) were calculated for each lower-extremity joint. Data were downsampled (240, 120, 60 Hz) and discrete range of motion was calculated. SE was quantified for joint angles and range of motion at all sampling rates and for multiple combinations of parameters. Range of motion and joint angles behaved differently across joints: range-of-motion SE showed no difference between joints, whereas joint-angle SE significantly decreased from ankle to knee to hip. To confirm the findings from biological data, continuous signals with manipulations to frequency, amplitude, and both were generated and underwent the same analysis as the biological data. In general, changes to m, r, and sampling rate had a greater effect on continuous than on discrete data. Discrete data were robust to sampling rate and m. It is recommended that different data types not be compared and that discrete data be used for SE.
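For readers unfamiliar with the m and r parameters discussed above, here is a minimal brute-force sample entropy sketch (not the authors' implementation): `m` is the template length and `r` the tolerance, commonly scaled by the signal's standard deviation.

```python
import numpy as np

# Brute-force O(N^2) sample entropy, for illustration only.

def sample_entropy(x, m, r):
    x = np.asarray(x, dtype=float)
    n = len(x)
    def count_matches(length):
        # use n - m templates for both lengths so a/b is a proper
        # conditional probability
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance; j > i excludes self-matches
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count
    b = count_matches(m)       # matches of length m
    a = count_matches(m + 1)   # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)
print(sample_entropy(signal, m=2, r=0.2 * np.std(signal)))
```

Rerunning this with downsampled or discretized versions of the same signal reproduces, in miniature, the sensitivity comparison the study performs.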
- Research Article
12
- 10.1137/100808824
- Jan 1, 2012
- SIAM Journal on Discrete Mathematics
Induced Matchings in Subcubic Planar Graphs
- Research Article
237
- 10.1137/0118025
- Mar 1, 1970
- SIAM Journal on Applied Mathematics
Perfect Codes in the Lee Metric and the Packing of Polyominoes
- Research Article
26
- 10.1142/s1465876302000575
- Jun 1, 2002
- International Journal of Computational Engineering Science
An Optimal Algorithm for Solving All-Pairs Shortest Paths on Trapezoid Graphs
Sukumar Mondal, Madhumangal Pal, and Tapan K. Pal (Department of Applied Mathematics with Oceanology and Computer Programming, Vidyasagar University, Midnapore 721 102, West Bengal, India). Vol. 03, No. 02, pp. 103-116 (2002).
The shortest-paths problem is an important problem in graph theory and finds diverse applications in many fields, which is why shortest-path algorithms have been studied more thoroughly than any other algorithms in graph theory; a large number of optimization problems are mathematically equivalent to finding shortest paths in a graph. The shortest path between a pair of vertices is the path of minimum length between them, and the shortest path from one node to another often gives the best way to route a message between the nodes. This paper presents an O(n²)-time algorithm for solving the all-pairs shortest-paths problem on trapezoid graphs, which are extensions of interval graphs and permutation graphs. The space complexity of the algorithm is O(n²). The problem is solved by constructing n breadth-first search (BFS) trees, one rooted at each of the n vertices. Since the time complexity of computing all-pairs shortest paths is bounded below by Ω(n²), the proposed algorithm is optimal. Keywords: design of algorithms, analysis of algorithms, breadth-first search, shortest paths, trapezoid graphs.
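The construction the abstract describes, one BFS tree per root vertex, looks as follows on a general unweighted graph. This generic sketch runs in O(n(n+m)) time; the paper's O(n²) bound comes from exploiting trapezoid-graph structure, which this illustration does not do.

```python
from collections import deque

def all_pairs_shortest_paths(adj):
    """adj: dict mapping vertex -> list of neighbours (unweighted)."""
    dist = {}
    for source in adj:
        d = {source: 0}
        queue = deque([source])
        while queue:                      # standard BFS from `source`
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[source] = d                  # BFS tree depths = shortest paths
    return dist

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # 4-cycle
print(all_pairs_shortest_paths(graph)[0])  # {0: 0, 1: 1, 2: 1, 3: 2}
```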
- Conference Article
22
- 10.1109/icmla.2008.29
- Jan 1, 2008
While data can be discrete or continuous (here, ordinal numerical features), some classifiers, like Naive Bayes (NB), work only with, or may perform better with, discrete data. We focus on NB due to its popularity and linear training time. We investigate the impact of eight discretization algorithms (Equal Width, Equal Frequency, Maximum Entropy, IEM, CADD, CAIM, MODL, and CACC) on classification with NB and two modern semi-NB classifiers, LBR and AODE. Our comprehensive empirical study indicates that the unsupervised discretization algorithms are the fastest, while among the supervised algorithms the fastest is Maximum Entropy, followed by CAIM and IEM. The CAIM and MODL discretizers generate the lowest and the highest numbers of discrete values, respectively. We compare the time to build the classification model and the classification accuracy when using raw versus discretized data. We show that discretization helps to improve classification with NB when compared with flexible NB, which models continuous features using Gaussian kernels. The AODE classifier obtains on average the best accuracy, and the best-performing setup combines discretization with IEM and classification with AODE. The runner-up setups include CAIM and CACC coupled with AODE, and CAIM and IEM coupled with LBR. IEM and CAIM are shown to provide statistically significant improvements across all considered datasets for the LBR and AODE classifiers when compared with using NB on the continuous data. We also show that the improved accuracy comes at the cost of substantially increased runtime.
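As a concrete reference point for the discretizers named above, here are hedged sketches of the two unsupervised ones, Equal Width and Equal Frequency. Bin counts and data are illustrative; the supervised methods (IEM, CAIM, etc.) additionally require class labels and are not shown. The resulting integer codes can then feed a categorical NB model in place of Gaussian kernels.

```python
import numpy as np

def equal_width(x, k):
    # k bins of equal range between min and max
    edges = np.linspace(x.min(), x.max(), k + 1)
    return np.clip(np.digitize(x, edges[1:-1]), 0, k - 1)

def equal_frequency(x, k):
    # k bins holding roughly equal numbers of samples
    edges = np.quantile(x, np.linspace(0, 1, k + 1))
    return np.clip(np.digitize(x, edges[1:-1]), 0, k - 1)

rng = np.random.default_rng(1)
feature = rng.exponential(size=12)
print(equal_width(feature, 4))      # equal range, uneven counts per bin
print(equal_frequency(feature, 4))  # ~equal counts, uneven range per bin
```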
- Research Article
90
- 10.1089/cmb.2008.0023
- Jun 1, 2010
- Journal of Computational Biology
An increasing number of algorithms for biochemical network inference from experimental data require discrete data as input. For example, dynamic Bayesian network methods and methods that use the framework of finite dynamical systems, such as Boolean networks, all take discrete input. Experimental data, however, are typically continuous and represented by floating-point numbers. The translation from continuous to discrete data is crucial in preserving variable dependencies and thus has a significant impact on the performance of network inference algorithms. We compare the performance of two such inference algorithms under several different discretization methods: one uses a dynamic Bayesian network framework, the other a time- and state-discrete dynamical system framework. The discretization algorithms are quantile discretization, interval discretization, and a new algorithm introduced in this article, SSD. SSD is especially designed for short time series data and is capable of determining the optimal number of discretization states. The experiments show that both inference methods perform better with SSD than with the other methods. In addition, SSD is demonstrated to preserve the dynamic features of the time series and to be robust to noise in the experimental data. A C++ implementation of SSD is available from the authors at http://polymath.vbi.vt.edu/discretization.
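To make the continuous-to-discrete step concrete, the toy sketch below quantile-binarizes a short two-gene time series into the 0/1 states that Boolean-network or dynamic Bayesian inference methods consume. The data and the median-threshold rule are assumptions for illustration; SSD itself is not reimplemented here.

```python
import numpy as np

series = np.array([            # rows: time points, cols: genes A, B
    [0.11, 0.92],
    [0.35, 0.71],
    [0.62, 0.40],
    [0.88, 0.15],
])
# per-variable median threshold: above median -> 1, else 0
states = (series > np.median(series, axis=0)).astype(int)
print(states)
# Transition pairs (s_t -> s_{t+1}) are what the inference methods take:
for t in range(len(states) - 1):
    print(tuple(states[t]), "->", tuple(states[t + 1]))
```

Note how gene A's rise and gene B's fall survive binarization; a discretizer that broke this anti-correlation would mislead the downstream inference, which is the point the abstract makes.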
- Research Article
2
- 10.4304/jnw.9.6.1380-1387
- Jun 9, 2014
- Journal of Networks
The data processed by intrusion detection systems are usually vague, uncertain, imprecise, and incomplete. Rough Set theory is one of the best methods for processing this kind of data, but it can only handle discrete data, so data with continuous numerical attributes must be discretized before use. Current discretization algorithms are classified and reviewed in detail. Mathematical descriptions of the discretization problem and of intrusion detection are given by means of Rough Set theory. By fusing Rough Set theory with entropy theory, we propose a simple and fast discretization algorithm based on information loss. The algorithm is applied to different samples with the same attributes from KDDCup99 and from intrusion detection systems. The discretized data are used to reduce attributes so as to relieve the workload of intrusion detection systems. The experimental results show that the proposed discretization algorithm is sensitive to the initial samples only for some of the condition attributes, but the algorithm does not compromise the effectiveness of intrusion detection, and it remarkably improves the response performance of the intrusion detection model.
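The abstract does not spell out its information-loss criterion, so the sketch below only illustrates the flavour of such a measure: the drop in Shannon entropy when a fine quantile binning of an attribute is coarsened. It is an assumption-laden illustration, not the authors' algorithm.

```python
import numpy as np

def entropy(labels):
    # Shannon entropy (bits) of the empirical label distribution
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(2)
attr = rng.normal(size=200)                       # continuous attribute
cuts = np.quantile(attr, np.linspace(0, 1, 17)[1:-1])
fine = np.digitize(attr, cuts)                    # 16 fine bins
coarse = fine // 4                                # merge 4 adjacent bins
print("fine bins entropy:  ", entropy(fine))
print("coarse bins entropy:", entropy(coarse))
print("information loss:   ", entropy(fine) - entropy(coarse))
```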
- Research Article
4
- 10.1111/j.1467-9671.2010.01231.x
- Dec 1, 2010
- Transactions in GIS
The integrated management of heterogeneous spatial data, such as continuous fields and discrete data, is an important issue for the Geographic Information (GI) community. Indeed, GI users are forced to navigate among and operate with several tools in order to solve their spatial problems, due to the lack of systems capable of integrating different components, each meant to provide a specific solution. The aim of this article is to propose an OpenGeospatial-compliant solution which supports expert users in handling problems involving heterogeneous data by means of a seamless approach. A class hierarchy modeling spatial discrete objects, continuous data, relationships, and operations, is described, whereby data are organized in agreement with the binary representation. A running example is illustrated to support readers' understanding of the proposed solution. Finally, some guidelines about an implementation modality are given, to demonstrate the applicability of the proposal to an existing DBMS.
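One way to picture the kind of class hierarchy the article proposes (this sketch is an assumption, not the article's actual model) is a common interface that both continuous fields and discrete objects implement, so a client can query heterogeneous layers uniformly.

```python
from abc import ABC, abstractmethod

class SpatialLayer(ABC):
    @abstractmethod
    def value_at(self, x: float, y: float):
        """Return the layer's value at a point, or None."""

class ContinuousField(SpatialLayer):
    def __init__(self, sample_fn):
        self.sample_fn = sample_fn          # e.g., an interpolated raster
    def value_at(self, x, y):
        return self.sample_fn(x, y)

class DiscreteObjects(SpatialLayer):
    def __init__(self, features):
        self.features = features            # list of (contains_fn, attrs)
    def value_at(self, x, y):
        for contains, attrs in self.features:
            if contains(x, y):
                return attrs
        return None

elevation = ContinuousField(lambda x, y: 100.0 + 0.5 * x - 0.2 * y)
parcels = DiscreteObjects([(lambda x, y: 0 <= x <= 10 and 0 <= y <= 10,
                            {"parcel_id": 42})])
for layer in (elevation, parcels):          # uniform access to both kinds
    print(layer.value_at(3.0, 4.0))
```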
- Research Article
100
- 10.1137/16m1076526
- Jan 1, 2016
- SIAM Journal on Applied Dynamical Systems
We adapt a previously introduced continuous in time data assimilation (downscaling) algorithm for the two-dimensional Navier--Stokes equations to the more realistic case when the measurements are obtained discretely in time and may be contaminated by systematic errors. Our algorithm is designed to work with a general class of observables, such as low Fourier modes and local spatial averages over finite volume elements. Under suitable conditions on the relaxation (nudging) parameter, the spatial mesh resolution, and the time step between successive measurements, we obtain an asymptotic in time estimate of the difference between the approximating solution and the unknown reference solution corresponding to the measurements, in an appropriate norm, which shows exponential convergence up to a term which depends on the size of the errors. A stationary statistical analysis of our discrete data assimilation algorithm is also provided.
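A minimal illustration of the discrete-in-time measurement setting described above, on a 1D heat equation rather than the paper's 2D Navier-Stokes equations: noisy coarse observations arrive only every `obs_every` steps, and the nudging term holds the most recent measurement until the next one arrives. All names and parameter values are illustrative assumptions.

```python
import numpy as np

nx, nt, dx, dt = 128, 20000, 1.0 / 128, 1e-5
nu, mu, stride, obs_every, noise = 1.0, 300.0, 8, 500, 1e-3
x = np.linspace(0, 1, nx, endpoint=False)
rng = np.random.default_rng(3)

lap = lambda w: (np.roll(w, -1) - 2 * w + np.roll(w, 1)) / dx**2
coarse = lambda w: np.repeat(w[::stride], stride)     # observable I_h

u = np.sin(4 * np.pi * x)          # unknown reference solution
v = np.zeros_like(u)               # assimilated solution, wrong IC
held_obs = coarse(u) + noise * rng.standard_normal(nx)
for n in range(nt):
    if n % obs_every == 0:         # a new (noisy) measurement arrives
        held_obs = coarse(u) + noise * rng.standard_normal(nx)
    u = u + dt * (nu * lap(u) + np.sin(2 * np.pi * x))
    v = v + dt * (nu * lap(v) + np.sin(2 * np.pi * x)
                  - mu * (coarse(v) - held_obs))
print("final l2 error:", np.linalg.norm(v - u) * np.sqrt(dx))
```

The error decays until it plateaus at a level set by the measurement noise, mirroring the abstract's "exponential convergence up to a term which depends on the size of the errors."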
- Book Chapter
15
- 10.1017/9781108367639.006
- May 2, 2019
Author(s): Farhat, A.; Lunasin, E.; Titi, E.S. Abstract: In this paper we survey the various implementations of a new data assimilation (downscaling) algorithm based on spatial coarse mesh measurements. As a paradigm, we demonstrate the application of this algorithm to the 3D Leray-α subgrid scale turbulence model. Most importantly, we use this paradigm to show that it is not always necessary to collect coarse mesh measurements of all the state variables that are involved in the underlying evolutionary system, in order to recover the corresponding exact reference solution. Specifically, we show that in the case of the 3D Leray-α model of turbulence, the solutions of the algorithm, constructed using only coarse mesh observations of any two components of the three-dimensional velocity field, and without any information on the third component, converge, at an exponential rate in time, to the corresponding exact reference solution of the 3D Leray-α model. This study serves as an addendum to our recent work on abridged continuous data assimilation for the 2D Navier-Stokes equations. Notably, similar results have also been recently established for the 3D viscous Planetary Geostrophic circulation model, in which we show that coarse mesh measurements of the temperature alone are sufficient for recovering, through our data assimilation algorithm, the full solution, i.e., the three components of the velocity vector field and the temperature. Consequently, this proves the Charney conjecture for the 3D Planetary Geostrophic model; namely, that the history of the large spatial scales of temperature is sufficient for determining all the other quantities (state variables) of the model.
- Research Article
24
- 10.1016/j.eswa.2021.115540
- Jul 7, 2021
- Expert Systems with Applications
Application of Chi-square discretization algorithms to ensemble classification methods
- Research Article
1
- 10.1088/1361-6420/ad0e25
- Dec 1, 2023
- Inverse Problems
The purpose of this study is to recover the diffuse interface width parameter of the nonlinear Allen–Cahn equation using a recently proposed continuous data assimilation algorithm. We obtain the large-time error between the true solution of the Allen–Cahn equation and the data-assimilated solution produced by implicit-explicit one-leg fully discrete finite element methods, due to the discrepancy between an approximate diffuse interface width and the physical interface width. The strong A-stability of the one-leg methods plays a key role in proving the exponential decay of the initial error. Based on the long-time error estimates, we develop several algorithms to recover both the true solution and the true diffuse interface width using only spatially discrete measurements of the phase field function. Numerical experiments confirm our theoretical results and verify the effectiveness of the proposed methods.
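A hedged 1D illustration of the setting above: the sketch assimilates the Allen–Cahn equation u_t = ε²u_xx + u − u³ with a slightly wrong interface width, using a first-order IMEX step (implicit diffusion via FFT, explicit reaction and nudging) in place of the paper's one-leg finite element method. The long-time error then reflects the width discrepancy, which is what the recovery algorithms exploit.

```python
import numpy as np

nx, dt, nt = 256, 1e-4, 50000
x = np.linspace(0, 1, nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(nx, d=1 / nx)   # spectral wavenumbers
mu, stride = 100.0, 8
eps_true, eps_approx = 0.03, 0.035             # true vs assumed width

def imex_step(w, eps, forcing):
    # explicit reaction + forcing, implicit diffusion in Fourier space
    rhs = w + dt * (w - w**3 + forcing)
    return np.real(np.fft.ifft(np.fft.fft(rhs) / (1 + dt * eps**2 * k**2)))

coarse = lambda w: np.repeat(w[::stride], stride)
# periodic two-interface reference profile
u = np.tanh((x - 0.25) / eps_true) * np.tanh((0.75 - x) / eps_true)
v = np.zeros_like(u)                           # assimilated solution
for n in range(nt):
    u = imex_step(u, eps_true, 0.0)
    v = imex_step(v, eps_approx, -mu * (coarse(v) - coarse(u)))
print("plateau error:", np.linalg.norm(v - u) / np.linalg.norm(u))
```

Sweeping `eps_approx` and watching where the plateau error is smallest is, in spirit, one crude way to locate the true width from coarse data.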
- Research Article
2
- 10.1007/s00348-025-03969-3
- Feb 1, 2025
- Experiments in Fluids
Within the framework of the European Union Horizon 2020 project HOMER (Holistic Optical Metrology for Aero-Elastic Research), data assimilation (DA) algorithms for dense flow field reconstruction developed by different research teams, hereafter referred to as the participants, were comparatively assessed. The assessment is performed using a synthetic database that reproduces the turbulent flow in the wake of a cylinder in ground effect, placed at a distance of one diameter from a lower wall. Downstream of the cylinder, this wall continues either as a flat steady wall or as a flexible panel undergoing periodic oscillations; these two situations correspond to two different test cases, the latter being introduced to extend the evaluation to fluid-structure interaction problems. The input data for the DA algorithms were datasets containing the particle locations and their trajectory identification numbers, at tracer concentrations increasing from 0.04 to 1.4 particles/mm³ (equivalent image density values between 0.005 and 0.16 particles per pixel, ppp). The outputs considered for the assessment were the three components of the velocity, the nine components of the velocity gradient tensor, and the static pressure, defined on a Cartesian grid in the flow field, as well as the static pressure on the wall surface and, in the deformable-wall case, the wall position. The results were analysed in terms of the errors of the output quantities with respect to the ground-truth values and their distributions. Additionally, the performance of the different DA algorithms was compared with that of a standard linear interpolation approach. The velocity errors were found to lie between 3 and 11% of the bulk velocity; furthermore, the use of the DA algorithms enabled an increase in measurement spatial resolution by a factor of 3 to 4. The errors of the velocity gradients were of the order of 10-15% of the peak vorticity magnitude. Accurate pressure reconstruction was achieved in the flow field, whereas the evaluation of the surface pressure proved more challenging. As expected, lower errors were obtained at higher seeding concentrations. The difference in accuracy among the DA algorithms was noticeable especially for the pressure field and for compliance with the governing equations of fluid motion, in particular mass conservation. The analysis of the flexible-panel test case showed that the panel position could be reconstructed with micrometric accuracy, largely independently of the DA algorithm and the seeding concentration. The accurate evaluation of the static pressure field and of the surface pressure proved to be a challenge, with typical errors between 3 and 20% of the free-stream dynamic pressure.
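The "standard linear interpolation approach" used as the baseline can be pictured as follows: scattered particle velocities are linearly interpolated onto a Cartesian grid. This sketch uses synthetic data and SciPy's griddata; the HOMER database and the participants' DA algorithms are not reproduced here.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(4)
pts = rng.uniform(0, 1, size=(2000, 2))          # particle positions (x, y)
u_true = lambda p: np.sin(2 * np.pi * p[:, 0]) * np.cos(2 * np.pi * p[:, 1])
u_particles = u_true(pts)                        # one velocity component

# Cartesian target grid (kept inside the convex hull of the particles)
gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 40), np.linspace(0.1, 0.9, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
u_grid = griddata(pts, u_particles, grid, method="linear")

err = np.abs(u_grid - u_true(grid))
print("mean abs interpolation error:", np.nanmean(err))
```

A DA algorithm improves on this baseline by also enforcing the governing equations (e.g., mass conservation), which is exactly where the assessment found the largest differences between methods.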