Continuous stochastic processes
A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented, together with the position and scope of the author's work as it fits into the general scheme. The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel. The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes is characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in those equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary-value considerations. The writer's work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations. First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples are presented, viz. the Einstein process, the Bachelier process, the Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first-order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simpler than the solution of the original partial differential equations. In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from the basic definition. This illustrates the fundamental properties and an inherent paradox. Next, the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markoff process. The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that an integral process is never temporally homogeneous unless the underlying process is degenerate. Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral processes of all known continuous processes are specified.
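For orientation, the Fokker-Planck pair that the abstract refers to can be stated in modern notation as follows (a standard formulation with drift coefficient a and diffusion coefficient b, not a quotation from the thesis):

```latex
% Forward (Fokker-Planck) equation for the transition density p = p(x,t \mid x_0,t_0):
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[a(x,t)\,p\bigr]
  + \tfrac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[b(x,t)\,p\bigr]
% Adjoint (backward) equation in the initial variables:
-\frac{\partial p}{\partial t_0}
  = a(x_0,t_0)\,\frac{\partial p}{\partial x_0}
  + \tfrac{1}{2}\,b(x_0,t_0)\,\frac{\partial^{2} p}{\partial x_0^{2}}
```

The coefficients a and b are exactly "the coefficients" through which the work characterizes the processes. For the Ornstein-Uhlenbeck process, for instance, a(x,t) = -βx and b(x,t) is constant, and the mean m(t) and variance v(t) then satisfy the first-order ordinary differential equations m' = -βm and v' = -2βv + b, illustrating the reduction mentioned above.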
- Research Article
- 10.4233/uuid:4228873d-ce6d-464f-ac06-735b6da3ea4d
- Feb 6, 2015
Fourier Methods for Multidimensional Problems and Backward SDEs in Finance and Economics
- Research Article
- 10.2139/ssrn.3680838
- Jan 1, 2020
- SSRN Electronic Journal
Conditional expectations (or probabilities) of events multiple time steps into the future provide information about the likelihood of future events and, as such, are central to all decision-making processes. Knowledge of such formulae is of fundamental importance and has applications in all sectors of the economy. Hence, we explore the problem of predicting conditional expectations for path-dependent events driven by stochastic processes, that is, multi-step prediction problems where the correctness of the prediction is not revealed until more than one step after the prediction is made. On one hand, parametric models for computing conditional expectations of path-dependent events are tied to their ability to capture the dynamics of the underlying stochastic processes. As a result, their misspecification will lead to computation errors, since their formulae depend on the particular form of the dynamics of the stochastic processes chosen. On the other hand, non-parametric models, such as neural networks (NNs), use data to estimate the implicit stochastic processes driving the dynamics of future events and their relationship with conditional expectations. However, while these methods are good at solving single-step prediction problems, they cannot solve multi-step prediction problems. One solution is to consider methods specifically designed for multi-step prediction problems, such as temporal difference (TD) procedures, which are widely used in reinforcement learning (RL). We show that these methods are particularly well suited for capturing the conditional probability distribution function of stochastic processes. To infer the characteristics of conditional expectations for path-dependent events, we propose to combine the error backpropagation algorithm with TD methods, obtaining the temporal difference backpropagation (TDBP) model. We propose a framework for learning conditional expectations, both in discrete and continuous space, and perform a thorough analysis of our model by testing it on well-known continuous stochastic processes, namely the Black-Scholes model, the Merton model, the Heston model, and the Bates model. We show that directly feeding the continuous stochastic processes into deep networks improves results, leading to smoother and more accurate conditional expectations. Further, since deep networks generalise learning across similar states, they can be used for learning more complex nonlinear functions. Another advantage of using deep networks with the TDBP model is that the continuous process can be a vector, allowing for the computation of conditional expectations of multiple processes. We show that, in the continuous space, the TDBP model recovers almost exactly the forward price, the call option price, and the digital option price generated with Monte Carlo simulations. Further, we extend our analysis to strongly path-dependent payoffs such as forward-starting contracts, barrier options, and American options. Finally, we compute multi-underlying contingent claims such as basket options. In conclusion, we can approximately predict conditional expectations for path-dependent events with a single TDBP model or an ensemble of such models.
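As a concrete, much-simplified illustration of the TD idea behind TDBP (not the authors' implementation: a linear feature model stands in for the deep network, and the strike K and feature choices are hypothetical):

```python
import numpy as np

# Minimal TD(0) sketch: estimate V(t, x) = E[payoff(S_T) | S_t = x] for a
# geometric Brownian motion by bootstrapping from successive states along
# simulated paths, instead of waiting for the terminal payoff at every step.
rng = np.random.default_rng(0)
mu, sigma, S0, T, n_steps = 0.0, 0.2, 1.0, 1.0, 50
dt = T / n_steps
K = 1.0  # hypothetical strike for a call payoff

def features(t, s):
    # Hand-picked features; a deep network would learn such features instead.
    return np.array([1.0, s, s * s, t, t * s])

w = np.zeros(5)
alpha = 0.01  # learning rate

for episode in range(5000):
    s, t = S0, 0.0
    for step in range(n_steps):
        s_next = s * np.exp((mu - 0.5 * sigma**2) * dt
                            + sigma * np.sqrt(dt) * rng.standard_normal())
        t_next = t + dt
        phi = features(t, s)
        if step == n_steps - 1:
            target = max(s_next - K, 0.0)          # payoff revealed at maturity
        else:
            target = features(t_next, s_next) @ w  # bootstrap on the next estimate
        w += alpha * (target - phi @ w) * phi      # TD(0) gradient-style update
        s, t = s_next, t_next

print("estimated E[(S_T - K)^+ | S_0 = 1] ~", features(0.0, S0) @ w)
```

The key point is the bootstrap in the inner loop: before the terminal step, the target is the model's own estimate at the next state, which is what lets a TD method handle multi-step prediction problems where the payoff is revealed only at maturity.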
- Research Article
- 10.22812/jetem.2015.26.1.003
- Jan 1, 2015
- Journal of Economic Theory and Econometrics
Building on the work of Conley et al. (1997), we investigate the stationarity of riskless short-term interest rate processes, analyzing generalized stochastic volatility models with level effects, and examine the compatibility of stationarity of short-term interest rates with popular dynamic term structure models of interest rates, such as the ATSM and QTSM. We extend extant stochastic volatility models with the level effects crucial to characterizing the stationarity of a continuous-time stochastic process, estimate the extended models using an efficient simulation-based MCML (Monte Carlo Maximum Likelihood) method based on importance sampling, and implement model diagnostics using the inverse standard normal transform of the dynamic probability integral transform obtained via an auxiliary particle filter. Empirical estimation results indicate that TB3M and Call1d exhibit drift-induced stationarity compatible with both the ATSM and QTSM, while ED1M, KTB3M, MMF7d, CD91d and CP91d exhibit volatility-induced stationarity. Consequently, the results imply that, without careful consideration of the nature of stationarity of a short-term interest rate, indiscriminate application of theoretical models assuming drift-induced stationarity of short-term interest rates may cause serious failures in derivative pricing and risk management.
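A representative level-effect specification of the kind discussed here is the CKLS form extended with stochastic volatility (a generic example; the paper's extended models may differ in detail):

```latex
dr_t = \kappa\,(\theta - r_t)\,dt + \sigma_t\, r_t^{\gamma}\, dW^{(1)}_t,
\qquad
d\log\sigma_t^{2} = \alpha\,\bigl(\beta - \log\sigma_t^{2}\bigr)\,dt + \eta\, dW^{(2)}_t
```

Here the mean-reverting drift κ(θ − r_t) can induce drift-induced stationarity, while a sufficiently strong level effect r_t^γ in the diffusion term can induce volatility-induced stationarity, which is the distinction the empirical results turn on.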
- Research Article
- 10.1109/irepgit.1954.6373399
- Mar 1, 1954
- Transactions of the IRE Professional Group on Information Theory
This morning you have heard excellent presentations of two fields of endeavor, the results and techniques of which could be basic to a statistical theory of communication engineering. On the one hand, the field of statistical inference, as applied to discrete stochastic processes, has developed to a refined point due to the efforts of many statisticians. The work of the late Professor Wald, in his successful application of von Neumann's game theory to the construction of a general theory of decision functions, has played a dominant role in the development of these refinements. On the other hand, the theory of stochastic processes depending upon a discrete or continuous time parameter has been developed during the last three decades by various mathematicians; only during the last few years has the study of statistical inference problems for continuous stochastic processes received much attention. Here the outstanding contribution is the thesis of Ulf Grenander, published in the Arkiv för matematik, Band 1, Häfte 3, 1950. In attempting to apply the techniques of statistical inference to continuous processes, it is evident that the central problem is to obtain a coordinate system for the process which allows one to actually carry out the computations called for by various statistical methods. As far as I am aware, there are at present only two types of continuous stochastic processes for which a coordinate system has been obtained with which one can carry through some of the computations necessary in the testing of statistical hypotheses. One process is a projection onto the real axis of a finite-dimensional Markoff process, Gaussian or non-Gaussian. The other is a Gaussian process with a continuous covariance function. The restriction to a continuous covariance function is not serious, since this property holds for all of the stochastic models which have been set up to study continuous processes occurring in communication engineering. (The assumption that the spectrum of a process is a "pure white noise" is not consistent with continuity of the covariance function, but a pure white noise is merely a mathematical idealization. The process with a flat band-limited spectrum, a model often used in application, does possess a continuous covariance function.) On the other hand, the restriction to Gaussian processes is one which it would be desirable to remove in some cases.
- Book Chapter
- 10.1016/b978-044452798-1/50022-1
- Jan 1, 2007
- Physics of Life
22 - Diffusion and continuous stochastic processes
- Research Article
- 10.6100/ir657524
- Jan 1, 2006
Many industrial chemical processes are complex, multi-phase and large-scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows, and often show coexistence of fast and slow dynamics during their time evolution. The increasing demand for flexible operation of a complex process, a pressing need to improve product quality, increasing energy costs and tightening environmental regulations make it rewarding to automate a large-scale manufacturing process. Mathematical tools used for process modeling, simulation and control are useful to meet these challenges. Towards this purpose, the development of process models, either from first principles (conservation laws), i.e. rigorous models, or from input-output data, i.e. identified models, constitutes an important step. Both types of models have their own advantages and pitfalls. Rigorous process models can approximate the process behavior reasonably well. The ability to extrapolate rigorous process models and the physical interpretation of their states make them more attractive for automation purposes than input-output data-based identified models. Therefore, the use of rigorous process models and rigorous model-based predictive control (R-MPC) for online control and optimization of a process is very promising. However, due to several limitations, e.g. slow computation speed and high modeling effort, it is difficult to employ rigorous models in practice. This thesis aims to develop a methodology which yields smaller, less complex and computationally efficient process models from the rigorous process models, which can be used in real time for online control and dynamic optimization of industrial processes. Such a methodology is commonly referred to as model (order) reduction. Model order reduction aims at removing model redundancy from rigorous process models. The model order reduction methods investigated in this thesis are applied to two benchmark examples: an industrial glass manufacturing process and a tubular reactor. The complex, nonlinear, multi-phase fluid flow observed in a glass manufacturing process offers multiple challenges to any model reduction technique. Often, the rigorous first-principles models of these benchmark examples are implemented in a discretized form of partial differential equations and their solutions are computed using Computational Fluid Dynamics (CFD) numerical tools. Although these models are reliable representations of the underlying process, computation of their dynamic solutions requires significant effort in the form of CPU power and simulation time. The glass manufacturing process involves a large furnace whose walls wear out due to the high process temperature and the aggressive nature of the molten glass. It is shown here that the wearing of the furnace walls results in changes to the flow patterns of the molten glass inside the furnace. Therefore, the reduced-order model is also required to approximate the process behavior under changes in the process parameters. In this thesis, the change in flow patterns resulting from changes in a geometric parameter is treated as a bifurcation phenomenon. Such bifurcations exhibited by the full-order model are detected using a novel framework of reduced-order models and hybrid detection mechanisms.
The reduced-order models are obtained using the methods explained in the subsequent paragraphs. The model reduction techniques investigated in this thesis are based on the concept of Proper Orthogonal Decomposition (POD) of process measurements or simulation data. The POD method of model reduction involves a spectral decomposition of system solutions and results in an arrangement of the spatio-temporal data in order of importance. The spectral decomposition yields spatial and temporal patterns; the spatial patterns are often known as the POD basis, while the temporal patterns are known as the POD modal coefficients. Dominant spatio-temporal patterns are then chosen to construct the most relevant lower-dimensional subspace (a minimal numerical sketch of this decomposition follows this abstract). The subsequent step involves a Galerkin projection of the governing equations of the full-order first-principles model onto the resulting lower-dimensional subspace. This thesis can be viewed as a contribution towards developing data-based nonlinear model reduction techniques for large-scale processes. The major contribution of this thesis is presented in the form of two novel identification-based approaches to model order reduction. The methods proposed here are based on the state information of a full-order model and result in linear and nonlinear reduced-order models. Similar to the POD method explained above, the first step of the proposed identification-based methods involves a spectral decomposition. The second step is different and does not involve a Galerkin projection of the equation residuals. Instead, it involves the identification of reduced-order models that approximate the evolution of the POD modal coefficients. Towards this purpose, two different methods are presented. The first involves the identification of locally valid linear models to represent the dynamic behavior of the modal coefficients; the global behavior is then represented by 'blending' the local models. The second involves direct identification of nonlinear models to represent the dynamic evolution of the modal coefficients. In the first proposed model reduction method, the POD modal coefficients are treated as outputs of an unknown reduced-order model that is to be identified. Using tools from the field of system identification, a black-box reduced-order model is identified as a linear map between the plant inputs and the modal coefficients. Using this method, multiple local reduced LTI models corresponding to various working points of the process are identified. The working points cover the nonlinear operating range of the process, which describes the global process behavior. These reduced LTI models are then blended into a single Reduced-Order Linear Parameter-Varying (RO-LPV) model. The weighted blending is based on nonlinear splines whose coefficients are estimated using the state information of the full-order model. Along with the process nonlinearity, the nonlinearity arising from the wear of the furnace wall is also approximated within the RO-LPV modeling framework. The second model reduction method proposed in this thesis allows approximation of a full-order nonlinear model by various (linear or nonlinear) model structures. It is observed that, for a certain class of full-order models, the POD modal coefficients can be viewed as the states of the reduced-order model. This knowledge is further used to approximate the dynamic behavior of the POD modal coefficients.
In particular, reduced-order nonlinear models in the form of tensorial (multi-variable polynomial) systems are identified. The stability and dissipativity of these nonlinear tensorial models are investigated. During the identification of the reduced-order models, the physical interpretation of the states of the full-order rigorous model is preserved. Due to their smaller dimension and reduced complexity, the reduced-order models are computationally very efficient, and the smaller computation time allows them to be used for online control and optimization of the process plant. The possibility of inferring reduced-order models from the state information of a full-order model alone, i.e. without access to the governing equations of the full-order model (as is the case for many commercial software packages), makes the methods presented here attractive. The resulting reduced-order models need further system-theoretic analysis in order to assess their quality with respect to usage in an online controller setting.
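A minimal sketch of the POD step common to all of these methods, using the SVD of a synthetic snapshot matrix (illustrative only; the dimensions and the 99% energy threshold are arbitrary choices, not the thesis's):

```python
import numpy as np

# POD via SVD: the snapshot matrix X has one column per time instant. The left
# singular vectors are the spatial POD basis, and the singular values times the
# right singular vectors give the temporal modal coefficients, ordered by energy.
rng = np.random.default_rng(1)
n_space, n_time = 200, 80
x = np.linspace(0.0, 1.0, n_space)
t = np.linspace(0.0, 1.0, n_time)
# Synthetic spatio-temporal data: two coherent structures plus noise.
X = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
     + 0.3 * np.outer(np.sin(3 * np.pi * x), np.sin(4 * np.pi * t))
     + 0.01 * rng.standard_normal((n_space, n_time)))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of the energy

basis = U[:, :r]                      # spatial POD basis
coeffs = np.diag(s[:r]) @ Vt[:r, :]   # temporal modal coefficients
X_r = basis @ coeffs                  # rank-r reconstruction

print(f"kept {r} of {len(s)} modes, relative reconstruction error "
      f"{np.linalg.norm(X - X_r) / np.linalg.norm(X):.2e}")
```

The columns of `basis` play the role of the POD basis, and the rows of `coeffs` are the modal coefficients whose evolution the two identification methods above approximate.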
- Research Article
- 10.5075/epfl-thesis-4681
- Jan 1, 2010
With technological advances, the sources of available information have become more and more diverse. Recently, a new source of information has gained growing importance: sensor data. Sensors are devices that sense their environment in various ways and report, in general, a numeric result. A sensor continuously reports values, so the flow of information is also continuous, like a stream. As the field has developed, the usage paradigm has shifted from stand-alone sensors to interconnected sensors, or sensor networks. Sensors have become more complex, generating larger quantities of data and carrying wireless communication modules for transmitting it. Initially, data from sensor networks was first stored and then processed, so classical database technologies could be used. However, the focus soon shifted towards reacting to sensor data in real time. A user query reacting in real time to a stream of data is called a continuous query, and answering such a query requires that it be continuously processed as new values appear in the sensor stream. As sensor networks and sensor-based applications became more popular, users identified the need to query sensor data pertaining to different sensor networks. This setting of interconnected sensor networks consists of more powerful computational devices, connected by wired communication, which can process and relay sensor data. Users can launch queries at any node to query sensor events coming from any part of the interconnected network. In this setting, the number of data sources (sensors) is orders of magnitude smaller than the number of user queries, which is itself orders of magnitude smaller than the full content of the (sensor) data streams, and communication becomes by far the greatest bottleneck. In this thesis, we present our research on reducing the communication cost generated by applications accessing large-scale interconnected sensor networks. Our first contribution is a probabilistic algorithm for detecting and exploiting subsumption of queries over correlated data sources (a toy illustration of subsumption follows this abstract). This technique reduces the communication traffic generated by query forwarding in an interconnected sensor network by filtering out queries subsumed by a set of existing queries. In addition, it reduces the number of results that need to be transmitted. We propose an efficient forwarding algorithm for the elements of the result sets, employing publish/subscribe data dissemination. To support the general setting of distributed data sources in an interconnected sensor network, we propose a Filter-Split-Forward approach that adapts set subsumption to the case of join queries over distributed data sources. We base our approach on the concept of filter-split-forward phases for efficient query filtering and placement inside the network, and an efficient publish/subscribe forwarding of matching events. We also propose distributed adaptations of state-of-the-art solutions for continuous query processing over multiple data sources. We adapt these techniques to require only local interactions between nodes, without relying on global knowledge or a centralized server. We show how our approach achieves lower traffic through query subsumption and efficient event dissemination. In many applications using sensor data, users are only interested in the most relevant events. To that end, we present our solutions for processing top-k queries over distributed sensor data streams in the presence of query subsumption.
We analyze the impact of query subsumption on top-k processing and propose different strategies for incorporating query subsumption into top-k processing, in order to obtain sufficiently accurate result sets while keeping network traffic low. We show that the best tradeoff is achieved by updating, throughout the network, the values of k both for the queries resulting from splitting a query between nodes and for the set of queries subsuming a query. With this work we contribute a framework for increasing the efficiency of continuous query processing over distributed data sources for a wide range of applications, such as environmental and living-space monitoring, network and traffic monitoring, and in general all sensor-enhanced monitoring applications.
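A minimal, hypothetical illustration of the subsumption test at the heart of this filtering (real queries over correlated sources need the probabilistic machinery described above; the attribute names and ranges here are invented):

```python
# Query subsumption for conjunctive range predicates: Q1 subsumes Q2 if every
# event matching Q2 also matches Q1, so Q2 need not be forwarded and its
# results can be served from Q1's result set.
def subsumes(q1, q2):
    """Each query maps an attribute name to an inclusive (low, high) range."""
    for attr, (lo1, hi1) in q1.items():
        if attr not in q2:
            return False          # q2 is less restrictive on this attribute
        lo2, hi2 = q2[attr]
        if lo2 < lo1 or hi2 > hi1:
            return False          # q2 can match events outside q1's range
    return True

existing = [{"temp": (10.0, 40.0)},
            {"temp": (0.0, 50.0), "hum": (0.2, 0.9)}]
new_query = {"temp": (15.0, 30.0), "hum": (0.3, 0.8)}
if any(subsumes(q, new_query) for q in existing):
    print("subsumed: do not forward")   # both existing queries subsume it
else:
    print("forward the query")
```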
- Research Article
- 10.5075/epfl-thesis-3460
- Jan 1, 2006
The present PhD thesis deals with the high-temperature polymerization of methyl methacrylate (MMA) in a continuous pilot-scale process. The major aim is to investigate the feasibility of a polymerization process for the production of PMMA molding compound at temperatures in the range from 140 °C to 170 °C. Increasing the process temperature has the advantage of decreasing the molecular weight and viscosity of the reaction mixture, thus making it possible to reduce the addition of chain transfer agent and to increase the polymer content in the reactor. At the same time, the reaction rates are higher and the devolatilization is facilitated compared to low-conversion polymerizations. Altogether, this leads to an improved space-time yield of the process. However, increasing the process temperature also has an important impact on both polymerization kinetics and polymer properties. The first two parts of this work are therefore dedicated to the self-initiation and the high-temperature gel effect, respectively, observed for the polymerization of MMA in the given temperature range. The self-initiation of MMA is mostly caused by polymeric peroxides that form from physically dissolved oxygen and the monomer itself. The formation, decomposition and constitution of these peroxides are studied intensively, and a formal kinetic model is proposed for the formation and decomposition reactions. The polymerization of MMA is subject to a rather strong auto-acceleration, called the gel effect, the intensity of which depends on process conditions and solvent content. Several models have been proposed in the specialized literature to describe this phenomenon by modifying the termination rate constant as a function of conversion and temperature. The second part of this study contains the evaluation of these models with regard to their applicability to high-temperature MMA polymerization, as well as the development of a new variant of an existing model which correctly describes the gel effect in the temperature range of interest as a function of polymer content, temperature and molecular weight. The advantage of this new variant is that it includes all other factors influencing the gel effect, i.e. chain transfer agent, initiator load, comonomer and solvent content, and that it is suitable for the description of batch and continuous processes. A complete kinetic model for the description of the high-temperature copolymerization of MMA and MA, incorporating the results from the first two parts of this work, is established within the software package PREDICI® and validated by means of several series of batch polymerizations. In the third part of this work, a complete pilot-plant installation for the continuous polymerization of MMA is designed and constructed in order to study the impact of increasing the reaction temperature on process properties and product quality under conditions similar to those of an industrial-scale polymerization. The pilot plant is based on a combination of a recycle loop and a consecutive tube reactor, equipped with SULZER SMXL® / SMX® static mixing technology. Furthermore, it is equipped with a static one-step flash devolatilization and a pelletizer for polymer granulation. In addition, a refined method for inline conversion monitoring by speed-of-sound measurement is developed and tested in the pilot plant. By means of this technique it is possible to follow the dynamic behavior of the reactor and to measure the monomer conversion directly, without taking a sample.
The results of several pilot plant polymerizations carried out under different conditions are presented and the impact of temperature, comonomer and chain transfer agent on the thermal stability of the product is analyzed. From these results, the r-parameters for the copolymerization of MMA and MA at 160 °C as well as the chain transfer constant for n-dodecanethiol at 140 °C are determined. Finally, the pilot plant experiments are used to validate the kinetic model established beforehand in PREDICI® for the continuous copolymerization.
- Single Report
- 10.3386/t0023
- May 1, 1982
Introductory lectures on capital theory often begin by analyzing the following problem: I have a tree which will be worth X(t) if cut down at time t. If the discount rate is r, when should the tree be cut down? What is the present value of such a tree? The answers to these questions are straightforward. Since at time t a tree which I plan to cut down at time T is worth e^{rt}e^{-rT}X(T), I should choose the cutting date T* to maximize e^{-rT}X(T); at t < T* a tree is worth e^{rt}e^{-rT*}X(T*). In this paper we analyze how the answers to these questions of timing and evaluation change when the tree's growth is stochastic rather than deterministic. Suppose a tree will be worth X(t,w) if cut down at time t, where X(t,w) is a stochastic process. When should it be cut down? What is its present value? We study these questions for trees which grow according to both discrete and continuous stochastic processes. The approach to continuous-time stochastic processes contrasts with much of the finance literature in two respects. First, we obtain sharp comparative statics results without restricting ourselves to particular stochastic specifications. Second, while the option pricing literature seems to imply that increases in variance always increase value, we show that an increase in the variance of a tree's growth has ambiguous effects on its value.
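In the deterministic case the answer reduces to a standard first-order condition (a textbook fact, stated here for orientation, not quoted from the paper):

```latex
\max_{T}\; e^{-rT} X(T)
\;\;\Longrightarrow\;\;
\frac{d}{dT}\Bigl[e^{-rT}X(T)\Bigr]_{T=T^{*}} = 0
\;\;\Longleftrightarrow\;\;
\frac{X'(T^{*})}{X(T^{*})} = r
```

That is, the tree should be cut as soon as its proportional growth rate falls to the discount rate. The paper's contribution is what happens to this rule, and to the value e^{rt}e^{-rT*}X(T*), when X is a stochastic process.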
- Research Article
- 10.11588/heidok.00006549
- Jan 1, 2006
Diffusion processes are widely used for mathematical modeling in finance, e.g. in modeling foreign exchange rates. This paper presents a nonlinear stochastic continuous-time model that captures the main characteristics of price dynamics. The generalized mean-reversion process discloses various features of observed price movements, such as multi-modality of the distributions, multiple equilibria, and regime switching. The attractors depend substantially on the economic environment. The model reveals a significant connection between exchange rates and their fundamentals. Furthermore, it is consistent with traditional flexible exchange rate models. Stochastic differential equations describing diffusion processes are directly linked to the forward Kolmogorov equation. In order to calibrate the models, efficient algorithms for identifying the system parameters are needed. Taking into account nonlinear effects in volatility and drift, and the dependence on observed economic data which are not directly modeled, one obtains problems which cannot be treated by standard numerical methods: the coefficients are rapidly oscillatory, and strong instabilities may arise. To handle these problems we develop numerical methods, which are used to simulate the nonlinear dynamics of exchange rates depending on economic data.
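The link between the diffusion and the forward Kolmogorov equation can be made concrete through the stationary density of a scalar diffusion (a standard result, not the paper's specific calibrated specification):

```latex
dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t
\quad\Longrightarrow\quad
p_{\infty}(x) \;=\; \frac{C}{\sigma^{2}(x)}
\exp\!\left( \int^{x} \frac{2\,\mu(u)}{\sigma^{2}(u)}\,du \right)
```

Here C normalizes p_∞ to a probability density. A drift with several stable equilibria then produces a multi-modal p_∞, which is exactly the multi-modality, multiple-equilibria and regime-switching behavior the abstract describes.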
- Research Article
- 10.6092/unibo/amsdottorato/8579
- Apr 20, 2018
This thesis is centered on the theory of stochastic processes and their applications in biological systems characterized by a complex environment. Three case studies have been modeled using the three fundamental tools of stochastic processes: the master equation (ME), the stochastic differential equation (SDE) and the partial differential equation (PDE). The principal approach applied here to deal with complexity is the characterization of the system by means of probability distributions, each describing a parameter of the model, or the introduction of fractional-order derivatives to include non-local and memory effects while maintaining linearity in the equations. In Chapter 1 we briefly review the theory of stochastic processes. In Chapter 2 we derive a birth-death process master equation to test whether Long Interspersed Elements (LINEs) can be modeled according to the neutral theory of biodiversity. In Chapter 3 we derive a model of anomalous diffusion based on a Langevin approach, in which the anomalous behavior arises in the asymptotic intermediate state as a consequence of the heterogeneity of the system, from a superposition of Ornstein-Uhlenbeck processes. In Chapter 4 we propose an extension of the cable equation, used to describe anomalous diffusion phenomena such as signal conduction in spiny dendrites, by introducing a Caputo time-fractional derivative.
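For reference, the Caputo time-fractional derivative invoked in the last chapter has the standard definition (not specific to this thesis):

```latex
{}^{C}\!D_t^{\alpha} f(t)
\;=\;
\frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(s)}{(t-s)^{\alpha}}\, ds,
\qquad 0 < \alpha < 1
```

It weights the entire history of f, which is what encodes the memory effect, and it reduces to the ordinary first derivative as α → 1.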
- Research Article
- 10.12691/jfe-7-2-4
- Jun 20, 2019
- Journal of finance and economics
The dynamics of the asset process and the variance process are driven by continuous-time processes in the Information-Based Asset Pricing Framework proposed by Brody, Hughston and Macrina, also known as the BHM model. To make use of numerical simulation, the continuous-time processes can be discretized to discrete-time processes. Here, two discretization schemes are considered: the Euler scheme and the Milstein scheme. The main objective of this study is to apply the two discretization schemes to the Information-Based Asset Pricing Framework. The two schemes are first applied to the Black-Scholes and Heston models and then extended to the BHM model. Studies have shown that the Euler approach to discretization can be inefficient, which makes the Milstein approach, with its higher-order expansion of the coefficients of the stochastic differential equation, more accurate.
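The two schemes are easy to state side by side for the Black-Scholes asset dS = μS dt + σS dW, where the exact solution is available for comparison (a minimal sketch; the parameter values are arbitrary and the BHM application in the article is more involved):

```python
import numpy as np

# Euler-Maruyama and Milstein discretizations of the Black-Scholes SDE
# dS = mu*S dt + sigma*S dW, driven by the same Brownian increments and
# compared against the exact geometric-Brownian-motion solution.
rng = np.random.default_rng(42)
mu, sigma, S0, T, n = 0.05, 0.4, 100.0, 1.0, 2**8
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)

s_euler, s_milstein = S0, S0
for dw in dW:
    # Euler-Maruyama: strong convergence order 0.5.
    s_euler += mu * s_euler * dt + sigma * s_euler * dw
    # Milstein: adds the 0.5*b(S)*b'(S)*(dW^2 - dt) correction, which for
    # b(S) = sigma*S is 0.5*sigma^2*S*(dW^2 - dt); strong order 1.0.
    s_milstein += (mu * s_milstein * dt + sigma * s_milstein * dw
                   + 0.5 * sigma**2 * s_milstein * (dw**2 - dt))

s_exact = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
print(f"exact {s_exact:.4f}  euler {s_euler:.4f}  milstein {s_milstein:.4f}")
```

For geometric Brownian motion the Milstein correction term raises the strong convergence order from 0.5 to 1.0, which is the accuracy gain the abstract refers to.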
- Dissertation
- 10.7907/h3we-rd54
- Jan 1, 1967
The response of a dynamical system modelled by differential equations with white noise as the forcing term may be represented by a Markov process with incremental moments simply related to the differential equation. The structure of such Markov processes is completely characterized by a transition probability density function which satisfies a partial differential equation known as the Fokker-Planck equation. Sufficient conditions for the uniqueness and convergence of the transition probability density function to the steady-state are established. Exact solutions for the transition probability density function are known only for linear stochastic differential equations and certain special first order nonlinear systems. Exact solutions for the steady-state density are known for special nonlinear systems. Eigenfunction expansions are shown to provide a convenient vehicle for obtaining approximate solutions for first order systems and for self-excited oscillators. The first term in an asymptotic expansion of the transition probability density function is found for self-excited oscillators. A class of first passage problems for oscillators, which includes the zero crossing problem, is formulated.
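Schematically, the eigenfunction expansions referred to here take the standard form for a Fokker-Planck operator with a discrete spectrum (notation mine, not the thesis's):

```latex
p(x, t \mid x_0)
\;=\;
\sum_{n=0}^{\infty} e^{-\lambda_n t}\, \psi_n(x_0)\, \phi_n(x),
\qquad 0 = \lambda_0 < \lambda_1 \le \lambda_2 \le \cdots
```

Here the φ_n are eigenfunctions of the Fokker-Planck operator, the ψ_n are eigenfunctions of its adjoint, and the n = 0 term is the steady-state density; truncating the sum gives the kind of approximate solutions and steady-state convergence statements described in the abstract.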
- Research Article
- 10.17169/refubium-26409
- Jul 12, 2019
In the analysis of metastable diffusion processes, Transition Path Theory (TPT) provides a way to quantify the probability of observing a given transition between two disjoint metastable subsets of state space. However, many TPT-based methods for diffusion processes compute the primary objects from TPT, such as the committor and probability current, by solving partial differential equations. The computational performance of these methods is limited by the need for mesh-based computations, the need to estimate the coefficients of the stochastic differential equation that defines the diffusion process, and the use of Markovian processes to approximate the diffusion process. We propose a Monte Carlo method for approximating the primary objects from TPT from sample trajectory data of the diffusion process, without estimating drift or diffusion coefficients. We discretise the state space of the diffusion process using Voronoi tessellations and construct a non-Markovian jump process on the dual Delaunay graph. For the jump process, we define committors, probability currents, and streamlines, and use these to define piecewise constant approximations of the corresponding objects from TPT for diffusion processes. Rigorous error bounds and convergence theorems establish the validity of our approach. A comparison of our method with TPT for Markov chains (Metzner et al., Multiscale Model Simul. 2009) on a triple-well 2D potential provides proof of principle.
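A toy version of the counting idea behind this approach (illustrative only; the paper works on the dual Delaunay graph of a Voronoi tessellation and proves error bounds, none of which is reproduced here):

```python
import numpy as np

# Estimate the committor q(i) = P(hit B before A | current cell i) directly
# from trajectory data by bookkeeping, with no drift or diffusion estimation.
def committor_estimates(cells, A, B, n_cells):
    hits_B = np.zeros(n_cells)
    visits = np.zeros(n_cells)
    fate = None  # does the trajectory *after* the current time hit B first?
    for c in reversed(cells.tolist()):  # single backward sweep
        if c not in A and c not in B and fate is not None:
            visits[c] += 1
            hits_B[c] += fate
        fate = 1.0 if c in B else (0.0 if c in A else fate)
    with np.errstate(invalid="ignore"):
        return hits_B / visits

# Check on a symmetric random walk on {0,...,20} with A = {0}, B = {20},
# where the exact committor is q(i) = i/20.
rng = np.random.default_rng(3)
steps = rng.choice([-1, 1], size=2_000_000)
cells = np.clip(np.cumsum(steps) + 10, 0, 20)
q = committor_estimates(cells, A={0}, B={20}, n_cells=21)
print(np.round(q[1:20], 2))  # should be close to 0.05, 0.10, ..., 0.95
```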
- Conference Article
- 10.1109/hipcw.2015.18
- Dec 16, 2015
When surveying an environment, it is often beneficial to construct empirical models that realistically define how phenomena might unfold. One reason is that such forecasts can help with the placement of either fixed or mobile sensors in widespread domains. In particular, if investigators can determine locations where crucial measurements can be made to either validate the model or extend its forecasting capabilities, then sensors can be allocated accordingly. It is also possible to use details from such models to limit the overall size of the network. This capability is crucial when the cost of physically deploying sensor nodes is prohibitive. Another reason is that empirical models can help lower the energy consumption of battery-operated sensor networks. If it is known in advance that no informative observations will likely be made in certain regions, then the devices in those regions can be remotely disabled. The selective switching of sensors can improve the lifespan of the network. Due to the utility of empirical, predictive models, a number of modeling approaches have been proposed. One popular approach entails constructing differential-equation-based dynamical systems from available sensor observation ensembles. Such dynamical systems permit the simulation of many types of phenomena. They also routinely allow for principled interpolations and extrapolations of the model forecasts to times and places not examined. Furthermore, statistics from the dynamical systems can offer insight into where the sensors should be positioned to collect meaningful information. A defining trait of these dynamical-systems-based approaches is that they are deterministic: the state of a phenomenon is uniquely determined by the system parameters and by sets of previous states. Since there is no element of randomness in these models, many of the inherent uncertainties associated with sensing and characterizing phenomena are completely ignored. Consequently, it is difficult to determine whether the predicted outcome has a high, moderate, or low chance of occurring in the real world. Phrased another way, investigators have little to no advance knowledge of whether they should believe a model's forecasts. Not accounting for uncertainty can additionally yield models that fit the observations well yet provide poor predictions. We develop a novel, empirical modeling approach for constructing stochastic, (non-)linear dynamical systems that describe and predict the evolution of phenomena. Under this model, we assume that the phenomena dynamics are encoded within sensor observations of the environment. We also assume that the sensor observations are modeled by a series of stochastic processes composed of stochastic partial differential equations. These stochastic processes have been designed to capture both continuous and discrete dynamics behaviors. Capturing both types of behaviors helps characterize the type of phenomena that we consider and predict its behaviors. We favor a modeling approach that relies on a series of simple, extensible representations of the dynamics versus a single, complex representation. We assume that the phenomena dynamics can be temporally and spatially segmented into one or more phase-space regions. The number of regions is determined automatically by our model in a dynamics-driven manner.
That is, phase-space regions with simple, slowly-changing dynamics are coarsely partitioned, while regions with complicated, quickly-changing dynamics are finely partitioned. We assign one or more stochastic processes to describe the evolution of the phenomena dynamics for each of these regions. The associated parameters for each of these processes are derived automatically from statistics of the sensor observations (a toy illustration of such a fit follows this abstract). This functionality helps ensure that the model predictions will be accurate. It also relieves the investigator of the need to manually specify model parameter values, which can be a time-consuming and error-prone task. The stochastic processes that we learn typically have several redundant degrees of freedom. Each of these redundancies decreases their evaluation rate and hence how quickly the model can make predictions. To improve the evaluation rate, we propose to uncover reduced-order versions of the stochastic processes. Our simplification strategy is based on nonlinearly projecting the stochastic processes onto a domain with fewer degrees of freedom. These projections are derived in a dynamics-sensitive manner. That is, our model ensures that the projections retain as much as possible about the dynamics when removing any unnecessary details.
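A toy illustration of deriving one region's process parameters from observation statistics, using an Ornstein-Uhlenbeck model as a hypothetical stand-in (the paper's processes are built from stochastic partial differential equations, which are not reproduced here):

```python
import numpy as np

# The OU process dX = theta*(mu - X) dt + sigma dW has an exact AR(1)
# discretisation x[t+1] = a*x[t] + b + eps, so theta, mu, sigma can be
# recovered from a least-squares fit of x[t+1] on x[t] -- "parameters derived
# automatically from statistics of the sensor observations".
def fit_ou(x, dt):
    a, b = np.polyfit(x[:-1], x[1:], 1)          # x[t+1] ~= a*x[t] + b
    resid = x[1:] - (a * x[:-1] + b)
    theta = -np.log(a) / dt
    mu = b / (1.0 - a)
    sigma = np.std(resid) * np.sqrt(2.0 * theta / (1.0 - a**2))
    return theta, mu, sigma

# Synthetic sensor stream from a known OU process, then recover its parameters.
rng = np.random.default_rng(7)
dt, n = 0.01, 100_000
theta_true, mu_true, sigma_true = 2.0, 5.0, 0.8
x = np.empty(n); x[0] = mu_true
for t in range(n - 1):
    x[t + 1] = x[t] + theta_true * (mu_true - x[t]) * dt \
               + sigma_true * np.sqrt(dt) * rng.standard_normal()
print(fit_ou(x, dt))  # should be close to (2.0, 5.0, 0.8)
```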