Arrangements and Likelihood

Abstract

We develop novel tools for computing the likelihood correspondence of an arrangement of hypersurfaces in a projective space, using the module of logarithmic derivations. This object is well studied in the linear case, when the hypersurfaces are hyperplanes. Here we focus on nonlinear scenarios and their applications in statistics and physics.

Similar Papers
  • Research Article
  • Cited by 2
  • 10.1002/andp.200551711-1201
Symmetries and pre‐metric electromagnetism
  • Dec 1, 2005
  • Annalen der Physik
  • D. H. Delphenich

The equations of pre‐metric electromagnetism are formulated as an exterior differential system on the bundle of exterior differential 2‐forms over the spacetime manifold. The general form for the symmetry equations of the system is computed and then specialized to various possible forms for an electromagnetic constitutive law, namely, uniform linear, non‐uniform linear, and uniform nonlinear. It is shown that in the uniform linear case, one has four possible ways of prolonging the symmetry Lie algebra, including prolongation to a Lie algebra of infinitesimal projective transformations of a real four‐dimensional projective space. In the most general non‐uniform linear case, the effect of non‐uniformity on symmetry seems inconclusive in the absence of further specifics, and in the uniform nonlinear case, the overall difference from the uniform linear case amounts to a deformation of the electromagnetic constitutive tensor by the electromagnetic field strengths, which induces a corresponding deformation of the symmetry Lie algebra that was obtained in the linear uniform case.

  • Research Article
  • Cited by 11
  • 10.1016/j.jde.2008.12.022
Dichotomy spectra and Morse decompositions of linear nonautonomous differential equations
  • Jan 29, 2009
  • Journal of Differential Equations
  • Martin Rasmussen


  • Research Article
  • Cited by 1
  • 10.3390/math11194076
An Extended Zeta Function with Applications in Model Building and Bayesian Analysis
  • Sep 26, 2023
  • Mathematics
  • Arak M Mathai

In certain problems in model building and Bayesian analysis, the results end up in forms connected with generalized zeta functions. This necessitates the introduction of an extended form of the generalized zeta function, which is introduced in this paper. In model building situations and in various types of applications in the physical, biological and social sciences and engineering, a basic model is the Gaussian model in the univariate, multivariate and matrix-variate situations. A real scalar variable logistic model behaves like a Gaussian model but with a thicker tail. Hence, for many industrial applications, a logistic model is preferred to a Gaussian model. When we study the properties of a logistic model in the multivariate and matrix-variate cases, in the real and complex domains, the problem invariably ends up in the extended zeta function defined in this paper. Several such extended logistic models are considered. It is also found that certain Bayesian considerations end up in the extended zeta function introduced in this paper. Several such Bayesian models in the multivariate and matrix-variate cases in the real and complex domains are discussed. It is stated in a recent paper that "Quantum Mechanics is just the Bayesian theory generalized to the complex Hilbert space". Hence, the models developed in this paper are expected to have applications in quantum mechanics, communication theory, physics, statistics and related areas.
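The generalized zeta function this abstract builds on is the Hurwitz zeta function ζ(s, a) = Σ_{n≥0} (n + a)^{-s}; the extended form introduced in the paper is not reproduced here. A minimal direct-summation sketch, for illustration only:

```python
# Truncated direct summation of the generalized (Hurwitz) zeta function
#   zeta(s, a) = sum_{n>=0} 1/(n + a)^s,   for Re(s) > 1 and a > 0.
# Purely illustrative: the paper's *extended* zeta function is a further
# generalization not implemented here.
def hurwitz_zeta(s, a, terms=200000):
    return sum(1.0 / (n + a) ** s for n in range(terms))

# zeta(2, 1) is the Riemann zeta value pi^2/6 ~ 1.6449
print(hurwitz_zeta(2.0, 1.0))
```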

  • Research Article
  • Cited by 17
  • 10.1142/s021797922350008x
Inspection of hybrid nanoparticles flow across a nonlinear/linear stretching surface when heat sink/source and thermophoresis particle deposition impacts are significant
  • Sep 5, 2022
  • International Journal of Modern Physics B
  • G K Ramesh + 4 more

The aim of this paper is to highlight the impact of thermophoretic particle deposition (TPD) and a heat source/sink on the steady two-dimensional laminar motion of a Casson hybrid-type nanoliquid over a nonlinear stretched surface. Ordinary differential equations (ODEs) are obtained from the governing partial differential equations (PDEs) via an appropriate similarity transformation. The reduced ODEs are then solved using the shooting method together with the fourth/fifth-order Runge–Kutta–Fehlberg scheme. Finally, tables and graphs are used to display the numerical data. It is seen that the fluid velocity decreases when the porosity parameter and solid nanoparticle volume fraction increase. Heat distribution is enhanced with an increase in the heat source/sink parameter. Concentration decreases with an increase in the thermophoretic parameter. The use of nanoparticles improves heat dispersion but reduces concentration in the linear case, while increasing axial velocity in the nonlinear scenario.
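The solve-by-shooting workflow described above can be sketched generically. The toy boundary-value problem y'' = -y with y(0) = 0, y(1) = 1, the fixed-step RK4 integrator, and the secant correction below are illustrative stand-ins, not the paper's similarity-reduced equations or its RKF45 scheme:

```python
# Generic shooting-method sketch: guess the initial slope y'(0),
# integrate the ODE with classical RK4, and correct the guess with the
# secant method until the far boundary condition y(1) = 1 is met.
import math

def rk4_integrate(f, y0, t0, t1, n=1000):
    """Fixed-step classical RK4 for a first-order system y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def f(t, y):                  # y'' = -y written as a first-order system
    return [y[1], -y[0]]

def shoot(slope):             # terminal value y(1) for initial slope y'(0)
    return rk4_integrate(f, [0.0, slope], 0.0, 1.0)[0]

# Secant iteration on the slope so that y(1) = 1.
s0, s1 = 0.5, 1.5
for _ in range(20):
    r0, r1 = shoot(s0) - 1.0, shoot(s1) - 1.0
    if abs(r1) < 1e-10:
        break
    s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)

# Exact solution is y = sin(t)/sin(1), so y'(0) = 1/sin(1) ~ 1.1884
print(s1)
```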

  • Research Article
  • Cited by 4
  • 10.1063/5.0094667
A nonlinear model of diffusive particle acceleration at a planar shock
  • Jul 1, 2022
  • Physics of Plasmas
  • Dominik Walter + 3 more

We study the process of nonlinear shock acceleration based on a nonlinear diffusion–advection equation. The nonlinearity is introduced via a dependence of the spatial diffusion coefficient on the distribution function of accelerating particles. This dependence reflects the interaction of energetic particles with self-generated waves. After thoroughly testing the grid-based numerical setup with a well-known analytical solution for linear shock acceleration at a specific shock transition, we consider different nonlinear scenarios, assess the influence of various parameters, and discuss the differences of the solutions to those of the linear case. We focus on the following observable features of the acceleration process, for which we quantify the differences in the linear and nonlinear cases: (1) the shape of the momentum spectra of the accelerated particles, (2) the time evolution of the solutions, and (3) the spatial number density profiles.

  • Single Report
  • 10.2172/1114834
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
  • Jan 16, 2014
  • Yousef Saad

The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least-squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and tests and validation of the methods on realistic problems.
1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes.
2. We investigated strategies to improve robustness in parallel preconditioners in the specific case of a PDE with discontinuous coefficients.
3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation, which are often difficult to solve by iterative methods.
4. We also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems.
5. We developed an effective strategy for performing ILU factorizations when the matrix is highly indefinite. The strategy uses shifting in an optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases.
6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs.
7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA, which was made publicly available. It was the first such library to offer complete iterative solvers for GPUs.
8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods.
9. We released a new version (version 3) of our parallel solver, pARMS. As part of this, we tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth.
10. As an application of polynomial preconditioning, we considered the problem of evaluating f(A)v, which arises in statistical sampling.
11. As an application of the methods we developed, we tackled the problem of computing the diagonal of the inverse of a matrix. This arises in statistical applications as well as in many applications in physics. We explored probing methods as well as domain-decomposition-type methods.
12. A collaboration with researchers from Toulouse, France, considered the important problem of computing the Schur complement in a domain-decomposition approach.
13. We explored new ways of preconditioning linear systems based on low-rank approximations.
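The algorithmic skeleton underlying the preconditioned Krylov solvers this report concerns can be shown with the simplest possible preconditioner. This is a Jacobi-preconditioned conjugate gradient sketch, a stand-in for the report's far more sophisticated ILU-type parallel preconditioners:

```python
import numpy as np

# Jacobi-preconditioned conjugate gradient for an SPD system A x = b.
# Same skeleton as the ILU-preconditioned Krylov solvers the report
# develops, but with the trivial preconditioner M = diag(A).
def pcg(A, b, tol=1e-10, maxiter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # apply M^{-1} for M = diag(A)
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: a small SPD tridiagonal system (1-D discrete Laplacian).
n = 50
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))    # residual should be tiny
```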

  • Research Article
  • Cited by 13
  • 10.1063/1.4793406
Self-assembly of cyclic rod-coil diblock copolymers
  • Mar 7, 2013
  • The Journal of Chemical Physics
  • Linli He + 4 more

The phase behavior of cyclic rod-coil diblock copolymer melts is investigated by dissipative particle dynamics simulation. To better understand the effect of chain topological architecture, we also study the linear rod-coil system. Comparison of the calculated phase diagrams of the two rod-coil copolymers reveals that the order-disorder transition point (χN)ODT for cyclic rod-coil diblock copolymers is always higher than that of the equivalent linear rod-coil diblocks. In addition, the phase diagram for the cyclic system is more "symmetrical," due to the topological constraint. Moreover, there are significant differences in the self-assembled overall morphologies and the local molecular arrangements. For example, at frod = 0.5 both systems form lamellar structures, but the rod packing differs greatly between the cyclic and linear cases. Lamellae with rods arranged coplanarly into bilayers occur in cyclic rod-coil diblocks, while a lamellar structure with rods arranged end to end into interdigitated bilayers appears in the linear counterpart. In both lamellar phases, the domain size ratio of cyclic to linear diblocks ranges from 0.63 to 0.70. This is attributed to the fact that the cyclic architecture, with its additional junction, increases the contacts between incompatible blocks and prevents the coil chains from expanding as much as in the linear case. At frod = 0.7, a hexagonally packed cylinder phase is observed for cyclic rod-coil diblocks, while a liquid-crystalline smectic A lamellar phase forms in the linear system. As a result, the cyclization of a linear rod-coil block copolymer can induce remarkable differences in the self-assembly behavior and also greatly diversify its physical properties and applications.

  • Research Article
  • Cited by 1
  • 10.1007/s11222-022-10173-4
Interpolating log-determinant and trace of the powers of matrix A + tB
  • Nov 10, 2022
  • Statistics and Computing
  • Siavash Ameli + 1 more

We develop heuristic interpolation methods for the functions t ↦ log det(A + tB) and t ↦ trace((A + tB)^p), where the matrices A and B are Hermitian and positive (semi-)definite and p and t are real variables. These functions are featured in many applications in statistics, machine learning, and computational physics. The presented interpolation functions are based on the modification of sharp bounds for these functions. We demonstrate the accuracy and performance of the proposed method with numerical examples, namely, the marginal maximum likelihood estimation for Gaussian process regression and the estimation of the regularization parameter of ridge regression with the generalized cross-validation method.
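At small scale the two target functions can be evaluated exactly, which is how interpolants of this kind are typically validated. A sketch using NumPy; the random SPD test matrices are illustrative, not the paper's examples:

```python
import numpy as np

# Exact evaluation of t -> log det(A + t B) and t -> trace((A + t B)^p)
# for small Hermitian positive-definite A and B. Interpolation schemes
# like the paper's are checked against exact values of this kind.
rng = np.random.default_rng(0)

def random_spd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)    # well-conditioned SPD matrix

A, B = random_spd(5), random_spd(5)

def logdet(t):
    sign, val = np.linalg.slogdet(A + t * B)
    return val                         # sign is +1 for SPD matrices

def trace_power(t, p):
    eig = np.linalg.eigvalsh(A + t * B)
    return np.sum(eig ** p)            # trace((A+tB)^p) via eigenvalues

print(logdet(1.0), trace_power(1.0, 0.5))
```

At p = 1 the eigenvalue route must agree with the plain matrix trace, which makes a convenient sanity check.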

  • Supplementary Content
  • 10.48550/arxiv.2009.07385
Interpolating Log-Determinant and Trace of the Powers of Matrix $\mathbf{A} + t \mathbf{B}$
  • Sep 15, 2020
  • arXiv (Cornell University)
  • Siavash Ameli + 1 more

We develop heuristic interpolation methods for the functions $t \mapsto \log \det \left( \mathbf{A} + t \mathbf{B} \right)$ and $t \mapsto \operatorname{trace}\left( (\mathbf{A} + t \mathbf{B})^{p} \right)$ where the matrices $\mathbf{A}$ and $\mathbf{B}$ are Hermitian and positive (semi) definite and $p$ and $t$ are real variables. These functions are featured in many applications in statistics, machine learning, and computational physics. The presented interpolation functions are based on the modification of sharp bounds for these functions. We demonstrate the accuracy and performance of the proposed method with numerical examples, namely, the marginal maximum likelihood estimation for Gaussian process regression and the estimation of the regularization parameter of ridge regression with the generalized cross-validation method.

  • Research Article
  • Cited by 4
  • 10.1007/s11222-014-9491-z
Monte Carlo algorithms for computing α-permanents
  • Jul 29, 2014
  • Statistics and Computing
  • Junshan Wang + 1 more

We consider the computation of the α-permanent of a non-negative n × n matrix. This appears in a wide variety of real applications in statistics, physics and computer science. It is well known that the exact computation is a #P-complete problem. This has resulted in a large collection of simulation-based methods that produce randomized solutions whose complexity is only polynomial in n. This paper reviews and develops algorithms for the computation of both the permanent (α = 1) and the α > 0 permanent. In the context of binary n × n matrices, a variety of Markov chain Monte Carlo (MCMC) computational algorithms have been introduced in the literature whose cost, in order to achieve a given level of accuracy, is O(n^7 log^4(n)); see Bezakova (Faster Markov chain Monte Carlo algorithms for the permanent and binary contingency tables. University of Chicago, Chicago, 2008) and Jerrum et al. (J Assoc Comput Mach 51:671-697, 2004). These algorithms use a particular collection of probability distributions, the "ideal" versions of which are (in some sense) not known and need to be approximated. In this paper we propose an adaptive sequential Monte Carlo (SMC) algorithm that can estimate both the permanent and the ideal sequence of probabilities on the fly, with little user input. We provide theoretical results on the SMC estimate of the permanent, establishing its convergence. We also analyze the relative variance of the estimate associated with an "ideal" algorithm (related to, but not identical to, the one we develop), in particular computing explicit bounds on the relative variance which depend upon n. As this analysis is for an ideal algorithm, it gives a lower bound on the computational cost required to achieve an arbitrarily small relative variance; we find that this cost is O(n^4 log^4(n)).
For the α-permanent, perhaps the gold-standard algorithm is the importance sampling algorithm of Kou and McCullagh (Biometrika 96:635-644, 2009); in this paper we develop and compare new algorithms against this method. A priori, one expects, due to the weight degeneracy problem, that the method of Kou and McCullagh might perform very badly in comparison to the more advanced SMC methods we consider. We also present a statistical application of the α-permanent to the statistical estimation of a boson point process, and MCMC methods to fit the associated model to data.
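For reference, the quantity itself is per_α(A) = Σ_σ α^(#cycles of σ) Π_i A[i, σ(i)], summed over all permutations σ; at α = 1 this is the ordinary permanent. A brute-force sketch, whose factorial cost is precisely why the Monte Carlo methods above are needed:

```python
import itertools

# Brute-force alpha-permanent of an n x n matrix:
#   per_alpha(A) = sum over permutations sigma of
#                  alpha^(#cycles of sigma) * prod_i A[i][sigma(i)].
# At alpha = 1 this reduces to the ordinary permanent. Exponential
# cost: usable only for tiny n.

def cycle_count(sigma):
    seen, cycles = set(), 0
    for start in range(len(sigma)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = sigma[j]
    return cycles

def alpha_permanent(A, alpha):
    n = len(A)
    total = 0.0
    for sigma in itertools.permutations(range(n)):
        prod = 1.0
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += alpha ** cycle_count(sigma) * prod
    return total

# For the all-ones matrix, per_alpha = alpha*(alpha+1)*...*(alpha+n-1),
# so for n = 3 and alpha = 2 this gives 2*3*4 = 24.
ones = [[1.0] * 3 for _ in range(3)]
print(alpha_permanent(ones, 2.0))
```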

  • Conference Article
  • Cited by 1
  • 10.1117/12.718128
Image segmentation techniques for improved processing of landmine responses in ground-penetrating radar data
  • Apr 27, 2007
  • Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE
  • Peter A Torrione + 1 more

As ground penetrating radar sensor phenomenology improves, more advanced statistical processing approaches become applicable to the problem of landmine detection in GPR data. Most previous studies on landmine detection in GPR data have focused on the application of statistics and physics based prescreening algorithms, new feature extraction approaches, and improved feature classification techniques. In the typical framework, prescreening algorithms provide spatial location information of anomalous responses in down-track / cross-track coordinates, and feature extraction algorithms are then tasked with generating low-dimensional information-bearing feature sets from these spatial locations. However in time-domain GPR, a significant portion of the data collected at prescreener flagged locations may be unrelated to the true anomaly responses - e.g. ground bounce response, responses either temporally before or after the anomalous response, etc. The ability to segment the information-bearing region of the GPR image from the background of the image may thus provide improved performance for feature-based processing of anomaly responses. In this work we will explore the application of Markov random fields (MRFs) to the problem of anomaly/background segmentation in GPR data. Preliminary results suggest the potential for improved feature extraction and overall performance gains via application of image segmentation approaches prior to feature extraction.
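One standard way to apply an MRF to this kind of anomaly/background segmentation is iterated conditional modes (ICM) on a binary Ising-style energy. The toy image, class means, and smoothness weight below are all illustrative assumptions, not the paper's model or data:

```python
import numpy as np

# Toy binary MRF segmentation by iterated conditional modes (ICM).
# Energy per pixel: squared distance of the pixel value to its class
# mean (unary term) plus a Potts penalty for disagreeing 4-neighbours.
rng = np.random.default_rng(1)

# Synthetic "anomaly on background" image: bright square plus noise.
img = rng.normal(0.0, 0.3, (32, 32))
img[10:22, 10:22] += 1.0

means = np.array([0.0, 1.0])        # background / anomaly class means
beta = 1.5                          # smoothness weight (assumed value)
labels = (np.abs(img - means[0]) > np.abs(img - means[1])).astype(int)

for _ in range(5):                  # ICM sweeps: greedy local updates
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            best, best_e = labels[i, j], np.inf
            for k in (0, 1):
                e = (img[i, j] - means[k]) ** 2
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < 32 and 0 <= nj < 32:
                        e += beta * (labels[ni, nj] != k)
                if e < best_e:
                    best, best_e = k, e
            labels[i, j] = best

print(labels[16, 16], labels[0, 0])  # inside the square vs. background
```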

  • Research Article
  • Cited by 20
  • 10.2118/65-03-02
Fundamentals and Applications of the Monte Carlo Method
  • Jul 1, 1965
  • Journal of Canadian Petroleum Technology
  • E Stoian

Perhaps no industry is more vitally concerned with risk than the oil and gas industry, and few professional men other than petroleum engineers are required to recommend higher investments on the basis of such uncertain and limited information. In recent years, the number of methods dealing with risk and uncertainty has grown extensively, so that the classical approach, using analytical procedures and single-valued parameters, has undergone a significant transformation. The use of stochastic variables, such as those frequently encountered in the oil industry, is now economically feasible in the evaluation of an increasing number of problems by the application of Monte Carlo techniques. This paper defines the Monte Carlo method as a subset of simulation techniques and a combination of sampling theory and numerical analysis. Briefly, the basic technique of Monte Carlo simulation involves the representation of a situation in logical terms so that, when the pertinent data are inserted, a mathematical solution becomes possible. Using random numbers generated by an "automatic penny-tossing machine" and a cumulative frequency distribution, the behaviour pattern of the particular case can be determined by a process of statistical experimentation. In practical applications, the probabilistic data expressed in one or several distributions may pertain to geological exploration, discovery processes, oil-in-place evaluations or the productivity of heterogeneous reservoirs. The great variety of probability models used to date (e.g., normal, log-normal, skewed log-normal, linear, multi-modal, discontinuous, theoretical, experimental) confirms a broad range of experimental computations and a genuine interest in realistic representations of random impacts encountered in practice. Emphasis in this paper is directed to the salient characteristics of the Monte Carlo method, with particular reference to applications in areas related to the oil and gas industry.
Attention is focused on reservoir engineering models. Nevertheless, management facets of the oil and gas business are considered along with other applications in statistics, mathematics, physics and engineering. Sample-size-reducing techniques and the use of digital computers are also discussed.
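The basic sampling step the abstract describes, a random number mapped through a cumulative frequency distribution, is inverse-transform sampling. A minimal sketch; the discrete "recoverable reserves" distribution is purely illustrative:

```python
import random

# Inverse-transform (cumulative-frequency) sampling: draw a uniform
# random number and walk the cumulative distribution until it is
# exceeded. The outcomes/probabilities below are made up for the demo.
random.seed(42)

outcomes = [10.0, 20.0, 50.0]       # e.g. reserves in some unit
probs    = [0.5,  0.3,  0.2]

def sample():
    u, cum = random.random(), 0.0
    for x, p in zip(outcomes, probs):
        cum += p
        if u < cum:
            return x
    return outcomes[-1]             # guard against rounding at u ~ 1

draws = [sample() for _ in range(100000)]
print(sum(draws) / len(draws))      # ~ expected value 0.5*10 + 0.3*20 + 0.2*50 = 21
```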

  • Book Chapter
  • 10.1017/cbo9780511609107.007
Von Mises' frequentist probabilities
  • Jan 28, 1994
  • Jan Von Plato

MECHANICS, PROBABILITY, AND POSITIVISM Richard von Mises was an applied mathematician. He first specialized in mechanics, hydrodynamics especially. By applied, he really meant it: A book of 1918, for example, dealt with the 'elements of technical hydromechanics.' Another related specialty was the theory of flight, much in vogue early on in the century. His work on probability starts properly around 1918, and from the same time are his first writings on foundational problems in science: on foundations of probability in 1919, and on classical mechanics in 1920. Von Mises' philosophical book Wahrscheinlichkeit, Statistik und Wahrheit of 1928 was the third volume in the Vienna Circle series 'Schriften zur wissenschaftlichen Weltauffassung', edited by Philipp Frank and Moritz Schlick. The year 1931 marked the publication of von Mises' big book on probability theory, Wahrscheinlichkeitsrechnung, whose exact title adds, 'and its application in statistics and theoretical physics.' The posthumous Mathematical Theory of Probability and Statistics is based on lectures from the early 1950s. Von Mises was a declared positivist, identifying himself with the philosophy of the Berlin group, the Vienna Circle, and the Unity of Science Movement. His Kleines Lehrbuch des Positivismus (1939) appeared in an English version in 1951 as Positivism: A Study in Human Understanding. It attempts to give a broad presentation of the logical empiricist world view, from foundations of knowledge and the sciences to morals and society.

  • Research Article
  • Cited by 10
  • 10.1002/rsa.20311
A boundary corrected expansion of the moments of nearest neighbor distributions
  • Jul 13, 2010
  • Random Structures & Algorithms
  • Elia Liitiäinen + 2 more

In this article, the moments of nearest neighbor distance distributions are examined. While the asymptotic form of such moments is well known, the boundary effect has thus far resisted a rigorous analysis. Our goal is to develop a new technique that allows a closed-form high-order expansion, where the boundaries are taken into account up to the first order. The resulting theoretical predictions are tested via simulations and found to be much more accurate than the first-order approximation obtained by neglecting the boundaries. While our results are of theoretical interest, they also have important applications in statistics and physics. As a concrete example, we mention estimating Rényi entropies of probability distributions. Moreover, the algebraic technique developed may turn out to be useful in other, related problems, including estimation of the Shannon differential entropy.
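The leading-order asymptotic these expansions refine is easy to probe by simulation: for n uniform points in the unit square, the mean nearest-neighbor distance is approximately 1/(2√n) in the interior, with the boundary pushing the observed mean slightly above that. A brute-force sketch (the O(n²) pairwise search is for clarity, not efficiency):

```python
import random, math

# Monte Carlo check of the first-order asymptotic for the mean
# nearest-neighbour distance of n uniform points in the unit square:
# E[d_NN] ~ 1/(2 sqrt(n)), before boundary corrections.
random.seed(7)

def mean_nn_distance(n):
    pts = [(random.random(), random.random()) for _ in range(n)]
    total = 0.0
    for i, (x, y) in enumerate(pts):
        best = math.inf
        for j, (u, v) in enumerate(pts):
            if i != j:
                d = (x - u) ** 2 + (y - v) ** 2
                if d < best:
                    best = d
        total += math.sqrt(best)
    return total / n

n = 1000
print(mean_nn_distance(n), 0.5 / math.sqrt(n))
```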

  • Research Article
  • Cited by 1
  • 10.1080/16583655.2020.1840855
On the quaternion projective space
  • Jan 1, 2020
  • Journal of Taibah University for Science
  • Y Omar + 4 more

Apart from being a vital and exciting field in mathematics with interesting results, projective spaces have various applications in design theory, coding theory, physics, combinatorics, number theory and extremal combinatorial problems. In this paper, we consider real, complex and quaternion projective spaces. We focus on the geometric feature of the sectional curvatures. We first study the real and complex projective spaces. We prove that their sectional curvatures are constant. Then, we consider the quaternion projective space. Specifically, we prove that the quaternion projective space has a positive constant sectional curvature. We also determine the pinching constant for the complex and quaternion projective spaces.
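The quantity studied throughout that abstract is the sectional curvature. For a plane spanned by tangent vectors X, Y, it is defined (this is the standard Riemannian-geometry definition, not a statement of the paper's results) as:

```latex
% Sectional curvature of the plane spanned by tangent vectors X and Y,
% where R denotes the Riemann curvature tensor:
\[
  K(X, Y) \;=\;
  \frac{\langle R(X, Y)Y,\, X\rangle}
       {\lVert X\rVert^{2}\,\lVert Y\rVert^{2} - \langle X, Y\rangle^{2}} .
\]
```

A space has constant sectional curvature when K(X, Y) takes the same value for every choice of plane at every point, which is the property the paper establishes for the projective spaces it considers.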
