Modified Engel Algorithm and Applications in Absorbing/Non-Absorbing Markov Chains and Monopoly Game
The Engel algorithm was created to solve chip-firing games and can be used to find the stationary distribution of absorbing Markov chains. Kaushal et al. developed a MATLAB-based version of the generalized Engel algorithm based on Engel's probabilistic abacus theory. This paper introduces a modified version of the generalized Engel algorithm, which we call the modified Engel algorithm, or the mEngel algorithm for short. This modified version is designed to handle non-absorbing Markov chains. It does so by decomposing the transition matrix into two matrices, a numerator matrix and a denominator matrix, such that each entry of the transition matrix is the ratio of the corresponding entries of the two. In a nested iteration, these matrices convert a non-absorbing Markov chain into an absorbing one and back again, thereby approximating the solution of the non-absorbing chain until its distribution converges to the stationary distribution. Our results show that the numerical outcomes of the mEngel algorithm agree with those obtained from the power method and from the canonical decomposition of absorbing Markov chains. We provide an example, Torrence's problem, to illustrate the application of absorbing probabilities. Furthermore, the proposed algorithm analyzes the Monopoly transition matrix as a non-absorbing chain derived from the rules of the Monopoly game, a complete-information dynamic game; in particular, the probability of landing on the Jail square is determined by the order of the product of the movement, Jail, Chance, and Community Chest matrices. The Long Jail strategy, the Short Jail strategy, and the strategy of getting out of Jail by rolling consecutive doubles three times are formulated and tested. In addition, choosing which color group to buy is also an important strategy. By comparing the probability distribution of each strategy and the profit return for each property and each color group of properties, we determine which strategy should be used when playing Monopoly. In conclusion, the mEngel algorithm, implemented in R, offers an alternative approach to solving the Monopoly game and demonstrates practical value.
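Since the paper's implementation is in R, the two baselines it validates against can be sketched in a few lines of R as well. The snippet below is a minimal illustration on an invented 4-state chain, not the authors' mEngel code: it computes absorption probabilities once via the canonical decomposition (fundamental matrix N = (I - Q)^(-1), absorption matrix B = N R) and once via the power method, which repeatedly multiplies an initial distribution by the transition matrix.

```r
# Transition matrix in canonical form: states 1-2 transient, 3-4 absorbing
# (all numbers illustrative).
P <- rbind(c(0.5, 0.2, 0.2, 0.1),
           c(0.3, 0.3, 0.1, 0.3),
           c(0.0, 0.0, 1.0, 0.0),
           c(0.0, 0.0, 0.0, 1.0))

Q <- P[1:2, 1:2]               # transient-to-transient block
R <- P[1:2, 3:4]               # transient-to-absorbing block
N <- solve(diag(2) - Q)        # fundamental matrix N = (I - Q)^(-1)
B <- N %*% R                   # absorption probabilities (rows sum to 1)

# Power method: propagate a point mass on state 1 until it stabilizes.
x <- c(1, 0, 0, 0)
for (k in 1:500) x <- x %*% P
print(B[1, ])                  # canonical-decomposition answer
print(x[3:4])                  # power-method answer; the two should agree
```

Because all transient mass eventually drains into the absorbing coordinates, the tail entries of the power iterate match the corresponding row of B.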
- 10.1201/9781003011583
- May 6, 2021
- 10.3390/math13121994
- Jun 17, 2025
- Mathematics
- 10.1080/002073901753124646
- Sep 1, 2001
- International Journal of Mathematical Education in Science and Technology
- 10.1080/07468342.1997.11973857
- May 1, 1997
- The College Mathematics Journal
- 10.1080/10724117.2018.1518840
- Oct 1, 2018
- Math Horizons
- 10.1080/0025570x.1997.11996573
- Dec 1, 1997
- Mathematics Magazine
- 10.1080/0025570x.2003.11953165
- Apr 1, 2003
- Mathematics Magazine
- 10.1080/00029890.1959.11989252
- Feb 1, 1959
- The American Mathematical Monthly
- 10.1080/09332480.1999.10542173
- Sep 1, 1999
- CHANCE
- 10.1007/bf00590021
- Mar 1, 1975
- Educational Studies in Mathematics
- Research Article
- 10.1016/j.jtbi.2025.112086
- May 1, 2025
- Journal of theoretical biology
Absorbing Markov chain model of PrEP drug adherence to estimate adherence decay rate and probability distribution in clinical trials.
- Research Article
- 10.1137/100798776
- Oct 1, 2011
- SIAM Journal on Matrix Analysis and Applications
This work presents an approach for reducing the number of arithmetic operations involved in the computation of a stationary distribution for a finite Markov chain. The proposed method relies on a particular decomposition of a transition-probability matrix called stochastic factorization. The idea is simple: when a transition matrix is represented as the product of two stochastic matrices, one can swap the factors of the multiplication to obtain another transition matrix, potentially much smaller than the original. We show in the paper that the stationary distributions of both Markov chains are related through a linear transformation, which opens up the possibility of using the smaller chain to compute the stationary distribution of the original model. In order to support the application of stochastic factorization, we prove that the model derived from it retains all the properties of the original chain which are relevant to the stationary distribution computation. Specifically, we show that (i) for each recurrent class in the original Markov chain there is a corresponding class in the derived model with the same period and, given some simple assumptions about the factorization, (ii) the original chain is irreducible if and only if the derived chain is irreducible and (iii) the original chain is regular if and only if the derived chain is regular. The paper also addresses some issues associated with the application of the proposed approach in practice and briefly discusses how stochastic factorization can be used to reduce the number of operations needed to compute the fundamental matrix of an absorbing Markov chain.
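The swap at the heart of this abstract is easy to demonstrate numerically. In the R sketch below the factor matrices are invented, not taken from the paper: a 3-state chain is factored as P = DK, the smaller swapped chain Pbar = KD is formed, and the stationary distribution of P is recovered through the linear transformation pi = pbar K (which is stationary for P because pi P = pbar (KD) K = pbar K = pi).

```r
# Invented stochastic factors (n = 3, m = 2); not the paper's example.
D <- rbind(c(1.0, 0.0),
           c(0.3, 0.7),
           c(0.0, 1.0))            # 3 x 2, rows sum to 1
K <- rbind(c(0.6, 0.3, 0.1),
           c(0.1, 0.4, 0.5))       # 2 x 3, rows sum to 1

P    <- D %*% K                    # original 3-state chain
Pbar <- K %*% D                    # swapped, smaller 2-state chain

ev   <- eigen(t(Pbar))             # leading left eigenvector of Pbar
pbar <- Re(ev$vectors[, 1])
pbar <- pbar / sum(pbar)           # stationary distribution of Pbar

pi <- as.vector(pbar %*% K)        # linear transformation back to P
print(pi)
print(as.vector(pi %*% P))         # equals pi: pi is stationary for P
```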
- Research Article
- 10.1002/rsa.21119
- Nov 8, 2022
- Random Structures & Algorithms
We introduce a deterministic analogue of Markov chains that we call the hunger game. Like rotor‐routing, the hunger game deterministically mimics the behavior of both recurrent Markov chains and absorbing Markov chains. In the case of recurrent Markov chains with finitely many states, hunger game simulation concentrates around the stationary distribution with discrepancy falling off like 1/n, where n is the number of simulation steps; in the case of absorbing Markov chains with finitely many states, hunger game simulation also exhibits concentration for hitting measures and expected hitting times, with discrepancy falling off like 1/n rather than 1/√n. When transition probabilities in a finite Markov chain are rational, the game is eventually periodic; the period seems to be the same for all initial configurations, and the basin of attraction appears to tile the configuration space (the set of hunger vectors) by translation, but we have not proved this.
- Research Article
- 10.1029/2021jb022480
- Apr 1, 2022
- Journal of Geophysical Research: Solid Earth
Cellular automata have proven effective in obtaining statistical insights into expected time series, magnitude‐frequency distributions, and average slip histories of earthquakes by confirming, for instance, the Gutenberg‐Richter magnitude‐frequency distribution and the existence of scaling functions for slip histories. Yet, exhaustive modeling is often required to obtain such insights since the model behavior is generally difficult to predict from fixed input parameters, such as the dissipation and long‐range stress interaction distance. We demonstrate that the temporal dynamics of a cellular automaton (CA), representing discretized equations of motion, can be simplified and modeled as an absorbing Markov chain with transition matrices that are fully determined by CA parameters. Time series, frequency‐size distributions, and slip histories of the Markov chain Monte Carlo (MCMC) and CA models are stochastically equivalent. The proposed method is a mean‐field approximation that replicates temporal CA statistics by ignoring spatial components. Fundamentally, the temporal portion of CA can be represented as a memoryless process in which the current outcome only depends on the immediate past. We believe the transparency of the statistical model may provide pertinent insights into the mean‐field behavior of a variety of physical applications near a critical state, including earthquake and avalanche patterns. For instance, the average slip histories display a typical but asymmetric shape due to a preferred path through probability space with initial acceleration of slip rate to peak size followed by slower deceleration toward rupture arrest.
- Research Article
- 10.15675/gepros.v0i4.181
- Dec 1, 2007
For companies to remain competitive in their markets, efficient cost management becomes indispensable. This article is a product cost analysis and study for a micro company using absorbing Markov chains. The data were summarized using a productive-system scheme that considers aspects such as production, inspection, shipping, and waste. Product cost analysis in this productive system is conducted from the determination and estimation of all costs involved in the production, inspection, and shipping phases. The production capacity probabilities were determined for each subsystem. The subsystems are modeled by status diagrams, which graphically represent the statuses of system components and the transitions between them. By employing absorbing Markov chains, it was possible to ascertain the method's effective contribution towards decision making in cost management processes.
Keywords: Absorbing Markov Chains; Transition Matrices; Costs
- Research Article
- 10.3390/educsci10120377
- Dec 13, 2020
- Education Sciences
American universities use a procedure based on a rolling six-year graduation rate to calculate statistics regarding their students' final educational outcomes (graduating or not graduating). As an alternative to the six-year graduation rate method, many studies have applied absorbing Markov chains for estimating graduation rates. In both cases, a frequentist approach is used. For the standard six-year graduation rate method, the frequentist approach corresponds to counting the number of students who finished their program within six years and dividing by the number of students who entered that year. In the case of absorbing Markov chains, the frequentist approach is used to compute the underlying transition matrix, which is then used to estimate the graduation rate. In this paper, we apply a sensitivity analysis to compare the performance of the standard six-year graduation rate method with that of absorbing Markov chains. Through the analysis, we highlight significant limitations with regard to the estimation accuracy of both approaches when applied to small sample sizes or cohorts at a university. Additionally, we note that the absorbing Markov chain method introduces a significant bias, which leads to an underestimation of the true graduation rate. To overcome both these challenges, we propose and evaluate the use of a regularly updating multi-level absorbing Markov chain (RUML-AMC) in which the transition matrix is updated from year to year. We empirically demonstrate that the proposed RUML-AMC approach nearly eliminates estimation bias while reducing the estimation variation by more than 40%, especially for populations with small sample sizes.
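The frequentist step described above can be sketched in a few lines of R. The cohort counts and state labels here are invented for illustration, not the paper's data: the transition matrix is estimated row-wise from observed year-to-year transitions, and the long-run graduation probability is then read off the absorbing-chain structure.

```r
# Invented cohort counts: rows are current states (Year1, Year2), columns
# are next-year states (Year1, Year2, Graduated, DroppedOut).
counts <- rbind(c(10, 80, 0, 10),
                c(0, 15, 70, 15))
Phat <- counts / rowSums(counts)       # frequentist (MLE) row normalization

Q <- Phat[, 1:2]                       # transient -> transient block
R <- Phat[, 3:4]                       # transient -> absorbing block
B <- solve(diag(2) - Q) %*% R          # long-run absorption probabilities
B[1, 1]                                # estimated graduation probability
```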
- Research Article
- 10.1371/journal.pone.0263979
- Feb 17, 2022
- PLOS ONE
Interacting strategies in evolutionary games are studied analytically in a well-mixed population using a Markov chain method. By establishing a correspondence between an evolutionary game and Markov chain dynamics, we show that results obtained from the fundamental matrix method in Markov chain dynamics are equivalent to the corresponding ones in the evolutionary game. In the conventional fundamental matrix method, quantities like fixation probability and fixation time are calculable. Using a theorem in the fundamental matrix method, the conditional fixation time in the absorbing Markov chain is calculable. Also, in the ergodic Markov chain, the stationary probability distribution that describes the Markov chain's stationary state is calculable analytically. Finally, the rock-scissors-paper evolutionary game is evaluated as an example, and the results of the analytical method and simulations are compared. Using this analytical method saves time and computational resources compared to prevalent simulation methods.
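A hedged sketch of the fundamental matrix method the abstract invokes, on an invented 4-state birth-death chain with two absorbing endpoints (fixation of either strategy): the fundamental matrix N = (I - Q)^(-1) yields both fixation probabilities (N R) and expected unconditional fixation times (N 1).

```r
# Invented birth-death chain: states 1 and 4 absorbing, 2 and 3 transient.
P <- rbind(c(1.0, 0.0, 0.0, 0.0),
           c(0.4, 0.3, 0.3, 0.0),
           c(0.0, 0.3, 0.3, 0.4),
           c(0.0, 0.0, 0.0, 1.0))

Q <- P[2:3, 2:3]                   # transient block
R <- P[2:3, c(1, 4)]               # transient -> absorbing block
N <- solve(diag(2) - Q)            # fundamental matrix

fix_prob <- N %*% R                # fixation probabilities from each state
fix_time <- N %*% rep(1, 2)        # expected fixation times
print(fix_prob); print(fix_time)
```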
- Research Article
- 10.1080/03610926.2015.1083108
- May 13, 2016
- Communications in Statistics - Theory and Methods
In this article, a stock-forecasting model is developed to analyze the stock price variation of the Taiwanese company HTC. The main difference from previous articles is that this study uses ten years of recent HTC data to build a Markov transition matrix. Instead of trying to predict the stock price variation through the traditional approach to the HTC stock problem, we integrate two types of Markov chain that are used in different ways. One is a regular Markov chain, and the other is an absorbing Markov chain. Through a regular Markov chain, we can efficiently obtain important information, such as what happens in the long run or whether the distribution of the states tends to stabilize over time. Next, we used an artificial-variable technique to create an absorbing Markov chain, which provides information about the period of increases before the HTC stock arrives at the decreasing state. We thus provide investors with information on how long the HTC stock will keep increasing before its price begins to fall, which is extremely important information for them.
- Research Article
- 10.21914/anziamj.v53i0.5111
- Apr 28, 2012
- ANZIAM Journal
Stationarity of the transition probabilities in the Markov chain formulation of owner payments on projects
- Research Article
- 10.1515/spma-2017-0006
- Jan 26, 2017
- Special Matrices
In order to fully characterize the state-transition behaviour of finite Markov chains, one needs to provide the corresponding transition matrix P. In many applications, such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from state i to state j. This sampling can be computationally very demanding. Therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy that provides a partial sampling of only a subset of the rows of such a matrix P. Our proposed approach fits very well to stochastic processes stemming from simulation of molecular systems or random walks on graphs, and it differs from matrix-completion approaches, which try to approximate the transition matrix by using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we estimate the stationary distribution from a partially given matrix P. Second, we estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we apply Robust Perron Cluster Analysis (PCCA+) in order to identify metastabilities using this subspace.
- Research Article
- 10.9790/5728-1204027074
- Apr 1, 2016
- IOSR Journal of Mathematics
In this study, we model the occurrence and length of wet, medium wet, and dry spells by the Markov chain that best describes the rainfall pattern of Bungoma County (Western Kenya). This is achieved by Markov chain theory and estimation of the probabilities of the chain by maximum likelihood (MLE). Also computed is the distribution of the length of each spell (wet, medium wet, and dry), from which the central moments of the rainfall pattern are computed. The model developed is applied to rainfall data from Bungoma meteorological station. A three-by-three transition matrix is obtained and used to predict the weather pattern. It is observed that, if everything remains constant, prediction can be certain by the twelfth year, as the matrix shows stationarity. The three states are recurrent, non-null, and aperiodic, hence forming an ergodic chain.
Keywords: Markov chain, Wet spell, Medium wet spell, Dry spell, Prediction, Stationary distribution
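The stationarity observation can be checked numerically. The sketch below uses made-up probabilities rather than the Bungoma estimates: raising an ergodic 3-state matrix to the twelfth power shows the rows coinciding, i.e. the chain has effectively reached its stationary distribution regardless of the starting state.

```r
# Made-up 3-state rainfall chain (wet, medium wet, dry); illustrative only.
P <- rbind(c(0.6, 0.3, 0.1),
           c(0.3, 0.4, 0.3),
           c(0.2, 0.3, 0.5))

Pn <- diag(3)
for (k in 1:12) Pn <- Pn %*% P     # P^12, cf. the "twelfth year" remark
print(round(Pn, 4))                # all three rows are (nearly) identical
```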
- Research Article
- 10.1109/tac.2015.2465271
- Jun 1, 2016
- IEEE Transactions on Automatic Control
This paper presents the stability and criticality analysis of integer linear programs with respect to perturbations in stochastic data given as Markov chains. These perturbations affect the initial distribution, the transition matrix, or the stationary distribution of Markov chains. Stability analysis is concerned with obtaining the set of all perturbations for which a solution remains optimal. This paper gives expressions for stability regions for perturbations in the initial distribution, the transition matrix, the stationary distribution, and the product of elements of the transition matrix and the stationary distribution. Furthermore, criticality measures that describe the sensitivity of the objective function with respect to an element of the problem data are derived. Stability regions that preserve the stochasticity of the problem data are given. Finally, stability regions for perturbations of elements of the transition matrix, given that the problem is not linear in the initial distribution or the transition matrix, are obtained using a small perturbation analysis. The results are applied to sensor placement problems and numerical examples are given.
- Book Chapter
- 10.1017/cbo9780511819407.006
- May 26, 2011
We consider hidden Markov chains obtained by passing a Markov chain with rare transitions through a noisy memoryless channel, and obtain asymptotic estimates for the entropy of the resulting hidden Markov chain as the transition rate is reduced to zero. Let $(X_n)$ be a Markov chain with finite state space $S$ and transition matrix $P(p)$, and let $(Y_n)$ be the hidden Markov chain observed by passing $(X_n)$ through a homogeneous noisy memoryless channel; i.e., $Y$ takes values in a set $T$, and there exists a matrix $Q$ such that $P(Y_n = j \mid X_n = i,\, X_{-\infty}^{n-1},\, X_{n+1}^{\infty},\, Y_{-\infty}^{n-1},\, Y_{n+1}^{\infty}) = Q_{ij}$. We make the additional assumption on the channel that the rows of $Q$ are distinct; in this case we call the channel statistically distinguishing. We assume that $P(p)$ is of the form $I + pA$, where $A$ is a matrix with negative entries on the diagonal, non-negative off-diagonal entries, and zero row sums. We further assume that for small positive $p$ the Markov chain with transition matrix $P(p)$ is irreducible. Notice that for Markov chains of this form the invariant distribution $(\pi_i)_{i \in S}$ does not depend on $p$; in this case, we say that for small positive values of $p$ the Markov chain is in a rare-transition regime. We adopt the convention that $H$ denotes the entropy of a finite partition, whereas $h$ denotes the entropy of a process (the entropy rate, in information-theory terminology). Given an irreducible Markov chain with transition matrix $P$, we let $h(P)$ be the entropy of the Markov chain, i.e. $h(P) = -\sum_{i,j} \pi_i P_{ij} \log P_{ij}$, where $(\pi_i)$ is the (unique) invariant distribution of the Markov chain and, as usual, $0 \log 0 = 0$. We also let $H_{\mathrm{chan}}(i)$ be the entropy of the output of the channel when the input symbol is $i$, i.e. $H_{\mathrm{chan}}(i) = -\sum_{j \in T} Q_{ij} \log Q_{ij}$, and let $h(Y)$ denote the entropy of $Y$.
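The entropy-rate formula quoted above translates directly into code. The following R function is our transcription of $h(P) = -\sum_{i,j} \pi_i P_{ij} \log P_{ij}$ with the convention $0 \log 0 = 0$; the 2-state example matrix is invented.

```r
# Entropy rate of an irreducible Markov chain with transition matrix P.
entropy_rate <- function(P) {
  ev <- eigen(t(P))                          # left eigenvectors of P
  pi <- Re(ev$vectors[, 1])                  # eigenvector for eigenvalue 1
  pi <- pi / sum(pi)                         # normalize to a distribution
  terms <- pi * P * log(P)                   # pi_i * P_ij * log(P_ij)
  -sum(terms[P > 0])                         # drop 0 log 0 (NaN) entries
}
entropy_rate(rbind(c(0.9, 0.1),
                   c(0.2, 0.8)))
```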
- Conference Article
- 10.21533/scjournal.v2i1.43.g43
- Mar 30, 2013
This paper introduces a general class of mathematical models, Markov chain models, which are appropriate for modeling phenomena in the physical and life sciences, medicine, engineering, and the social sciences. Applications of Markov chains are quite common and have become a standard tool of decision making. What matters in predicting the future of the system is its present state, not the path by which the system reached its present state. Two methods are presented that exemplify the flexibility of this approach: the regular Markov chain and the absorbing Markov chain. The long-term trend in absorbing Markov chains depends on the initial state; in addition, changing the initial state can change the final result. This property distinguishes absorbing Markov chains from regular Markov chains, where the final result is independent of the initial state. The problems are formulated by using the Wolfram Mathematical Programming System.
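The contrast drawn above is easy to demonstrate. In the sketch below (example chains ours, and in R rather than the Wolfram system this paper uses), the regular chain forgets its starting state while the absorbing chain does not.

```r
# Propagate an initial distribution x through transition matrix P n times.
propagate <- function(x, P, n = 200) { for (k in 1:n) x <- x %*% P; x }

# Regular chain: the limit is the same for every starting state.
Preg <- rbind(c(0.7, 0.3),
              c(0.4, 0.6))
propagate(c(1, 0), Preg)        # -> stationary distribution
propagate(c(0, 1), Preg)        # -> the same stationary distribution

# Absorbing chain (states 2 and 3 absorbing): the limit depends on the start.
Pabs <- rbind(c(0.2, 0.5, 0.3),
              c(0.0, 1.0, 0.0),
              c(0.0, 0.0, 1.0))
propagate(c(1, 0, 0), Pabs)     # splits mass between both absorbing states
propagate(c(0, 1, 0), Pabs)     # stays entirely in state 2
```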