A stochastic model of area-biased Kpenadidum distribution with the characteristics and applications to real lifetime data

Abstract

In this research, we explore the statistical characteristics of the Area-Biased Kpenadidum Distribution (ABKD), a novel probability model. The maximum likelihood method is used to estimate the parameters, and the asymptotic properties of the estimators are discussed. The new distribution is compared with the Shanker, Lindley, and Kpenadidum distributions; when fitted to cancer data, it shows a good fit.
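The abstract does not give the ABKD density, so as a hedged illustration of the maximum-likelihood step, the sketch below fits the one-parameter Lindley distribution (one of the comparison models, with pdf f(x; θ) = θ²/(1+θ)·(1+x)·e^(−θx), x > 0) to simulated lifetime data by numerically maximizing the log-likelihood. The simulated sample, seed, and use of `scipy` are assumptions for illustration only, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulate Lindley(theta) data: a mixture of Exp(theta), weight theta/(theta+1),
# and Gamma(shape=2, scale=1/theta), weight 1/(theta+1).
rng = np.random.default_rng(42)
theta_true, n = 1.5, 5000
u = rng.random(n)
x = np.where(u < theta_true / (theta_true + 1.0),
             rng.exponential(1.0 / theta_true, n),
             rng.gamma(2.0, 1.0 / theta_true, n))

def neg_log_likelihood(theta):
    # Lindley log-density: 2*log(theta) - log(1+theta) + log(1+x) - theta*x
    return -np.sum(2.0 * np.log(theta) - np.log1p(theta)
                   + np.log1p(x) - theta * x)

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 50.0), method="bounded")
theta_hat = res.x
print(f"theta_hat = {theta_hat:.3f}")  # close to the true value 1.5
```

The same recipe (write the log-likelihood, minimize its negative numerically) carries over to any one-parameter lifetime model, including the ABKD once its density is specified.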

Similar Papers
  • Research Article
  • Citations: 346
  • 10.1093/sysbio/22.3.240
Maximum Likelihood and Minimum-Steps Methods for Estimating Evolutionary Trees from Data on Discrete Characters
  • Sep 1, 1973
  • Systematic Biology
  • J Felsenstein

Felsenstein, J. (Department of Genetics SK-50, University of Washington, Seattle, Washington 98195). 1973. Maximum likelihood and minimum-steps methods for estimating evolutionary trees from data on discrete characters. Syst. Zool. 22:240-249.

The general maximum likelihood approach to the statistical estimation of phylogenies is outlined, for data in which there are a number of discrete states for each character. The details of the maximum likelihood method will depend on the details of the probabilistic model of evolution assumed, and there are a very large number of possible models of evolution. For a few of the simpler models, the calculation of the likelihood of an evolutionary tree is outlined. For these models, the maximum likelihood tree will be the same as the parsimonious (or minimum-steps) tree if the probability of change during the evolution of the group is assumed a priori to be very small. However, most sets of data require too many assumed state changes per character to be compatible with this assumption. Farris (1973) has argued that maximum likelihood and parsimony methods are identical under a much less restrictive set of assumptions. It is argued that the present methods are preferable to his, and a counterexample to his argument is presented. An algorithm which enables rapid calculation of the likelihood of a phylogeny is described. [Evolutionary trees: maximum likelihood.]

The first systematic attempt to apply standard statistical inference procedures to the estimation of evolutionary trees was the work of Edwards and Cavalli-Sforza (1964; see also Cavalli-Sforza and Edwards, 1967). At about the same time, the 'parsimony' or minimum evolutionary steps method of Camin and Sokal (1965) gave a great impetus to the development of well-defined procedures for obtaining evolutionary trees. Edwards and Cavalli-Sforza concerned themselves with data from continuous variables such as gene frequencies and quantitative characters. The Camin-Sokal approach, on the other hand, was developed for characters which are recorded as a series of discrete states. Although some taxonomists have declared that the problem of guessing phylogenies should be viewed as a problem of statistical inference (Farris, 1967, 1968; Throckmorton, 1968), until recently there have been no attempts to explore the relationship between the statistical inference and minimum-steps approaches. Recently, Farris (1973) has presented a detailed argument that, under certain reasonable assumptions, the maximum-likelihood method of statistical inference appropriate to discrete-character data is precisely the parsimony method of Camin and Sokal. In this paper, I will examine the application of maximum likelihood methods to discrete characters, and will show that parsimony methods are not maximum likelihood methods under the assumptions made by Farris. They are maximum likelihood methods under considerably more restrictive assumptions about evolution.

METHODS OF MAXIMUM LIKELIHOOD

Suppose that we want to estimate the evolutionary tree, T, which is to be specified by the topological form of the tree and the times of branching. We are given a set of data, D, and a model of evolution, M, which incorporates not only the evolutionary processes, but also the processes of sampling by which we obtained the data. This model will usually be probabilistic, involving random events such as changes of the environment, occurrence of favorable
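The "algorithm which enables rapid calculation of the likelihood of a phylogeny" mentioned in this abstract is what is now known as Felsenstein's pruning algorithm: conditional likelihoods are propagated from the leaves to the root. As a hedged sketch only, the tiny two-leaf tree, the shared branch length, and the choice of the Jukes-Cantor substitution model below are illustrative assumptions, not details from the paper:

```python
import numpy as np

STATES = "ACGT"

def jc69(t):
    """Jukes-Cantor transition matrix P(i -> j) for branch length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)

def site_likelihood(tree, t):
    """Pruning step: tree is a leaf base or a (left, right) pair; t is the branch length."""
    if isinstance(tree, str):            # leaf: indicator vector for the observed base
        v = np.zeros(4)
        v[STATES.index(tree)] = 1.0
        return v
    left, right = (site_likelihood(c, t) for c in tree)
    P = jc69(t)
    return (P @ left) * (P @ right)      # conditional likelihoods at the internal node

def tree_likelihood(tree, t):
    # Uniform prior over the root state, as in Jukes-Cantor
    return float(np.dot(np.full(4, 0.25), site_likelihood(tree, t)))

# Identical observed bases are more likely than different ones on short branches.
print(tree_likelihood(("A", "A"), 0.1), tree_likelihood(("A", "C"), 0.1))
```

The recursion visits each node once, which is the source of the "rapid calculation" the abstract refers to; nested tuples such as `(("A", "C"), "G")` give larger trees.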

  • Research Article
  • Citations: 2
  • 10.1016/j.istruc.2023.105257
Analysis of snow load probabilistic models and calculation of reference snow pressure using maximum likelihood method for multiple cities in Liaoning Province, China
  • Sep 28, 2023
  • Structures
  • Jiaxu Li + 4 more


  • Research Article
  • Citations: 12
  • 10.1002/sim.8319
Maximum likelihood estimation with missing outcomes: From simplicity to complexity.
  • Aug 8, 2019
  • Statistics in Medicine
  • Stuart G Baker

Many clinical or prevention studies involve missing or censored outcomes. Maximum likelihood (ML) methods provide a conceptually straightforward approach to estimation when the outcome is partially missing. Methods of implementing ML methods range from the simple to the complex, depending on the type of data and the missing-data mechanism. Simple ML methods for ignorable missing-data mechanisms (when data are missing at random) include complete-case analysis, complete-case analysis with covariate adjustment, survival analysis with covariate adjustment, and analysis via propensity-to-be-missing scores. More complex ML methods for ignorable missing-data mechanisms include the analysis of longitudinal dropouts via a marginal model for continuous data or a conditional model for categorical data. A moderately complex ML method for categorical data with a saturated model and either ignorable or nonignorable missing-data mechanisms is a perfect fit analysis, an algebraic method involving closed-form estimates and variances. A complex and flexible ML method with categorical data and either ignorable or nonignorable missing-data mechanisms is the method of composite linear models, a matrix method requiring specialized software. Except for the method of composite linear models, which can involve challenging matrix specifications, the implementation of these ML methods ranges in difficulty from easy to moderate.

  • Research Article
  • Citations: 2
  • 10.1002/kin.20513
Identification of the effective distribution function for determination of the distributed activation energy models using Bayesian statistics: Application of isothermal thermogravimetric data
  • Jul 20, 2010
  • International Journal of Chemical Kinetics
  • Bojan Janković

A new procedure for identifying the effective distribution function for determination of distributed activation energy models, based on Bayesian statistics, has been established. Five different continuous probability functions (the exponential, logistic, normal, gamma, and Weibull probability functions; the extended set of distributions) were used to search for the most appropriate reactivity model for two heterogeneous processes: (a) the isothermal reduction of nickel oxide under a hydrogen atmosphere and (b) the isothermal degradation of bisphenol-A polycarbonate (Lexan) under a nitrogen atmosphere. Using the Bayes weights, it was shown that for both processes the most suitable distributed reactivity model is the Weibull distribution model. The kinetic parameters (ln A, Ea) associated with the Weibull distribution model were calculated for both investigated processes using three different computational methods: the maximum likelihood method (MLM), nonlinear regression analysis (NRA), and the posterior mean (the expected value of the scale parameter η, E(η)). It was shown that there is excellent agreement between the values of the kinetic parameters calculated by the MLM, NRA, and E(η) approaches. Using Bayes weights, it is possible to discriminate between different probability models and to quantify how well a distribution fits the experimental data. For formal reactivity-model comparison, the use of the (nonnormalized) Jeffreys prior is recommended. © 2010 Wiley Periodicals, Inc. Int J Chem Kinet 42: 641–658, 2010
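The paper's Bayes-weight machinery is not reproduced here, but a simpler, commonly used analogue of discriminating among candidate probability models is to fit each candidate by maximum likelihood and compare AIC values. In the sketch below, the simulated Weibull data, the candidate set, and the `scipy.stats` fitting calls are illustrative assumptions, not the paper's procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = 2.0 * rng.weibull(2.0, 5000)        # Weibull(shape=2, scale=2) samples

candidates = {
    "exponential": (stats.expon, 1),       # (distribution family, free parameters)
    "gamma": (stats.gamma, 2),
    "weibull": (stats.weibull_min, 2),
}

aic = {}
for name, (dist, k) in candidates.items():
    params = dist.fit(data, floc=0)        # fix location at 0, as for lifetime-type data
    loglik = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * k - 2 * loglik         # Akaike information criterion

best = min(aic, key=aic.get)
print(best, {m: round(v, 1) for m, v in aic.items()})
```

As expected, the generating family wins; on real kinetic data the same ranking idea applies, with the Bayes weights of the paper playing the role of AIC here.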

  • Research Article
  • Citations: 9
  • 10.1049/ip-rsn:19952112
Fast approximate maximum likelihood algorithm for single source localisation
  • Jan 1, 1995
  • IEE Proceedings - Radar, Sonar and Navigation
  • D Hertz

The authors present an approximation for the deterministic maximum likelihood (ML) method for estimating the direction of arrival of a signal from a single source. To apply the proposed approximate ML (AML) method one has to compute the principal eigenvector of the sample covariance matrix, i.e. the unit eigenvector corresponding to the largest eigenvalue. Surprisingly, the AML method coincides with the principal eigenvector method of Evans, Johnson and Sun (1982) that was derived based on the unnecessary assumption of high signal-to-noise ratio. Next, the authors present for the AML method a fast and simple explicit method for computing the principal eigenvector of the sample covariance matrix as well as an explicit estimator for the signal-to-noise ratio. The proposed explicit AML (EAML) method is faster than the ML method in carrying out the maximisation step. Simulations reveal that AML and ML methods have similar performance, while EAML performance is only slightly inferior to the ML method.

  • Conference Article
  • Citations: 4
  • 10.1145/2999504.3001104
Optimal method for USBL underwater acoustic positioning by combining TDOA and TOA
  • Jan 1, 2016
  • Fangsheng Zhong + 1 more

In this paper we present an optimal method for Ultrashort Baseline (USBL) underwater acoustic positioning that combines Time Difference of Arrival (TDOA) and Time of Arrival (TOA) measurements. For estimating the bearing angles, a Regularized Least Squares (RLS) method is designed on the basis of the Least Squares (LS) and Maximum Likelihood (ML) methods, following an optimization criterion. LS and ML are each optimal under their respective criteria, but the accuracy of LS is relatively low, and ML requires relatively strict conditions, although its accuracy attains a theoretical upper bound. The RLS method is formulated as a nonlinear least-squares problem, which overcomes the drawbacks of the ML method. Extensive simulation shows that the RLS method performs much better than LS and very close to ML. RLS is also resistant to TDOA measurement error: even when that error is large, it still achieves good positioning precision.

  • Conference Article
  • 10.1109/vetecf.2005.1558055
On the blind decision of modulation type in impaired AWGN channel environment
  • Sep 25, 2005
  • Il Han Kim + 4 more

We propose a new modulation classification method that utilizes the likelihood function of the received signal in an impaired AWGN (Additive White Gaussian Noise) channel environment. The proposed method uses the likelihood computed under the assumption that each candidate modulated signal was sent, but direct use of the ML (Maximum Likelihood) method is not considered, because of its high computational complexity and its weakness to channel impairments such as phase offsets and frequency offsets. The proposed method has lower computational complexity than the ML method; moreover, it is robust to channel impairments such as phase offsets and frequency offsets. The correct classification probabilities of the proposed method and the ML method are obtained for an AWGN channel with phase offsets and frequency offsets via extensive Monte-Carlo simulation. As the simulation results show, more accurate classification performance in both phase-offset and frequency-offset environments can be achieved with the low computational complexity of the proposed method.

The log-likelihood ratio test is an approximation at low SNR, and it is hard to obtain a threshold value for general QAM modulation. In this paper, we propose a low-complexity digital modulation classification method based on the likelihood function of the received signal in an AWGN channel environment with phase offsets and frequency offsets. The proposed method is similar to the ML method (2), (3) in the sense that it utilizes the likelihood function of the received signal, but it has lower computational complexity than the ML method and is less sensitive to phase offsets and frequency offsets. This paper is organized as follows. In Section II, we give the signal model used in this paper; this section also reviews a previous modulation classification method with this signal model. Section III presents the new modulation classification method based on the ML method. In Section IV, we give numerical simulation results and discussion to verify the performance of the proposed method. Section V concludes the paper.

  • Research Article
  • Citations: 10
  • 10.1016/j.atmosres.2012.04.003
Estimation of raindrop size distribution parameters by maximum likelihood and L-moment methods: Effect of discretization
  • Apr 21, 2012
  • Atmospheric Research
  • Marzuki + 5 more


  • Research Article
  • Citations: 166
  • 10.1006/mpev.1993.1001
Relative Efficiencies of the Maximum Likelihood, Maximum Parsimony, and Neighbor-Joining Methods for Estimating Protein Phylogeny
  • Mar 1, 1993
  • Molecular Phylogenetics and Evolution
  • Masami Hasegawa + 1 more


  • Research Article
  • Citations: 23
  • 10.1016/j.ecolmodel.2020.109071
Comparing maximum entropy modelling methods to inform aquaculture site selection for novel seaweed species
  • May 19, 2020
  • Ecological Modelling
  • Kathryn H Wiltshire + 1 more


  • Research Article
  • Citations: 93
  • 10.1007/bf02099932
Robustness of maximum likelihood tree estimation against different patterns of base substitutions
  • Jan 1, 1991
  • Journal of Molecular Evolution
  • Kaoru Fukami-Kobayashi + 1 more

In the maximum likelihood (ML) method for estimating a molecular phylogenetic tree, the pattern of nucleotide substitutions for computing likelihood values is assumed to be simpler than that of the actual evolutionary process, simply because the process, considered to be quite devious, is unknown. The problem, however, is that there has been no guarantee to endorse the simplification. To study this problem, we first evaluated the robustness of the ML method in the estimation of molecular trees against different nucleotide substitution patterns, including Jukes and Cantor's, the simplest ever proposed. Namely, we conducted computer simulations in which we could set up various evolutionary models of a hypothetical gene, and define a true tree to which an estimated tree by the ML method was to be compared. The results show that topology estimation by the ML method is considerably robust against different ratios of transitions to transversions and different GC contents, but branch length estimation is not so. The ML tree estimation based on Jukes and Cantor's model is also revealed to be resistant to GC content, but rather sensitive to the ratio of transitions to transversions. We then applied the ML method with different substitution patterns to nucleotide sequence data on tax gene from T-cell leukemia viruses whose evolutionary process must have been more complicated than that of the hypothetical gene. The results are in accordance with those from the simulation study, showing that Jukes and Cantor's model is as useful as a more complicated one for making inferences about molecular phylogeny of the viruses.

  • Conference Article
  • Citations: 14
  • 10.1109/icip.2003.1247205
Multibaseline InSAR terrain elevation estimation: a dynamic programming approach
  • Nov 24, 2003
  • Lei Ying + 3 more

When estimating terrain elevation via interferometric synthetic aperture radar (InSAR), phase unwrapping procedures have difficulty in dealing with rough regions or large noise. Multiple baselines are used to reduce or avoid this problem. Conventional maximum likelihood (ML) methods reconstruct terrain heights in a pointwise fashion, which does not exploit the smoothness of natural terrain. We propose a new algorithm that takes smoothness into account, tackling the problem in a Bayesian framework. Instead of ML estimation, we use maximum a posteriori (MAP) estimation, where the likelihood function is defined as in the ML method and the prior is a first-order Gaussian Markov random field. This MAP estimation makes the algorithm more robust to noise and, at the same time, more accurate in reconstructing rough regions. A form of 2-D dynamic programming is used to implement the MAP estimation efficiently. The new algorithm has the advantage over ML methods that no baseline needs to be chosen small in order to avoid phase wrapping; specifically, both baselines can be large so that the noise in the reconstructed height is low. The new algorithm is shown to achieve lower noise than the conventional ML and least-squares methods.

  • Research Article
  • Citations: 41
  • 10.1016/0004-6981(84)90239-7
An evaluation of the methods of fractiles, moments and maximum likelihood for estimating parameters when sampling air quality data from a stationary lognormal distribution
  • Jan 1, 1984
  • Atmospheric Environment (1967)
  • David T Mage + 1 more


  • Research Article
  • Citations: 7
  • 10.1002/kin.20357
Identification of the effective distribution function for determination of the distributed activation energy models using the maximum likelihood method: Isothermal thermogravimetric data
  • Oct 29, 2008
  • International Journal of Chemical Kinetics
  • Bojan Janković

A new procedure for identifying the effective distribution function for determination of distributed activation energy models, based on the use of the maximum likelihood method (MLM), was established. Five different continuous probability functions (the exponential, logistic, normal, gamma, and Weibull probability functions; the extended set of distributions) were used to search for the best reactivity model for two heterogeneous processes: (a) the isothermal reduction of nickel oxide under a hydrogen atmosphere and (b) the isothermal degradation of bisphenol-A polycarbonate (Lexan) under a nitrogen atmosphere. The MLM showed that for both processes the most suitable reactivity model is the Weibull distribution model. It was concluded that the values of the Arrhenius parameters (ln A and Ea) evaluated from the Weibull distribution model represent the effective kinetic values for both considered processes. This procedure enables identification of a suitable distribution model for the considered process from the experimental data alone (based on the shapes of the obtained integral kinetic curves), which is an advantage of the established analysis. The established mathematical procedure, based on the MLM, can be applied as a preliminary analysis for evaluating the distribution of activation energies of complex heterogeneous processes. © 2008 Wiley Periodicals, Inc. Int J Chem Kinet 41: 27–44, 2009

  • Book Chapter
  • Citations: 49
  • 10.1016/0076-6879(90)83038-b
[36] Maximum likelihood methods
  • Jan 1, 1990
  • Methods in Enzymology
  • Naruya Saitou

