The Lomax-Linear failure rate distribution: Properties and estimation with real data application
- Conference Article
30
- 10.1109/issre.1998.730863
- Nov 4, 1998
An understanding of the distribution of software failure rates and its origin will strengthen the relation of software reliability engineering both to other aspects of software engineering and to the wider field of reliability engineering. The paper proposes that the distribution of failure rates for faults in software systems tends to be lognormal. Many successful analytical models of software behavior share assumptions that suggest that the distribution of software event rates will asymptotically approach lognormal. The lognormal distribution has its origin in the complexity of software systems (that is, the depth of conditionals) and in the fact that event rates are determined by an essentially multiplicative process. The central limit theorem links these properties to the lognormal: just as the normal distribution arises when summing many random terms, the lognormal distribution arises when the value of a variable is determined by the multiplication of many random factors. Because the distribution of event rates tends to be lognormal and faults are simply a random subset of the events, the distribution of the failure rates of the faults also tends to be lognormal. Failure rate distributions observed by other researchers in twelve repetitive-run experiments and nine sets of field failure data are analyzed and shown to support the lognormal hypothesis. The ability of the lognormal to fit these empirical failure rate distributions is superior to that of the gamma distribution (the basis of the Gamma/EOS family of reliability growth models) or a power-law model.
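To make the multiplicative-process argument concrete, the sketch below simulates event rates as products of many random branch probabilities and checks that the log-rates look approximately Gaussian; the conditional depth, the uniform factor range, and the sample sizes are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's analysis): an event's rate is the
# product of many random branch probabilities, so its logarithm is a sum of many
# random terms and, by the central limit theorem, approximately normal -- i.e. the
# rate itself is approximately lognormal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_events, depth = 5000, 12                      # assumed number of events and conditional depth

factors = rng.uniform(0.2, 1.0, size=(n_events, depth))   # assumed branch probabilities
rates = factors.prod(axis=1)                    # event rate = product of branch probabilities

log_rates = np.log(rates)
print("Shapiro normality p-value for log-rates:", stats.shapiro(log_rates[:500]).pvalue)

# Compare how well the lognormal and the gamma fit the simulated rates
lognorm_fit = stats.lognorm(*stats.lognorm.fit(rates, floc=0))
gamma_fit = stats.gamma(*stats.gamma.fit(rates, floc=0))
print("lognormal log-likelihood:", lognorm_fit.logpdf(rates).sum())
print("gamma log-likelihood:    ", gamma_fit.logpdf(rates).sum())
```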
- Research Article
3
- 10.3934/math.2024582
- Jan 1, 2024
- AIMS Mathematics
We introduced a flexible class of continuous distributions called the generalized Kavya-Manoharan-G (GKM-G) family. The GKM-G family extended the Kavya-Manoharan class and provided greater flexibility to the baseline models. The special sub-models of the GKM-G family are capable of modeling monotone and non-monotone failure rates including increasing, reversed J shape, decreasing, bathtub, modified bathtub, and upside-down bathtub. Some properties of the family were studied. The GKM-exponential (GKME) distribution was studied in detail. Eight methods of estimation were used for estimating the GKME parameters. The performance of the estimators was assessed by simulation studies under small and large samples. Furthermore, the flexibility of the two-parameter GKME distribution was explored by analyzing five real-life data applications from applied fields such as medicine, environment, and reliability. The data analysis showed that the GKME distribution outperforms other competing exponential models.
- Research Article
41
- 10.1287/opre.1040.0206
- Aug 1, 2005
- Operations Research
Increasing generalized failure rate (IGFR) distributions were introduced as a tool in the study of contracting mechanisms in supply chains. In this note, we compare and contrast the closure—and the lack thereof—of IGFR and increasing failure rate (IFR) distributions with respect to standard operations on random variables. Some implications of these results for the use of IGFR distributions in supply chain models are noted.
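As a concrete illustration of the definition (not taken from the note): the generalized failure rate is g(x) = x·f(x)/(1−F(x)), and a distribution is IGFR when g is nondecreasing on its support. The sketch below evaluates g numerically for a Weibull and a Lomax distribution, two standard textbook examples.

```python
# Minimal sketch: evaluate the generalized failure rate g(x) = x * f(x) / (1 - F(x))
# on a grid and check monotonicity numerically.
import numpy as np
from scipy import stats

def generalized_failure_rate(dist, x):
    # g(x) = x * hazard(x) = x * pdf(x) / survival(x)
    return x * dist.pdf(x) / dist.sf(x)

x = np.linspace(0.01, 10, 200)
examples = [
    ("Weibull(c=2)", stats.weibull_min(c=2.0)),   # increasing failure rate, hence IGFR
    ("Lomax(c=3)", stats.lomax(c=3.0)),           # decreasing failure rate, but still IGFR
]
for name, dist in examples:
    g = generalized_failure_rate(dist, x)
    print(name, "g(x) nondecreasing on grid:", bool(np.all(np.diff(g) >= -1e-9)))
```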
- Conference Article
8
- 10.1109/bife.2009.111
- Jul 1, 2009
Most researchers focus on the data mining step itself, or on data cleaning and integration, or on data selection. However, there is a gap between data mining and its real-world application. In this paper, a step between data mining and application, called the second data mining, or knowledge presentation and management, is proposed. The second data mining is key to whether the data mining result actually benefits the manager, especially in the finance and business fields. Four cases are presented in this paper, and the distance between the data mining result and the real application in each field is analyzed. The importance of the second data mining is demonstrated at the end of the paper.
- Conference Article
30
- 10.1109/issre.1998.730872
- Nov 4, 1998
There are many models of software reliability growth, but none of them is able to model the varied patterns observed in practice. A previous paper (R.E. Mullen, 1998) suggested that since event rates in software systems are generated by multiplicative processes, the distribution of rates of events, including failure rates, is lognormal. It also showed that many previously published empirical failure rate distributions are well fit by the lognormal. The theoretical roots and experimental confirmation of the lognormal distribution of failure rates in software systems provide a unique and potentially fruitful basis for constructing a new software reliability model. The paper derives a Lognormal Execution Time Software Reliability Growth Model, a member of the family of Doubly Stochastic Exponential Order Statistic Models, by developing a numerical approximation for the Laplace transform of the lognormal. As an example of its use, the model is applied to two series of failure data. The likelihood of that data arising from the lognormal and log-Poisson models is computed and shown to be highly favorable to the lognormal in one case and slightly favorable in the other. A preliminary comparison of the lognormal, the LPET, and the BET models using ten Musa data sets further demonstrates the ability of the lognormal model to fit a wide variety of reliability growth scenarios. Of particular novelty is the use of a software failure rate model that has both plausible theoretical justification and solid support from prior studies of software failure rate distributions. Also novel is the application of the Laplace transform of the lognormal to the problem of software reliability growth.
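The sketch below is not the paper's approximation; it simply evaluates the Laplace transform of a lognormal density by direct numerical quadrature, which is the quantity such a model needs (the lognormal has no closed-form transform). The μ and σ values are placeholders.

```python
# Minimal sketch: numerically evaluate L(s) = E[exp(-s * T)] for T ~ Lognormal(mu, sigma).
import numpy as np
from scipy import integrate, stats

def lognormal_laplace(s, mu=0.0, sigma=1.0):
    # Direct quadrature of exp(-s t) against the lognormal density on (0, inf)
    pdf = stats.lognorm(s=sigma, scale=np.exp(mu)).pdf
    value, _ = integrate.quad(lambda t: np.exp(-s * t) * pdf(t), 0, np.inf)
    return value

for s in (0.1, 1.0, 10.0):
    print(f"L({s}) =", lognormal_laplace(s, mu=0.0, sigma=1.5))
```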
- Research Article
- 10.3390/sym16111514
- Nov 11, 2024
- Symmetry
Analyzing the reliability of an aging class of life distribution provides important information about how long a product lasts, as well as sustainability measures that are essential for determining environmental impact and formulating resource-saving plans. Because age data are crucial for applications, the study focuses on a goodness-of-fit technique based on a nonparametric test for the NBRUmgf class. The test was evaluated using its asymptotic properties and the Pitman efficiency methodology for selected asymmetric probability models, and the outcomes were compared with those of alternative methods. We assessed the test's power against some well-known alternative asymmetric distributions widely used in reliability, including the Weibull, gamma, and linear failure rate (LFR) distributions, and provided percentiles for both censored and uncensored data. This study demonstrates the efficacy of the test in various sectors using real-world datasets and comprehensive tables of test statistics.
- Research Article
- 10.56094/jss.v51i3.147
- Oct 1, 2015
- Journal of System Safety
The way forward in system safety engineering will be quantitative, and this paper proposes an innovative method for generating a uniform way to understand the composite of testing and experience. In recent years, new approaches to exact hypothesis testing have been developed without a Gaussian probability distribution for success or failure rates. These techniques eliminate errors introduced by the Gaussian assumption, which is important for the small failure rates that are common in modern systems development, and offer considerable promise as a basis for the new direction.
 This paper presents a theory for exact hypothesis testing and combines two 18th-century theorems to derive an equation for the probability distribution of failure rate employing only the number of tests and the observed count of failures. The concept is expanded to demonstrate the combination of operational experience and expert opinion to update test results. The objective in this work is to derive the general likelihood distribution of failure rate given any set of test results, and then to examine the implications regarding testing requirements, design and interpretation. The particular application considered here is safety assessment for a military weapons system. While the theory developed is for deriving the exact failure rate distribution for system safety applications, it is equally valid for investigating success rates and/or for interpreting performance evaluation tests.
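A minimal sketch of the kind of calculation described, under the loud assumption that the two eighteenth-century results are the binomial likelihood and Bayes' theorem: with a uniform prior, observing k failures in n tests gives a Beta(k+1, n−k+1) posterior for the per-test failure probability. The test counts below are hypothetical, and the paper's exact construction may differ.

```python
# Minimal sketch under stated assumptions (binomial likelihood + Bayes' theorem,
# uniform prior); not necessarily the paper's construction.
from scipy import stats

def failure_rate_posterior(n_tests, n_failures):
    # Uniform Beta(1, 1) prior updated by a binomial likelihood
    return stats.beta(n_failures + 1, n_tests - n_failures + 1)

post = failure_rate_posterior(n_tests=300, n_failures=0)   # hypothetical test campaign
print("posterior mean failure rate:", post.mean())
print("95% upper credible bound:   ", post.ppf(0.95))
```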
- Research Article
5
- 10.3934/math.20231000
- Jan 1, 2023
- AIMS Mathematics
Nonconforming events are rare in high-quality processes, and the time between events (TBE) may follow a skewed distribution, such as the gamma distribution. This study proposes one- and two-sided triple homogeneously weighted moving average charts for monitoring TBE data modeled by the gamma distribution. These charts are labeled as the THWMA TBE charts. Monte Carlo simulations are performed to approximate the run length distribution of the one- and two-sided THWMA TBE charts. The THWMA TBE charts are compared to competing charts like the DHWMA TBE, HWMA TBE, THWMA TBE, DEWMA TBE, and EWMA TBE charts at a single shift and over a range of shifts. For the single shift comparison, the average run length (ARL) and standard deviation run length (SDRL) measures are used, whereas the extra quadratic loss (EQL), relative average run length (RARL) and performance comparison index (PCI) measures are employed for a range of shifts comparison. The comparison reveals that the THWMA TBE charts outperform the competing charts at a single shift as well as at a certain range of shifts. Finally, two real-life data applications are presented to evaluate the applicability of the THWMA TBE charts in practical situations, one with boring machine failure data and the other with hospital stay time for traumatic brain injury patients.
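For orientation only (this is not the paper's THWMA design), the sketch below runs a basic one-sided EWMA statistic on simulated gamma time-between-events data; the THWMA chart applies the homogeneously weighted moving average smoothing additional times. The smoothing constant, control-limit multiplier, gamma parameters, and shift size used here are assumed placeholder values.

```python
# Minimal sketch: a plain one-sided EWMA chart on gamma TBE data (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
k, theta0 = 3, 1.0                        # assumed in-control gamma shape and scale
lam, L = 0.2, 2.7                         # assumed smoothing constant and limit multiplier

tbe = rng.gamma(k, 0.3 * theta0, size=200)    # simulated TBE data with a downward shift
z = k * theta0                                # start the statistic at the in-control mean
ewma = np.empty_like(tbe)
for i, t in enumerate(tbe):
    z = lam * t + (1 - lam) * z
    ewma[i] = z

sigma_z = np.sqrt(k) * theta0 * np.sqrt(lam / (2 - lam))   # asymptotic EWMA standard deviation
lcl = k * theta0 - L * sigma_z                              # one-sided lower control limit
signals = np.flatnonzero(ewma < lcl)
print("lower control limit:", lcl)
print("first out-of-control signal at sample:", signals[0] if signals.size else None)
```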
- Research Article
13
- 10.1016/j.ijar.2021.12.019
- Jan 5, 2022
- International Journal of Approximate Reasoning
Bayesian nonparametric change point detection for multivariate time series with missing observations
- Research Article
- 10.1080/03610918.2024.2329237
- Mar 9, 2024
- Communications in Statistics - Simulation and Computation
In multivariate statistical inference, the Hotelling T² statistic is used to test the equality of mean vectors for two independent groups. This statistic requires the assumptions of multivariate normality and homogeneous covariance matrices. However, the homogeneous covariance matrices assumption may not hold in real applications; this case is called the multivariate Behrens-Fisher problem. Several studies address testing the equality of two mean vectors for independent groups under the multivariate Behrens-Fisher problem, but these studies do not consider outliers in the data. In this study, we propose an approach to the problems caused by the multivariate Behrens-Fisher setting and by outliers in the dataset. We compare our proposed approach with other approaches in terms of empirical size and power on simulated data that are both uncontaminated and contaminated. We thus show that our proposed approach can be used to test the equality of mean vectors for two independent groups under the multivariate Behrens-Fisher problem without being affected by outliers in the data. Moreover, we provide an R function in the MVTests package so the proposed approach can be used in real data applications.
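For context (this is the classical statistic, not the authors' robust proposal), the sketch below computes the two-sample Hotelling T² with a pooled covariance matrix; under the multivariate Behrens-Fisher problem it is exactly this pooling that cannot be justified. The simulated groups are hypothetical.

```python
# Minimal sketch: classical pooled-covariance two-sample Hotelling T^2 statistic.
import numpy as np

def hotelling_t2(x, y):
    n1, n2 = len(x), len(y)
    diff = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)

rng = np.random.default_rng(2)
x = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=30)    # hypothetical group 1
y = rng.multivariate_normal([0.5, 0.0], np.eye(2), size=25)    # hypothetical group 2
print("T^2 =", hotelling_t2(x, y))
```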
- Research Article
12
- 10.1002/j.2333-8504.2003.tb01926.x
- Dec 1, 2003
- ETS Research Report Series
Detecting item fit for common dichotomous item response theory (IRT) models has always been an issue of enormous interest, but there exists no unanimously agreed upon item fit diagnostic for the models. This paper employs the posterior predictive model checking method (Guttman, 1967; Rubin, 1981, 1984), a popular Bayesian model checking tool, to examine item fit for common dichotomous item response theory models. An item fit plot, comparing the observed and predicted proportion correct scores of examinee groups (with groups based on raw scores), promises to be useful in real applications. This paper also suggests how to obtain posterior predictive p-values for the χ2-type test statistics of Orlando and Thissen (2000) comparing the observed and predicted proportion correct scores for different raw score groups. A number of simulation studies and real data applications examine the effectiveness of the suggested item fit diagnostics. The suggested p-values seem to have low Type I error rate, low false alarm rate, and adequate power, indicating that researchers might find the methods more acceptable than the existing item fit measures.
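A minimal sketch of the posterior predictive model checking idea in general form (not the specific IRT item-fit statistics used in the report): the p-value is the fraction of posterior draws for which data replicated from the model is at least as discrepant as the observed data. The toy binomial model, prior, and discrepancy below are illustrative assumptions.

```python
# Minimal sketch of a posterior predictive p-value with a toy binomial model.
import numpy as np

def posterior_predictive_pvalue(observed, posterior_draws, simulate, discrepancy, rng):
    exceed = 0
    for theta in posterior_draws:
        replicated = simulate(theta, rng)
        if discrepancy(replicated, theta) >= discrepancy(observed, theta):
            exceed += 1
    return exceed / len(posterior_draws)

rng = np.random.default_rng(3)
observed = rng.binomial(20, 0.7, size=50)                          # hypothetical item-score counts
draws = rng.beta(1 + observed.sum(), 1 + 20 * 50 - observed.sum(), size=500)   # Beta posterior
simulate = lambda p, rng: rng.binomial(20, p, size=50)
discrepancy = lambda data, p: np.sum((data - 20 * p) ** 2 / (20 * p * (1 - p)))   # chi-square type
print("posterior predictive p-value:", posterior_predictive_pvalue(observed, draws, simulate, discrepancy, rng))
```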
- Research Article
25
- 10.1016/j.adhoc.2015.08.017
- Sep 8, 2015
- Ad Hoc Networks
Deployment and evaluation of a fully applicable distributed event detection system in Wireless Sensor Networks
- Research Article
1
- 10.30495/jme.v12i1.671
- Jun 29, 2018
- Journal of Mathematical Extension
We propose a new class of continuous distributions with two extra shape parameters, named the Odd Log-Logistic Poisson-G family. Some of its mathematical properties, including moments, quantile and generating functions, and order statistics, are obtained. We estimate the model parameters by the maximum likelihood method and present a Monte Carlo simulation study. The importance of the proposed family is demonstrated by means of three real data applications. Empirical results indicate that the proposed family provides better fits than other well-known classes of distributions in real applications.
- Research Article
2
- 10.1017/cts.2020.556
- Nov 16, 2020
- Journal of Clinical and Translational Science
Identifying predictors of patient outcomes evaluated over time may require modeling interactions among variables while addressing within-subject correlation. Generalized linear mixed models (GLMMs) and generalized estimating equations (GEEs) address within-subject correlation, but identifying interactions can be difficult if not hypothesized a priori. We evaluate the performance of several variable selection approaches for clustered binary outcomes to provide guidance for choosing between the methods. We conducted simulations comparing stepwise selection, penalized GLMM, boosted GLMM, and boosted GEE for variable selection considering main effects and two-way interactions in data with repeatedly measured binary outcomes and evaluate a two-stage approach to reduce bias and error in parameter estimates. We compared these approaches in real data applications: hypothermia during surgery and treatment response in lupus nephritis. Penalized and boosted approaches recovered correct predictors and interactions more frequently than stepwise selection. Penalized GLMM recovered correct predictors more often than boosting, but included many spurious predictors. Boosted GLMM yielded parsimonious models and identified correct predictors well at large sample and effect sizes, but required excessive computation time. Boosted GEE was computationally efficient and selected relatively parsimonious models, offering a compromise between computation and parsimony. The two-stage approach reduced the bias and error in regression parameters in all approaches. Penalized and boosted approaches are effective for variable selection in data with clustered binary outcomes. The two-stage approach reduces bias and error and should be applied regardless of method. We provide guidance for choosing the most appropriate method in real applications.
- Research Article
18
- 10.1080/03610918.2016.1224348
- Apr 19, 2017
- Communications in Statistics - Simulation and Computation
Binary logistic regression is a commonly used statistical method when the outcome variable is dichotomous or binary. In some applications of the logit model, the explanatory variables are correlated; this problem is called multicollinearity. It is known that the variance of the maximum likelihood estimator (MLE) is inflated in the presence of multicollinearity. Therefore, in this study, we define a new two-parameter ridge estimator for the logistic regression model to decrease the variance and overcome the multicollinearity problem. We compare the new estimator to other well-known estimators by studying their mean squared error (MSE) properties. Moreover, a Monte Carlo simulation is designed to evaluate the performance of the estimators. Finally, a real data application is presented to show the applicability of the new method. According to the results of the simulation and the real application, the new estimator outperforms the other estimators in all of the situations considered.
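As a rough illustration only (an ordinary one-parameter L2 ridge fit via scikit-learn, not the authors' two-parameter estimator), the sketch below shows how shrinking the logistic coefficients stabilizes them when two explanatory variables are nearly collinear; the penalty strength and the simulated data are assumed.

```python
# Minimal sketch: L2-penalized (ridge) logistic regression under multicollinearity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)          # nearly collinear with x1
X = np.column_stack([x1, x2])
y = rng.binomial(1, 1 / (1 + np.exp(-(x1 + x2))))

mle_like = LogisticRegression(C=1e6, max_iter=2000).fit(X, y)   # essentially unpenalized MLE
ridge = LogisticRegression(C=0.5, max_iter=2000).fit(X, y)      # L2 ridge, C = 1/k (assumed value)
print("near-MLE coefficients:", mle_like.coef_)
print("ridge coefficients:   ", ridge.coef_)
```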