Practical significance and the modified control chart with estimated parameters
Shewhart charts are still widely used in practice to monitor process quality. When a plotted point falls outside the control limits, an out-of-control (OOC) signal is triggered, the process is halted, and a search for assignable causes begins. However, in practice, even when an OOC signal is issued, the process may still produce an acceptably small proportion of nonconforming items, making it practically capable. In such cases, stopping the process may not be necessary; recognizing this reduces false alarms and saves resources. The modified X̄ control chart (MCC) was proposed for such capable processes, specifically for monitoring the process mean, assuming known parameters. We consider the realistic case where the standard deviation is unknown and must be estimated. Using an estimated standard deviation adds variation to both the fraction nonconforming (FNC) and the false alarm rate, often inflating them beyond nominal levels. We propose an MCC for monitoring the mean with an estimated standard deviation and derive corrections that yield an adjusted acceptable region and control limits that account for estimation effects. This guarantees the desired in-control performance. We also investigate how these corrections affect the expected FNC and the power to detect process mean shifts. An illustrative example is provided.
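The known-parameter baseline that this abstract modifies can be sketched as follows. This is the textbook modified-chart construction (acceptable mean region set by the allowed fraction nonconforming δ, limits placed Z standard errors beyond it) with assumed illustrative values for `delta`, `alpha`, and the specification limits; the paper's actual corrections for an estimated sigma are derived in the paper itself and are not reproduced here.

```python
from math import sqrt
from statistics import NormalDist

PHI = NormalDist()

def modified_chart_limits(lsl, usl, sigma, n, delta=0.001, alpha=0.0027):
    """Modified X-bar chart limits with KNOWN sigma (textbook form).
    The mean may drift anywhere that keeps the fraction nonconforming
    below delta; control limits sit z_{alpha/2} standard errors beyond
    that acceptable region."""
    z_d = PHI.inv_cdf(1 - delta)       # half-width of the acceptable mean region
    z_a = PHI.inv_cdf(1 - alpha / 2)   # two-sided false-alarm quantile (an assumption)
    ucl = usl - (z_d - z_a / sqrt(n)) * sigma
    lcl = lsl + (z_d - z_a / sqrt(n)) * sigma
    return lcl, ucl

# Illustrative specification limits and subgroup size (hypothetical values).
lcl, ucl = modified_chart_limits(lsl=0.0, usl=10.0, sigma=1.0, n=5)
```

Note how the limits widen as δ shrinks toward zero: the acceptable region collapses and the modified chart degenerates toward an ordinary X̄ chart centered on the target.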
- Abstract
1
- 10.5210/ojphi.v11i1.9942
- May 30, 2019
- Online Journal of Public Health Informatics
An Algorithm for Early Outbreak Detection in Multiple Data Streams
- Research Article
66
- 10.1002/uog.5270
- Feb 28, 2008
- Ultrasound in Obstetrics & Gynecology
CUSUM: a tool for ongoing assessment of performance
- Research Article
21
- 10.1016/j.agsy.2004.06.019
- Sep 11, 2004
- Agricultural Systems
A comparison of the performance of statistical quality control charts in a dairy production system through stochastic simulation
- Book Chapter
- 10.1007/978-1-4613-1967-2_13
- Jan 1, 1986
Consider a mass production line for a single “part” and let us center our interest on a single “dimension” of the part. There will always be some variability in the dimension because of the sum of several variables whose causes are not usually understood and whose effects might not be controllable if they were. The dimension of the part is thus a random variable, but one whose mean and variance can change with time as something goes wrong with the “machinery” producing the part. If the process is stable and only the “usual” system of chance causes is operating (which implies that the mean θ and variance σ² of the process are constant), we say that the process is in statistical control and that it has an inherent variability σ². If the mean or variance wanders from those stable values, we say that an assignable cause is operating, meaning that it can be understood and controlled. The term quality control refers to control of quality by any means, but statistical quality control refers to the control of quality through the use of certain statistical tools to be described shortly. The best known of those tools is the Shewhart Control Chart, which is used to demonstrate statistical control and to detect the presence of assignable causes. There are control charts for the mean, range, standard deviation, fraction defective, and number of defects per unit. Three horizontal lines plus the data make up the control charts. The three lines are a centerline, an upper control limit, and a lower control limit. (One of the control limits may be omitted if there is no interest in it.) The centerline is an “average” of the statistic being plotted, and the control limits are at distances of 3 standard deviations (of the statistic) above and below the centerline.
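The centerline-plus-3-sigma construction described above can be sketched directly. This is a minimal illustration with made-up subgroup means and an assumed standard deviation of the plotted statistic; it is not tied to any particular dataset in the chapter.

```python
import statistics

def xbar_chart_limits(subgroup_means, sigma_stat):
    """Centerline and 3-sigma control limits for a Shewhart chart:
    the centerline is the average of the plotted statistic and the
    limits sit 3 standard deviations (of that statistic) either side."""
    center = statistics.mean(subgroup_means)
    return center - 3 * sigma_stat, center, center + 3 * sigma_stat

def out_of_control(subgroup_means, lcl, ucl):
    """Indices of points outside the limits, i.e. possible assignable causes."""
    return [i for i, m in enumerate(subgroup_means) if m < lcl or m > ucl]

# Hypothetical subgroup means; limits computed from the first five stable points.
means = [10.1, 9.9, 10.0, 10.2, 9.8, 11.5]
lcl, cl, ucl = xbar_chart_limits(means[:5], sigma_stat=0.3)
print(out_of_control(means, lcl, ucl))  # the last point signals
```

Omitting one of the two limits, as the text notes, simply means comparing against a single bound.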
- Research Article
2
- 10.1080/03610918.2022.2083165
- May 30, 2022
- Communications in Statistics - Simulation and Computation
Control charts are useful to monitor if a process is in a state of statistical control (in-control) or if changes have occurred due to the presence of any assignable causes. To this end, the pattern of points displayed on a control chart plays an important role. A process is declared as in-control when the plotted points display a random pattern. On the other hand, when the points display a nonrandom pattern, with or without one or more points falling beyond the control limits, the process may be declared out-of-control. Thus, the constellation of points, along with the type of the displayed pattern in a control chart can provide useful clues about the possible presence and diagnosis of assignable causes, which can then be dealt with in an appropriate manner. In this work, we propose a methodology using randomness tests based on the theory of runs that can be applied in a supplementary manner in order to assess the statistical significance of a pattern on a Phase I Shewhart X̄ chart in an objective way. Five common nonrandom patterns with corresponding tests are considered. The performance of the tests is evaluated in terms of their false alarm rate and power, via simulation. An illustration based on some real data is provided. Conclusions and practical recommendations are offered.
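As a concrete instance of a runs-based randomness test of the kind the abstract describes, here is the classical Wald–Wolfowitz runs test applied to points above/below the centerline (the paper's own five pattern-specific tests are not reproduced; this is a generic sketch and assumes points fall on both sides of the centerline).

```python
from statistics import NormalDist

def runs_test(points, center):
    """Two-sided Wald-Wolfowitz runs test for randomness of chart points
    about the centerline. Returns the run count and a normal-approximation
    p-value; a small p-value flags a nonrandom pattern."""
    signs = [p > center for p in points if p != center]
    n1 = sum(signs)            # points above the centerline
    n2 = len(signs) - n1       # points below
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mu = 1 + 2 * n1 * n2 / (n1 + n2)
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / var ** 0.5
    return runs, 2 * (1 - NormalDist().cdf(abs(z)))

# A strictly alternating (sawtooth) pattern: too many runs to be random.
runs, p = runs_test([1, -1] * 10, center=0)
```

Systematic oscillation yields too many runs, while mixtures or trends yield too few; both tails of the test are informative for diagnosis.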
- Research Article
9
- 10.1080/03610920903168636
- Aug 11, 2010
- Communications in Statistics - Theory and Methods
A single control chart, the Max chart, proposed by Chen and Cheng (1998) and slightly modified by Chen and Huang (2006), was shown to simplify and perform as effectively as joint X̄ and S charts for monitoring both the process mean and standard deviation. This article studies the economic design of Max charts, and a simplified algorithm is applied to determine the optimal settings of three control chart parameters: the sample size, sampling interval, and control limit. A numerical example is presented to illustrate its application. A fractional factorial experiment is conducted to study the sensitivity of the input parameters on the optimal designs of the Max chart and the joint X̄ and S charts. Statistical and economic properties including in-control average time to signal (ATS), out-of-control ATS, adjusted average time to signal (AATS), and minimum expected cost per hour of the two charts are compared. The results show that the two charts operate with approximately equal cost but different false alarm rate and detecting speed for the assignable cause, both without and with consideration of the process failure mechanism. Critical input parameters that cause these differences are then investigated. We conclude that the Max chart can be a good alternative to joint X̄ and S charts not only for its statistical properties and simplicity but also on economic grounds.
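The Max statistic itself (Chen and Cheng's construction: the larger of the standardized subgroup mean and the normal-score-transformed subgroup variance) can be sketched as below. The chi-square CDF is replaced here by the Wilson–Hilferty normal approximation to keep the sketch self-contained; an exact implementation would use the true chi-square distribution.

```python
from math import sqrt
from statistics import NormalDist

PHI = NormalDist()

def chi2_cdf_wh(x, k):
    """Wilson-Hilferty normal approximation to the chi-square(k) CDF."""
    c = 2.0 / (9.0 * k)
    return PHI.cdf(((x / k) ** (1.0 / 3.0) - (1.0 - c)) / sqrt(c))

def max_statistic(sample, mu0, sigma0):
    """Max chart statistic: max of |standardized mean| and |normal score
    of the sample variance|, so one chart monitors mean and spread."""
    n = len(sample)
    xbar = sum(sample) / n
    s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)
    z = (xbar - mu0) / (sigma0 / sqrt(n))
    u = chi2_cdf_wh((n - 1) * s2 / sigma0 ** 2, n - 1)
    y = PHI.inv_cdf(min(max(u, 1e-12), 1 - 1e-12))  # clamp to avoid +/-inf
    return max(abs(z), abs(y))

m_in = max_statistic([0.5, -0.8, 1.1, -0.3, 0.2], mu0=0.0, sigma0=1.0)
m_shift = max_statistic([3.5, 2.2, 4.1, 2.7, 3.2], mu0=0.0, sigma0=1.0)
```

A single upper control limit on this statistic then plays the role of the pair of limits on the separate X̄ and S charts.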
- Research Article
2
- 10.1080/03610918.2021.2009865
- Nov 22, 2021
- Communications in Statistics - Simulation and Computation
Recent research has shown that a single control chart, the Max chart, simplifies and performs as effectively as joint X̄ and S charts for monitoring both the process mean and standard deviation. This paper develops an economic design of the Max chart based on a unified model and embellishes it with Taguchi’s quality loss function. A simplified algorithm is applied to determine the optimal settings of three control chart parameters: the sample size, the sampling interval and the control limit. A numerical example is presented to illustrate its application. A fractional factorial experiment is conducted to study the sensitivity of the input parameters on the optimal designs of the Max chart and the joint X̄ and S charts. Statistical and economic properties including in-control average time to signal (ATS), out-of-control ATS, adjusted average time to signal (AATS) and minimum expected cost per hour of the two charts are compared. The results show that the two charts operate with approximately equal cost but different false alarm rate and detecting speed for the assignable cause, both without and with consideration of the process failure mechanism. Critical input parameters that cause these differences are then investigated. This paper concludes that the Max chart can be a good alternative to joint X̄ and S charts not only for its statistical properties and simplicity but also on economic grounds.
- Research Article
1
- 10.5897/ajbm11.099
- Feb 21, 2013
- AFRICAN JOURNAL OF BUSINESS MANAGEMENT
Variable sample size and sampling interval (VSSI) control charts have been shown to be superior to standard Shewhart (SS) control charts for detecting a small or moderate shift in the process mean. However, they may consume more resources through more frequent sampling and larger sample sizes to achieve this improved performance. Recently, some economic models were used to express the long-run cost per hour of operating the VSSI control charts and gain insight into the way to design the charts. The usual assumption for the models is the normality of the underlying data or measurements. However, this assumption may not be true in practice. In this paper, an economic design of the VSSI control charts for skewed non-normal data is conducted by using the Markov chain approach and genetic algorithms. Two types of VSSI charts are considered and compared with the SS charts over several numerical examples: those with symmetric control limits and those with asymmetric control limits. Moreover, effects of non-normality on the performance of the VSSI charts with respect to the costs of operating the charts are studied. It is shown that a reduction in cost can be achieved by using the VSSI charts instead of the SS charts. However, an increase in the skewness coefficient results in a decrease in the cost savings. In addition, asymmetric control limits are a better choice with respect to both cost and the false alarm rate. Key words: Skewed non-normal, control chart, genetic algorithms, economic design.
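The VSSI switching rule underlying such designs is simple to state: a point in a central region keeps the relaxed (small sample, long interval) design, a point in a warning region switches to the tight one, and a point beyond the control limit signals. The sketch below uses hypothetical warning/control coefficients and design pairs; the paper's economically optimal values come from its Markov-chain/genetic-algorithm search.

```python
def vssi_next_design(z, w=0.8, k=3.0, relaxed=(5, 2.0), tight=(10, 0.5)):
    """VSSI rule: given the current standardized point z, return the
    (sample size, sampling interval hours) to use next, or "signal".
    w and k are the warning and control coefficients (assumed values)."""
    if abs(z) > k:
        return "signal"               # out-of-control: stop and investigate
    return tight if abs(z) > w else relaxed

print(vssi_next_design(0.2))   # central region -> relaxed design
print(vssi_next_design(1.5))   # warning region -> tight design
print(vssi_next_design(3.4))   # beyond the control limit
```

The economic design problem is then to choose `w`, `k`, and the two (size, interval) pairs that minimize long-run cost per hour.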
- Research Article
108
- 10.1016/j.ins.2008.09.022
- Oct 8, 2008
- Information Sciences
Development of fuzzy [formula omitted] and [formula omitted] control charts using α-cuts
- Research Article
71
- 10.1002/nav.21557
- Oct 25, 2013
- Naval Research Logistics (NRL)
This article considers the problem of monitoring Poisson count data when sample sizes are time varying without assuming a priori knowledge of sample sizes. Traditional control charts, whose control limits are often determined before the control charts are activated, are constructed based on perfect knowledge of sample sizes. In practice, however, future sample sizes are often unknown. Making an inappropriate assumption of the distribution function could lead to unexpected performance of the control charts, for example, excessive false alarms in the early runs of the control charts, which would in turn hurt an operator's confidence in valid alarms. To overcome this problem, we propose the use of probability control limits, which are determined based on the realization of sample sizes online. The conditional probability that the charting statistic exceeds the control limit at present given that there has not been a single alarm before can be guaranteed to meet a specified false alarm rate. Simulation studies show that our proposed control chart is able to deliver satisfactory run length performance for any time-varying sample sizes. The idea presented in this article can be applied to any effective control charts such as the exponentially weighted moving average or cumulative sum chart. © 2013 Wiley Periodicals, Inc.
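A simplified, per-sample version of the idea can be sketched as follows: once the current sample size is realized, set the upper limit at the smallest count whose exceedance probability under the in-control Poisson model is at most the target false alarm rate. (The article's actual limits are conditional on no prior alarm over the whole run; this unconditional sketch only illustrates the "limit computed online from the realized sample size" mechanism, with an assumed in-control rate.)

```python
from math import exp

def poisson_ucl(rate, n, alpha):
    """Smallest integer c with P(X > c) <= alpha, X ~ Poisson(n * rate).
    The limit adapts to the realized sample size n, so a larger sample
    gets a proportionally higher count limit."""
    lam = n * rate
    p = exp(-lam)              # P(X = 0)
    cdf, c = p, 0
    while 1.0 - cdf > alpha:   # grow c until the tail is small enough
        c += 1
        p *= lam / c           # Poisson pmf recursion
        cdf += p
    return c

# In-control rate 2 events per unit of exposure (an assumed value):
print(poisson_ucl(rate=2, n=1, alpha=0.01))
print(poisson_ucl(rate=2, n=2, alpha=0.01))  # doubled sample size -> higher limit
```

Because counts are discrete, the achieved false alarm probability is at most, not exactly, `alpha`; the probability-limit construction in the article addresses this conservatism through the conditional formulation.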
- Conference Article
1
- 10.1063/1.4887733
- Jan 1, 2014
The control chart is the most powerful tool in statistical process control. A control chart is a graphical display used to determine the presence of assignable causes so that prompt corrective actions can be taken to remove such causes before many nonconforming products are produced. The exponentially weighted moving average (EWMA) and moving average (MA) charts are very effective in detecting small and moderate shifts in the process mean. These two charts are constructed based on the properties of the normal distribution. In many practical applications, the validity of the normality assumption is always doubted as the process distribution could be skewed. A skewed distribution can result in a higher incidence of false alarms. This is due to the inconsistencies between the spread of a skewed distribution and the normality assumption employed in setting up a control chart. This paper studies the effects of a skewed distribution on the performances of the EWMA and MA charts, in terms of the charts' false alarm rates. We compare the in-control average run length (ARL0) performance of these two charts when the underlying distributions are normal and skewed. The gamma distribution is selected to represent the skewed distribution. A Monte Carlo simulation using the Statistical Analysis System (SAS) software is carried out to compute the necessary ARL0s. The findings of this study show that the ARL0 performance of the EWMA and MA charts is substantially affected by the skewed distribution. However, the MA chart is not as robust as the EWMA chart, in terms of the ARL0, when the distribution is skewed.
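The Monte Carlo comparison described above can be sketched in a few lines: simulate run lengths of a two-sided EWMA chart under a normal and under a standardized skewed in-control distribution and compare the average. The smoothing constant, limit multiplier, and replication count below are illustrative choices, not the paper's settings, and a standardized exponential (gamma with shape 1) stands in for the skewed distribution.

```python
import random
from math import sqrt

def ewma_run_length(draw, lam=0.2, L=2.7, max_n=10_000):
    """Run length of a two-sided EWMA chart with asymptotic limits,
    for draws from a zero-mean, unit-variance in-control distribution."""
    limit = L * sqrt(lam / (2 - lam))
    z = 0.0
    for t in range(1, max_n + 1):
        z = lam * draw() + (1 - lam) * z
        if abs(z) > limit:
            return t
    return max_n

random.seed(1)
normal = lambda: random.gauss(0, 1)
skewed = lambda: random.expovariate(1.0) - 1.0   # mean 0, variance 1, skewed

reps = 200
arl_normal = sum(ewma_run_length(normal) for _ in range(reps)) / reps
arl_skewed = sum(ewma_run_length(skewed) for _ in range(reps)) / reps
```

A shorter in-control ARL under the skewed draws corresponds to the higher false alarm incidence the paper attributes to skewness; an MA chart can be compared by swapping in a moving-average statistic for the EWMA recursion.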
- Book Chapter
- 10.1007/978-3-642-59239-3_9
- Jan 1, 1997
The control chart, first developed by W. A. Shewhart, is a useful tool in statistical process control. It is an on-line process control technique used to detect the occurrence of any significant process change and to call for corrective action. Constructing a control chart is essentially equivalent to plotting the acceptance regions of a sequence of hypothesis tests over time. For example, the X̄-chart is a control chart used to monitor the process mean μ. It plots the sample means X̄ of subgroups of the observed {X₁, X₂, …} and is equivalent to testing the hypotheses H₀: μ = μ₀ vs. Hₐ: μ ≠ μ₀ (for some μ₀ required by the engineers) over time, using X̄ as the test statistic. Here we assume that {X₁, X₂, …} are sample measurements of a particular quality characteristic from a distribution F whose mean is μ and standard deviation σ. When there is not enough evidence to reject H₀, the process is said to be in control. Otherwise it is said to be out of control. The decision rule to accept or reject H₀ is based on the value of X̄, the sample mean of observations taken at each time. These decision rules are graphically displayed in the control chart as the upper and the lower control limits (UCL and LCL). The region between the control limits is the acceptance region of H₀. The process is considered out of control when an observed sample mean falls outside the limits. When this occurs, it suggests that the process may have been affected by some assignable causes, and investigation of these causes should then be initiated. As in hypothesis testing, to obtain the control limits, we need to find the sampling distribution of X̄ − μ when H₀ is true. More precisely, for a given α, we need to locate two values, L and U, such that, under H₀, P(L < X̄ − μ₀ < U) = 1 − α. (1.1)
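For normal data with known σ, equation (1.1) is solved by symmetric normal quantiles, and choosing α = 0.0027 recovers the familiar 3-sigma limits. The sketch below illustrates this with assumed values of μ₀, σ, and subgroup size.

```python
from math import sqrt
from statistics import NormalDist

def xbar_test_limits(mu0, sigma, n, alpha=0.0027):
    """Control limits as the acceptance region of H0: mu = mu0, using the
    normal sampling distribution of the subgroup mean X-bar ~ N(mu0, sigma^2/n).
    alpha = 0.0027 reproduces the usual 3-sigma limits."""
    half = NormalDist().inv_cdf(1 - alpha / 2) * sigma / sqrt(n)
    return mu0 - half, mu0 + half    # (LCL, UCL): mu0 + (L, U) of eq. (1.1)

# Hypothetical target mean, process sigma, and subgroup size:
lcl, ucl = xbar_test_limits(mu0=10.0, sigma=0.6, n=4)
```

Here z_{0.99865} ≈ 3, so the limits land at μ₀ ± 3σ/√n, i.e. 10 ± 0.9 in this example.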
- Research Article
1
- 10.1088/1742-6596/1511/1/012054
- Mar 1, 2020
- Journal of Physics: Conference Series
The main purpose of quality control is to detect quickly the presence of assignable causes and shifts in the process so that an investigation of the process can be carried out as early as possible. The Shewhart control chart performs well when the observed data are normally distributed, whereas when the normality assumption is not met, a robust control chart is needed. The performance of a control chart depends on the stability of the estimator used to estimate the process parameters and establish control limits in Phase I. In this study, a robust Exponentially Weighted Moving Average (EWMA) control chart is presented for monitoring process variability, using a robust scale estimator to estimate the standard deviation. This estimator is used to develop robust control limits. The chart's performance is then evaluated using the Average Run Length (ARL) and the Standard Deviation of the Run Length (SDRL) with Monte Carlo simulation. Furthermore, the robust chart was applied to monitor a quality characteristic, the number of bacterial colonies, in an aquaculture medicine product. The result of this study is a control chart that is resistant to the presence of outliers and sensitive to shifts in process variability.
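One common robust scale estimator of the kind the abstract refers to is the median absolute deviation (MAD); the sketch below shows why it resists the Phase I outliers that inflate the ordinary standard deviation. (The specific robust estimator used in the paper is not named in the abstract, so MAD here is an illustrative stand-in.)

```python
import statistics

def mad_sigma(data):
    """Robust sigma estimate from the median absolute deviation.
    The 1.4826 factor makes MAD consistent for normally distributed data."""
    med = statistics.median(data)
    return 1.4826 * statistics.median(abs(x - med) for x in data)

# Phase I data with one gross outlier (hypothetical counts):
phase1 = [10, 11, 9, 10, 12, 10, 9, 11, 10, 50]
print(mad_sigma(phase1))           # barely affected by the outlier
print(statistics.stdev(phase1))    # badly inflated by the outlier
```

Control limits built from `mad_sigma` instead of the sample standard deviation stay tight around the clean data, so the chart keeps its sensitivity to genuine variability shifts.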
- Research Article
7
- 10.1111/j.1600-0420.2005.00625.x
- May 16, 2006
- Acta Ophthalmologica Scandinavica
Editor, The frequency of occurrence of acute infectious endophthalmitis is a good indicator of the quality of care given after cataract surgery, which is of increasing importance to both the users and providers of health care. We used quality control charts to plot the variability in the rate of endophthalmitis. These charts represent a monitoring system that sorts out ‘signals’ from ‘background noise’ (Adab et al. 2002). The method uses continuous monitoring of process variation and categorizes it into common cause or special cause variation. Successful monitoring involves identifying particular causes and then taking appropriate action against them once they have been identified. It has been used effectively in monitoring outcomes in cardiothoracic and gastro-oesophageal cancer surgery (Poloniecki et al. 1998; Tekkis et al. 2003). This is the first study demonstrating the use of control charts in ophthalmology. Data were collected retrospectively on the number of cases of acute presumed infectious endophthalmitis (PIE) that occurred after cataract surgery at two hospitals in South Wales, UK between April 2000 and March 2004. Acute PIE was defined as any clinical suspicion of endophthalmitis in a patient presenting within 3 months of cataract surgery (Desai et al. 1999). During the study period, 11 616 cataract operations were performed and 21 cases of endophthalmitis were recorded. The rates of endophthalmitis during the 4 years were 1.48, 1.49, 3.96 and 0.66 per 1000 cataract operations, respectively. Figure 1 shows that the mean rate of endophthalmitis over the 4 years of the study was 1.9 per 1000 cataract operations. The upper control limit, based on 3 standard deviations (SDs) from the mean, was six per 1000. The lower control limit, 3 SDs below the mean, was taken to be zero, as the calculated value was negative. A second control chart was constructed to plot the degree of deviation of our rate of endophthalmitis from the UK national rate, as shown in Fig. 2 (Kamalarajah et al. 2004). Figure 1: control chart showing our endophthalmitis rates over 4 years; the centre line represents 1.9, the upper control limit (UCL) is 6 and the lower control limit (LCL) is 0; three instances of special cause variation are seen as outliers above the UCL. Figure 2: control chart comparing our rate of endophthalmitis against the national rate; the centre line represents the national average rate of endophthalmitis, plotted at 1.4 per 1000 cataract operations, and the UCL is 4.9, the upper and lower control limits being 3 SDs above and below the given mean; our hospital data for the 4 years of the study are superimposed for comparison against the national standard. Explanations were sought for the three events where the rate of endophthalmitis exceeded the upper control limit (Fig. 1), because they were special cause variations that occurred within the process. In May 2000, although fewer cataract operations were performed relative to the number of endophthalmitis cases that occurred during the same period, a special cause variation showed up on the graph as an outlier. This is an example of an outlier with a simple explanation. The second instance of special cause variation coincided with the relocation of the surgical site to a new building. In February and March 2003, a third unnatural variation required a clinical explanation, and a full multidisciplinary investigation was undertaken. The control chart (Fig. 1) showed signals within the existing background noise that represented a warning about the increase in the rate of endophthalmitis. Most units will experience a spurious outbreak of endophthalmitis from time to time. Control charts can indicate when concern should give way to action: the level at which the limits are set determines when intervention may be indicated.
More prospective studies on both regional and national levels are needed to set control limits and a national registry of endophthalmitis would enable individual units to compare outcomes. It would also identify models of good practice in surgical units where rates are low.
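The mean-plus-3-SD construction the letter describes can be sketched for event-rate data as below. This Poisson-based version is one common way to set such limits; the letter computes its limits from the spread of its own periodic rates, so the exact numbers differ, but the floor-at-zero behaviour of the lower limit is the same. The event counts per year are illustrative values consistent with the quoted totals.

```python
from math import sqrt

def rate_chart_limits(events, operations, per=1000):
    """Centerline and 3-SD control limits for an event rate, expressed
    per `per` operations, treating counts as Poisson. A negative lower
    limit is set to 0, as in the letter."""
    rate = sum(events) / sum(operations)       # pooled event probability
    n_bar = sum(operations) / len(operations)  # average denominator per period
    half = 3 * sqrt(rate / n_bar)              # 3 SDs of a period's rate
    lcl = max(rate - half, 0.0)
    return lcl * per, rate * per, (rate + half) * per

# Roughly 2904 operations/year for 4 years, 21 events total (illustrative split):
lcl, center, ucl = rate_chart_limits([4, 4, 11, 2], [2904] * 4)
```

With 21 events in 11 616 operations, the pooled centerline comes out near 1.8 per 1000, close to the 1.9 the letter quotes, and the lower limit clips to zero exactly as described.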
- Research Article
8
- 10.1111/j.1151-2916.1944.tb14870.x
- Dec 1, 1944
- Journal of the American Ceramic Society
Many glass plants obtain daily measurements of the density or specific gravity of the glass in each furnace. Small daily fluctuations of about ±0.0010 density units are usually taken for granted, while pronounced changes within a two‐ or three‐day period are a matter of concern; but heretofore neither criteria of permissible variability nor rules for interpretation of the data have been in general use. In the present work, the control‐chart method of statistical analysis of past data has been applied to data from ten glass furnaces. Small daily fluctuations of density are found to be statistical in character, and the predominant cause of large variations is found to be in the batch house. The rational subgroup sample to be used in analyzing such variations and in operating a control chart is found to be a subgroup of three consecutive daily density values obtained from a particular furnace. Using this subgroup, the average 3‐day range of density for the ten furnaces varied from 0.0006 to 0.0023, and the corresponding 3‐sigma limits for daily variation from the central line density were ±0.0011 to ±0.0040. A typical value for the average 3‐day range of density is 0.0012 and a value no larger than this is a reasonable goal for a glass container plant. The use of control charts for maintaining a state of statistical control of density during production is illustrated for four furnaces over a 2‐ to 6‐month period. Many assignable causes of variation were found in the batch house, usually in the scales; other assignable causes were changes in cullet and in raw materials, changes in firing of the furnace, and laboratory errors in measurement of the density. Present experience indicates that it is difficult to maintain a state of statistical control with the types of batch‐weighing equipment in use in some plants.
The importance of control, however, was demonstrated for two furnaces in two different plants by the fact that cordiness increased with increasing 3‐day range of density. When the density was not maintained under statistical control in one plant, trouble was experienced with checks in the ware. The use of control charts for keeping lack of control within tolerable limits is discussed for one furnace where the variations were small and the control limits narrow. The range was held under control, but the density showed “trends” and went out of control. In this instance, the 3‐sigma control limits for variation of daily values from the central line density were ±0.0011, corresponding to ±0.09% replacement of lime by silica. Inasmuch as composition changes in excess of ±0.09% are tolerable in the present state of the art, a modified control limit corresponding to a composition change of approximately ±0.25% is suggested, the corresponding density limits being ±0.0030. When the 3‐sigma limits for density are less than this value, modified limits may be used, although the 3‐sigma limits for range are retained. When the 3‐sigma limits are greater than ±0.0030, it is most desirable to maintain strict statistical control, and efforts should be made to reduce the variability; otherwise there may be excessive cordiness and other difficulties in fabrication of the ware. In some instances, a reduction in variability will require major repair of batch handling and weighing equipment or a new batch‐house weighing installation. Other subgroup methods and other sources of variability are also discussed. Control charts on density are of practical utility to plants. “Assignable‐cause” variations are easily distinguished from unimportant, normal variations. The use of 3‐sigma action limits keeps investigation of fluctuations to a minimum, and sets troubleshooting, when it is necessary, on the right track.
The charts, furthermore, are a useful guide toward a permanent reduction of the variability. They should be helpful to management in striking an economic balance among tonnage pulled, glass quality, and capital expenditures for improvement of batch mixing and handling and other changes. The time required to maintain a chart for one furnace is about one day for past‐data analysis, one minute each day for plotting, and not more than one day per month for current analysis, review, and adjustment.
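The range-based chart described above (subgroups of three consecutive daily values, limits derived from the average 3-day range) follows the standard X̄–R construction, which can be sketched with the usual tabled constants. The grand mean below is a placeholder; with the article's typical average range of 0.0012, the X̄ half-width A₂·R̄ comes out near the ±0.0011 limits quoted.

```python
def xbar_r_limits(grand_mean, r_bar, n=3):
    """X-bar and R chart limits from the average subgroup range, using
    standard control-chart constants (A2, D3, D4) for subgroups of size n."""
    consts = {3: (1.023, 0.0, 2.574)}  # n=3 row of the standard constants table
    a2, d3, d4 = consts[n]
    xbar_limits = (grand_mean - a2 * r_bar, grand_mean + a2 * r_bar)
    r_limits = (d3 * r_bar, d4 * r_bar)
    return xbar_limits, r_limits

# Placeholder central-line density, typical 3-day range from the article:
(x_lcl, x_ucl), (r_lcl, r_ucl) = xbar_r_limits(grand_mean=2.5, r_bar=0.0012)
```

The article's suggested modified limits (±0.0030, from the tolerable ±0.25% composition change) would then replace the X̄ limits while the R-chart limits are retained, exactly as described.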