Abstract

Purpose: To clarify the different types of data likely to occur in any service or industrial process, the main statistics applicable to each type of data, and the Six Sigma metrics that allow organisational processes to be characterised and benchmarked.

Methodology/Approach: A short review of statistical process control is given, from Shewhart's work to Motorola's achievements, followed by a brief discussion of Six Sigma tools as part of today's total quality approaches and of the continuous, attribute and counting data worlds and their main applications in process analysis. Because many quality professionals may have difficulty with engineering perspectives, the main classic and Six Sigma process metrics are reviewed with examples. To complement the discussion, four functions written in the R language are presented, which can handle real organisational data or be used for training purposes.

Findings: The functions developed provide useful graphical displays and calculate all the necessary metrics, and they allow the user to supply theoretical values for training activities. Real and simulated case studies help in understanding the data worlds and the respective Six Sigma metrics.

Research Limitation/Implication: This paper presents an intentionally simple theoretical perspective on Six Sigma metrics, together with user-friendly software available to all interested professionals on request to the authors.

Originality/Value of paper: The paper presents clear definitions of the main data types and metrics and is supported by a set of four new functions that can be used by any researcher with a minimum knowledge of the R software.
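
To make the continuous-data discussion concrete, the following minimal R sketch shows how classic capability indices and a Six Sigma level can be estimated from measurement data. The function name continuous_metrics, its arguments and the simulated example are assumptions made here for illustration; they are not the four functions presented in the paper.

  # Illustrative only: continuous_metrics and its arguments are hypothetical,
  # not the four functions presented in the paper.
  continuous_metrics <- function(x, lsl, usl) {
    mu    <- mean(x)
    sigma <- sd(x)                                  # overall standard deviation
    cp    <- (usl - lsl) / (6 * sigma)              # potential capability
    cpk   <- min(usl - mu, mu - lsl) / (3 * sigma)  # actual capability
    # expected fraction outside specification, assuming normality
    p_out <- pnorm(lsl, mu, sigma) + pnorm(usl, mu, sigma, lower.tail = FALSE)
    z     <- qnorm(1 - p_out) + 1.5                 # sigma level, 1.5-shift convention
    list(mean = mu, sd = sigma, Cp = cp, Cpk = cpk,
         DPMO = p_out * 1e6, sigma_level = z)
  }

  # Example with simulated data, as might be used for training purposes
  set.seed(123)
  x <- rnorm(100, mean = 10, sd = 0.2)
  continuous_metrics(x, lsl = 9.4, usl = 10.6)

The 1.5 added to the z value follows the usual Six Sigma convention of reporting a short-term sigma level shifted by 1.5 sigma relative to the long-term defect rate.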

Highlights

  • Six Sigma is essentially a methodology developed by Motorola to address manufacturing or service problems

  • It became a well-defined methodology to reduce variability, whose origins can be traced back to the beginning of the twentieth century and the work of Walter Shewhart (Mitra, 2016; Wadsworth, Stephens and Godfrey, 2007) on controlling the variability of high-yield machines. Shewhart proposed the use of statistics to control high-throughput processes, and the quality control charts he devised, which can be regarded as the beginning of statistical process control (SPC), were the main tool for following and controlling process performance and are still referred to as one of the seven basic quality tools (Mitra, 2016; Pyzdek and Keller, 2018)

  • The definition of data worlds follows that used in previous Minitab approaches (Brook, 2006), and the metrics closely follow the definitions of the American Society for Quality (ASQ) (Barsalou, 2015); a counting-data sketch illustrating these definitions is given after this list
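
As a counting-data counterpart to the continuous-data sketch above, the snippet below shows how the usual Six Sigma counting metrics (DPU, DPO, DPMO, yield and sigma level) could be computed in R. The function counting_metrics and the invoice example are hypothetical and serve only to illustrate the definitions.

  # Illustrative only: counting_metrics and the invoice example are hypothetical.
  counting_metrics <- function(defects, units, opportunities = 1) {
    dpu  <- defects / units                        # defects per unit
    dpo  <- defects / (units * opportunities)      # defects per opportunity
    dpmo <- dpo * 1e6                              # defects per million opportunities
    fty  <- exp(-dpu)                              # first-time yield (Poisson approximation)
    z    <- qnorm(1 - dpo) + 1.5                   # sigma level, 1.5-shift convention
    list(DPU = dpu, DPO = dpo, DPMO = dpmo,
         first_time_yield = fty, sigma_level = z)
  }

  # Example: 500 invoices inspected, 4 defect opportunities each, 12 defects found
  counting_metrics(defects = 12, units = 500, opportunities = 4)

With these numbers the sketch reports 6,000 defects per million opportunities, corresponding to roughly a four-sigma process under the 1.5-shift convention.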


Summary

Introduction

Theory, applications and developments can be found in many references, such as Aboelmaged (2010), Antony and Banuelas (2002), Firat et al. (2017), Ward, Poling and Clipp (2008), Thomas, Barton and Chuke-Okafor (2009), and Tjahjono et al. (2010). Six Sigma became a well-defined methodology to reduce variability, whose origins can be traced back to the beginning of the twentieth century and the work of Walter Shewhart (Mitra, 2016; Wadsworth, Stephens and Godfrey, 2007) on controlling the variability of high-yield machines. Shewhart's work is still valid, but the advent of computers and efficient software enabled the production of high-yield machines and high-throughput processes, forcing professionals to be increasingly concerned with precision and control, keeping process parameters within very narrow specification limits to avoid the production of defectives (waste) and rework (Antony and Banuelas, 2002; Marques et al., 2018; Pyzdek and Keller, 2018; Santos et al., 2006; Tjahjono et al., 2010).
