- Research Article
- 10.1080/08982112.2026.2623244
- Jan 30, 2026
- Quality Engineering
- Mahmut Onur Karaman + 1 more
Process capability indices are commonly used to summarize a process's capability based on a single quality characteristic. However, modern production processes usually involve multiple, often correlated quality characteristics, rendering univariate indices insufficient to fully assess the capability of a process. This has led to the development of multivariate process capability indices (MPCI). Although various index formulations have been proposed in the literature, they differ greatly in their approach, interpretation, and sensitivity to parameters such as the "non-centeredness" of the process, the number of quality characteristics, and their correlation structure. In this Quality Quandaries, we examine a selection of the available MPCI. We then conduct a sensitivity analysis to examine how these indices respond to changes in certain process parameters. Finally, we illustrate these findings using a case study and emphasize the need for careful interpretation when applying MPCI in practice.
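For readers new to the topic, one long-standing family of MPCI compares the volume of the rectangular specification region with the volume of the ellipsoid expected to contain 99.73% of the fitted multivariate normal process. The Python sketch below is a minimal illustration of that idea, not the specific indices examined in the article; the function name, the 99.73% coverage convention, and the simulated specification limits are assumptions for the example.

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import gamma

def volume_ratio_mpci(data, lsl, usl, coverage=0.9973):
    """Ratio of the rectangular tolerance region's volume to the volume of the
    ellipsoid expected to contain `coverage` of the fitted normal process.
    Values above 1 suggest the process spread fits inside the tolerances;
    note this simple form ignores how well the process is centered."""
    data = np.asarray(data)
    p = data.shape[1]
    S = np.cov(data, rowvar=False)                      # sample covariance matrix
    tol_volume = np.prod(np.asarray(usl) - np.asarray(lsl))
    k = chi2.ppf(coverage, df=p)                        # squared Mahalanobis radius
    ellipsoid_volume = (np.pi ** (p / 2) / gamma(p / 2 + 1)
                        * k ** (p / 2) * np.sqrt(np.linalg.det(S)))
    return tol_volume / ellipsoid_volume

# Illustrative use on simulated, correlated bivariate data
rng = np.random.default_rng(0)
X = rng.multivariate_normal([50.0, 30.0], [[4.0, 2.4], [2.4, 3.0]], size=500)
print(volume_ratio_mpci(X, lsl=[40.0, 20.0], usl=[60.0, 40.0]))
```

Even in this toy form, the index changes with the correlation between the two characteristics and with the number of characteristics, which is exactly the kind of sensitivity the article investigates.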
- Research Article
- 10.1080/08982112.2026.2615238
- Jan 10, 2026
- Quality Engineering
- Christine M Anderson-Cook + 1 more
Problem-solving is central to the contributions that statisticians and data scientists can make as part of collaborative interdisciplinary teams. Getting up to speed quickly with a problem, identifying its critical aspects, and assessing how statistics can improve the solution are all important parts of becoming a valued and valuable contributor to solving complex problems. In this article, we explore three general problem-solving tactics that can increase the impact of statisticians and build confidence in tackling messy multi-stage problems. Problem decomposition focuses on breaking large, messy problems into manageable pieces. Approximation identifies key components of a complex problem, which can help prioritize resources, maximize improvement opportunities, and identify areas of weakness. Analogies leverage successful results from closely or more distantly related areas that provide ideas or lead to potential paths to solutions. We discuss aspects of these general approaches where statisticians can make unique contributions. A complex problem tackled by a team including the two authors illustrates how the different tactics were used and combined for an enhanced solution. We also share ideas for how teaching and practicing these skills can be incorporated into statistics/data science training and our daily lives.
- Research Article
- 10.1080/08982112.2024.2428235
- Jan 2, 2026
- Quality Engineering
- Marco S Reis
Developing analytical solutions for the process industry is a judicious exercise of exploring the available information sources, handling constraints, and overcoming limitations of different natures. These solutions should be flexible, robust, and operationalizable, and must be able to cope with the fundamental characteristics of the systems and the data they generate. Their development often requires a pragmatic, problem-oriented perspective, which can yield different proposals compared to those derived with a method-centric focus. Industrial Process Analytics aims to provide the proper context, principles, and methods for developing holistic approaches by bringing together expert knowledge, first principles, and data induction. This work presents the different levels of industrial challenges that must be considered (related to the nature of the systems and the specifics of data collection), as well as the macro-organization of analytical goals, referred to as the Industrial Process Analytics Ladder, which provides a coarser view of the plethora of problems that can be addressed. Existing and emerging challenges pertaining to each step of the Industrial Process Analytics Ladder are briefly reviewed, and some solutions proposed to address them are presented.
- Research Article
- 10.1080/08982112.2024.2430610
- Jan 2, 2026
- Quality Engineering
- Joanne R Wendelberger
Interdisciplinary problem-solving draws upon expertise from multiple fields and often requires teams of individuals from different disciplines working together to address a complex challenge. Statisticians can play an important role in addressing interdisciplinary challenges by providing a statistical framework for modeling and analysis. In this paper, statistical distributions and metrology concepts will be used to provide a foundation for modeling errors, constructing different types of statistical intervals, and characterizing error transmission to support further modeling and analysis. Examples of statistical solutions, developed as part of the collaborative process of interdisciplinary problem-solving, will be discussed that involve methods for Design and Analysis of Experiments, Functional Data Analysis, Predictive Analytics, and Data Science.
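As a generic illustration of the error-transmission idea mentioned above (not one of the paper's own examples), the sketch below propagates independent measurement errors through a nonlinear derived quantity, first with the first-order delta method and then with a Monte Carlo check; the derived quantity g and the error magnitudes are assumed purely for illustration.

```python
import numpy as np

# Hypothetical measured quantities with independent Gaussian measurement errors
mu = np.array([2.0, 5.0])          # nominal measured values (assumed)
sd = np.array([0.05, 0.10])        # measurement standard deviations (assumed)

def g(v):
    """Assumed derived quantity: product of the two measurements."""
    return v[0] * v[1]

# First-order (delta-method) error transmission: var(g) ~ sum (dg/dx_i * sd_i)^2
grad = np.array([mu[1], mu[0]])    # analytic gradient of g evaluated at mu
delta_sd = np.sqrt(np.sum((grad * sd) ** 2))

# Monte Carlo check of the same error transmission
rng = np.random.default_rng(1)
samples = g(rng.normal(mu[:, None], sd[:, None], size=(2, 100_000)))
print(f"delta-method sd: {delta_sd:.4f}, Monte Carlo sd: {samples.std():.4f}")
```

The two standard deviations agree closely when the errors are small relative to the curvature of g, which is the usual condition for first-order error propagation to be adequate.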
- Discussion
- 10.1080/08982112.2025.2470361
- Jan 2, 2026
- Quality Engineering
- Antonio Lepore
- Discussion
- 10.1080/08982112.2025.2520227
- Jan 2, 2026
- Quality Engineering
- Marco S Reis
- Research Article
- 10.1080/08982112.2024.2428233
- Jan 2, 2026
- Quality Engineering
- Marcus B Perry
- Discussion
- 10.1080/08982112.2025.2503868
- Jan 2, 2026
- Quality Engineering
- Victoria S Jordan
The article by Dr. Rigdon et al. highlights the importance of assessing and addressing errors in collected data. One cannot make informed decisions if the data is incorrectly assumed to be complete and accurate. The authors share innovative and reasonable approaches to addressing “gaps” in the data in order to use the data for further research. This is critical for any work using this data to analyze the vaccination rates during the COVID pandemic. It also highlights questions about how such data is collected. Perhaps work such as this will lead us to rethink the decentralized approach to collecting health data across states (and sometimes by county within states) and the need to implement a standard approach to collecting and reporting health information.
- Research Article
- 10.1080/08982112.2025.2567562
- Jan 2, 2026
- Quality Engineering
- Kelly Ayres + 5 more
When COVID-19 vaccines were introduced in late 2020 and widely distributed in early 2021, states were responsible for collecting and managing the data. In the best situation, states kept accurate records of each person who received the vaccine, including the age and the county of residence. States reported the cumulative number of those vaccinated in each county, although there were substantial numbers of vaccine recipients (within a given state) whose county of residence was unknown. Some states had very low numbers of vaccine recipients with unknown county, while other states reported upwards of 50% "unknown county of residence." At the extreme, Texas did not report the county of residence until October 2021, although it did report the state-wide total. A number of states reported a nearly simultaneous jump in the cumulative number of those vaccinated whose county of residence was known and a drop in the number of "unknowns," likely caused by a retrospective analysis and reallocation of those whose county of residence was unknown. A further problem occurs when the cumulative number of vaccinations drops. We describe how we created a database for county-level vaccine data that addresses these data quality issues.
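The abstract does not spell out the cleaning rules used to build the database, so the following sketch is only a hypothetical illustration of screening for two of the issues it names, drops in a cumulative series and abrupt jumps consistent with retrospective reallocation; the column names and the jump threshold are assumptions, not the authors' method.

```python
import pandas as pd

def flag_cumulative_issues(df, jump_factor=5.0):
    """Hypothetical screening of one county's cumulative vaccination series.

    Expects columns 'date' and 'cumulative_vaccinated' (assumed names).
    Flags decreases, which a true cumulative count should never show, and
    unusually large single-period jumps, such as those produced when
    'unknown county' records are retrospectively reallocated.
    """
    df = df.sort_values("date").copy()
    diffs = df["cumulative_vaccinated"].diff()
    df["drop_flag"] = diffs < 0                         # cumulative count fell
    typical = diffs[diffs > 0].median()                 # typical per-period increase
    df["jump_flag"] = diffs > jump_factor * typical     # suspiciously large spike
    return df
```

Flagged rows could then be handled by whatever correction rule is appropriate, for example smoothing, interpolation, or reallocation back to earlier dates, which is the kind of decision the article addresses in detail.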
- Research Article
- 10.1080/08982112.2025.2552420
- Jan 2, 2026
- Quality Engineering
- Andrew Cooper + 2 more
Gaussian processes (GPs) are powerful tools for nonlinear classification in which latent GPs are combined with link functions. But GPs do not scale well to large training data. This is compounded for classification, where the latent GPs require Markov chain Monte Carlo integration. Consequently, fully Bayesian, sampling-based approaches have been largely abandoned. Instead, maximization-based alternatives, such as Laplace/variational inference (VI) combined with low-rank approximations, are preferred. Though feasible for large training data sets, such schemes sacrifice uncertainty quantification and modeling fidelity, two aspects that are important to our work on surrogate modeling of computer simulation experiments. Here we are motivated by a large-scale simulation of binary black hole (BBH) formation. We propose an alternative GP classification framework which uses elliptical slice sampling for Bayesian posterior integration and the Vecchia approximation for computational thrift. We demonstrate superiority over VI-based alternatives for BBH simulations and other benchmark classification problems. We then extend our setup to warped inputs for "deep" nonstationary classification.
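Elliptical slice sampling (Murray, Adams, and MacKay, 2010) is a rejection-free MCMC update for latent Gaussian models of exactly this kind. The sketch below shows one generic such update in Python under a zero-mean GP prior; it is a textbook version for orientation, not the authors' Vecchia-approximated implementation, and the function and argument names are assumptions.

```python
import numpy as np

def elliptical_slice_update(f, prior_chol, log_lik, rng):
    """One elliptical slice sampling update (Murray, Adams & MacKay, 2010).

    f          : current latent vector under a zero-mean GP prior
    prior_chol : lower Cholesky factor of the prior covariance matrix
    log_lik    : function mapping a latent vector to its log-likelihood
    rng        : numpy Generator, e.g. np.random.default_rng()
    """
    nu = prior_chol @ rng.standard_normal(f.shape[0])    # auxiliary prior draw
    log_y = log_lik(f) + np.log(rng.uniform())           # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)                # initial angle on the ellipse
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_prop = f * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse
        if log_lik(f_prop) > log_y:
            return f_prop                                # accepted; loop always terminates
        if theta < 0.0:                                  # otherwise shrink the bracket
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

Because each update stays within the prior's elliptical slice and requires no step-size tuning, it is a natural choice for latent GP classification, while the scalability in the article comes from pairing it with the Vecchia approximation to the prior.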