Abstract

The idea of this special issue arose on the occasion of the joint European Network for Business and Industrial Statistics (ENBIS) and Design of Industrial Experiments (DEINDE) 2007 Conference, held on April 11–13, 2007 in Torino (Italy) at the Department of Statistics and Applied Mathematics ‘Diego de Castro’. The aim of the Conference was to bring together the common areas of interest of the DEINDE and ENBIS organizations in design of experiments (DoE) techniques. ENBIS is an organization aiming at promoting the widespread use of sound, science-driven, applied statistical methods in European business and industry. More details on the ENBIS project can be found on the web site: http://www.enbis.org.

The DEINDE workshop was introduced in the early 1990s as a cooperation between the Politecnico di Torino and the University of Torino. DEINDE is a forum for researchers and practitioners alike to discuss topics related to industrial experimentation. Over the past eight editions, qualified experts have given presentations covering experimentation on large systems, dimensional scatter control on the body in white, process optimization, experiments on mixtures, parameter prediction via simulation, characterization of dedicated software, and experimentation in the food and drug industries. Since its first edition, the objective of DEINDE has been to encourage the connection between theory and application of statistical methodologies in different areas. The most important target is to link academic researchers and the actual users of statistical methods in companies; indeed, the last five editions of DEINDE had a fifty–fifty participation of academics and industrial and organizational users. More detailed information and links on the DEINDE workshops can be found at: http://calvino.polito.it/~vicario/deinde_2005.htm.

The organizing committee of the ENBIS-DEINDE 2007 Conference decided to publish papers with contributions in its areas of interest in a special volume of the journal Applied Stochastic Models in Business and Industry. The goal of this special issue is to give an overview of new developments in DoE and a cross-sectional view of current research in the particular area of computer experiments as it is applied today. In the last three decades, physical experiments have increasingly been replaced, entirely or partially, by numerical ones. A common motivation is that physical experimentation is sometimes infeasible or extremely expensive, whereas the use of numerical codes in the product/process development phase has become straightforward and quite inexpensive. Numerical codes can also represent very complex systems and reduce the effort at both the design and analysis stages, notably by cutting down the preparation of very expensive prototypes. The general availability of comprehensive computing facilities and recent progress in software development make numerical simulation of complex systems an attractive alternative to the execution of expensive and time-consuming physical experiments. Different techniques for modelling and designing a simulated experiment are available in the literature, but according to the more recent literature the most popular models are the Kriging ones, named after Daniel Krige, an engineer who worked in the South African mines. The underlying assumption is that the closer two input points are, the more positively correlated the corresponding responses are.
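To make this correlation assumption concrete, here is a minimal, purely illustrative Python sketch (not taken from any paper in this issue): a Gaussian correlation function and a simple zero-mean Kriging predictor on a hypothetical one-dimensional simulator, showing that predictions are driven by the responses at nearby design points. The kernel parameter theta and the test function are assumptions made only for the example.

```python
import numpy as np

def gauss_corr(a, b, theta=5.0):
    """Gaussian correlation: nearby inputs give correlation close to 1."""
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

# hypothetical 1-D deterministic simulator and a small design
simulator = lambda x: np.sin(2 * np.pi * x)
X = np.linspace(0.0, 1.0, 6)                   # design points
y = simulator(X)                               # simulated responses

# simple (zero-mean) Kriging predictor at new input points
R = gauss_corr(X, X) + 1e-10 * np.eye(len(X))  # small jitter for stability
x_new = np.array([0.05, 0.45, 0.95])
r = gauss_corr(x_new, X)                       # correlations with the design
y_hat = r @ np.linalg.solve(R, y)              # Kriging mean prediction

print(np.round(y_hat, 3), np.round(simulator(x_new), 3))
```

With this kernel the correlation decays as exp(-theta d^2) in the distance d between inputs, so a prediction point far from every design site reverts towards the assumed mean, while a point close to a design site essentially reproduces the response observed there.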
This popularity explains why an appreciable number of the papers presented at the joint ENBIS-DEINDE 2007 Conference concern the modelling and analysis of numerical experiments and a critical examination of assumptions inherited from physical experimentation when they are transferred to the numerical setting. Kriging models are also used because they allow quantifying the prediction uncertainty, which plays a major role in many applications.

The aim of the paper Assessment of uncertainty in computer experiments, from universal Kriging to Bayesian Kriging by Helbert, Dupuy and Carraro is to show that the prediction uncertainty has a correct interpretation only in the case of Bayesian Kriging, where the model parameters are treated as random variables, in contrast with the practice of many practitioners who use universal Kriging, where the parameters of the model are estimated. The authors study different choices of prior distribution and show that, for a specific prior, Bayesian Kriging supplies a conditional-variance interpretation of the prediction variance provided by universal Kriging. Finally, a simple petroleum-engineering example is outlined to highlight the importance of prior information in the Bayesian approach.

The goal of Ginsbourger, Dupuy, Badea, Carraro and Roustant, in A note on the choice and the estimation of Kriging models for the analysis of deterministic computer experiments, is to give an insight into some important questions to be asked when choosing a Kriging model for the analysis of numerical experiments. The authors are especially concerned with cases where the size of the experimental design is small relative to the dimension of the input space. The lack of reliability of likelihood maximization with few data and the consequences of a trend misspecification, subjects often skipped in the field of computer simulation analysis, are dealt with in two experimental studies. An original Kriging method in which a nonlinear additive model is used as an external trend is presented in an example from a porous-media application.

The paper Kriging-based sequential inspection plans for coordinate measuring machines by Pedone, Romano and Vicario offers a novel application of Kriging in the field of industrial metrology. Exploiting the recognized predictive capability of Kriging models, the authors use them to drive the online construction of sequential plans for inspecting industrial parts on coordinate measuring machines. These machines are universally adopted to check the compliance of parts with dimensional and geometric specifications; they are ‘statistical’ machines in that the part surfaces are probed in a sample of points. The inspection plan specifies which points are probed and in which order; moreover, the economy of the process forces the sample to be small. As the best accuracy/cost trade-off is also the goal of sequential designs, it seems appropriate to treat the inspection plan as a sequential experiment to be designed online. At each step of the procedure, the Kriging model is updated with the new incoming data, and predictions from the updated model are used to select the next point to inspect. The authors present two case studies related to the checking of two form tolerances, straightness and roundness, and the performance of the Kriging-based plans is compared with that of the simple nonsequential plans massively used in industrial practice (random, stratified) and with a deterministic sequential method taken from the engineering literature.
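The sequential logic just described can be sketched generically as follows. This is not the authors' algorithm, only a minimal illustration under the common assumption that the next point to probe is the candidate with the largest Kriging prediction variance; the simulator standing in for the measured surface, the candidate grid and the kernel parameters are all hypothetical.

```python
import numpy as np

def corr(a, b, theta=5.0):
    """Gaussian correlation between two sets of 1-D points."""
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

def krig_var(X, cand, theta=5.0, sigma2=1.0):
    """Simple-Kriging prediction variance at the candidate points."""
    R = corr(X, X, theta) + 1e-10 * np.eye(len(X))
    r = corr(cand, X, theta)
    rRr = np.einsum('ij,ij->i', r, np.linalg.solve(R, r.T).T)
    return sigma2 * (1.0 - rRr)

surface = lambda x: np.sin(2 * np.pi * x)        # stand-in for the probed surface
cand = np.linspace(0.0, 1.0, 101)                # candidate inspection points
X = np.array([0.1, 0.9])                         # initial, very small plan
y = surface(X)

for _ in range(5):                               # sequential augmentation
    x_next = cand[np.argmax(krig_var(X, cand))]  # most uncertain candidate
    X = np.append(X, x_next)
    y = np.append(y, surface(x_next))            # 'probe' the new point

print(np.round(np.sort(X), 2))
```

Note that with fixed kernel parameters the simple-Kriging variance depends only on the point locations; in an actual sequential plan the model parameters would typically be re-estimated from the newly probed responses before choosing the next point.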
Alvarez, Gil-Negrete, Ilzarbe, Tanco, Viles and Asensio resort to computer simulation based on finite element techniques for designing an accelerometer (A computer experiment application to the design and optimization of a capacitive accelerometer). An accelerometer is a transducer that measures the acceleration acting on a structure; physically, it consists of a central mass suspended by thin and flexible arms, and its performance is highly dependent on the dimensions of both the mass and the arms. The two most important parameters when evaluating the performance of these devices are the sensitivity and the operating frequency range. It is therefore very useful to learn how changes in the dimensions of the mass and arms affect the natural frequency of the accelerometer, as this provides guidelines for designing accelerometers that fulfil the frequency requirements of a specific application. A quadratic polynomial for the natural logarithm of the frequency as a function of the geometrical factors is obtained using the response surface approach, and a face-centred cube design is used in the experimentation; the data are generated via computer simulations with the finite element method (a minimal illustrative sketch of this response-surface step is given after the paper summaries below). A better understanding of how these variables affect the value of the frequency is reached, which will be very useful for device design purposes.

The optimality criteria that are properly taken into account when designing a physical experiment may become meaningless when the experiment is numerical. In fact, output from computer simulation experiments is often approximated by realizations of correlated random fields; consequently, the corresponding optimal design questions must cope with the existence and detection of an error correlation structure, issues largely unaccounted for by traditional optimal design theory. Unfortunately, many of the nice features of well-established design techniques, such as additivity of the information matrix, convexity of design criteria, etc., do not carry over to the setting of interest. This may lead to unexpected, counterintuitive, even paradoxical effects at the design as well as the analysis stage of computer simulation experiments. Müller and Stehlík offer a comprehensive overview and some simple but illuminating examples of this different behaviour in Issues in the optimal design of computer simulation experiments.

The paper Designs for misspecified exponential regression models by Xu concerns the problem of robust design for exponential regression when the response is an only approximately known function of a given exponential function and variance heterogeneity is allowed. The author finds minimax designs and corresponding optimal regression weights in different scenarios. In particular, the author determines a design minimizing the maximum value of the integrated mean squared error (IMSE) for nonlinear least-squares estimation under homoscedasticity, and a design minimizing the maximum IMSE for nonlinear least-squares estimation under heteroscedasticity and for nonlinear weighted least-squares estimation. Moreover, the author derives both the weights and the design that minimize the maximum IMSE, and the weights and design points that minimize the maximum IMSE subject to a side condition of unbiasedness.
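As announced above, here is a minimal sketch of the response-surface step used in the accelerometer study: a face-centred design in two coded factors and a least-squares fit of a full quadratic polynomial to the logarithm of the frequency. The factor names, the toy frequency function standing in for the finite element code, and the coefficient values are all hypothetical.

```python
import numpy as np

# face-centred design in two coded factors (hypothetical mass and arm dimensions):
# 2^2 factorial points, face-centred axial points, and a centre point
factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
axial     = [(-1, 0), (1, 0), (0, -1), (0, 1)]
centre    = [(0, 0)]
D = np.array(factorial + axial + centre, dtype=float)

# toy stand-in for the finite element code: log natural frequency vs. coded factors
def log_freq(x1, x2):
    return 8.0 - 0.6 * x1 + 0.3 * x2 + 0.05 * x1 * x2 - 0.1 * x1**2 + 0.02 * x2**2

x1, x2 = D[:, 0], D[:, 1]
y = log_freq(x1, x2)

# full quadratic model matrix: 1, x1, x2, x1*x2, x1^2, x2^2
Xmat = np.column_stack([np.ones(len(D)), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(Xmat, y, rcond=None)

print(np.round(beta, 3))   # fitted polynomial coefficients
```

The nine-run design supports all six coefficients of the quadratic surface, which can then be used to explore how the (coded) geometric factors drive the natural frequency.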
Carfagna and Marzialetti face the problem of evaluating the quality of land cover data bases produced through the photo-interpretation of remote sensing data according to a legend of land cover types in Sequential design in quality control and validation of land cover data bases. Quality control is regarded as the comparison of a land cover data base with the photo-interpretation made by a more expert photo-interpreter on a sample of polygons, with the percentage of area correctly photo-interpreted taken as the quality measure. The authors also analyse the problem of validation, i.e. the check of the photo-interpretation through a ground survey. The polygons are stratified according to two variables, the land cover type assigned by the photo-interpretation and the size of the polygons, since both variables affect the probability of making mistakes in the photo-interpretation. An adaptive sequential procedure with permanent random numbers is proposed, in which the sample size per stratum depends on the previously selected units but the sample selection does not; moreover, the stopping rule is not based on the estimates of the quality parameter. This quality control and validation procedure allows unbiased and efficient estimates of the quality parameters and achieves high precision of the estimates with the smallest sample size.

The editors of this special issue followed the general rules for publishing papers in this journal in their reviewing process. They wish to thank all those who submitted papers and all the referees who went very carefully through the papers, improving their quality. Special thanks go to Jef Teugels and Fabrizio Ruggeri, Editors-in-Chief of ASMBI, for warmly accepting our proposal and for all the support they gave us during each step of the preparation of this issue.
