This issue has three papers. The first, Finding failures from passed test cases: Improving the pattern classification approach to the testing of mesh simplification programs, by W. K. Chan, Jeffrey C. F. Ho, and T. H. Tse, applies metamorphic testing to mesh simplification programs. The second, Fault localization based on information flow coverage, by Wes Masri, presents results on using information flow to aid fault localization. The third, Fault localization using a model checker, by Andreas Griesmayer, Stefan Staber, and Roderick Bloem, presents ideas on using model checking to support fault localization.

I recently read Norman's classic book, The Design of Everyday Things [1]. Though 22 years old, it is an excellent read and still very relevant. One point Norman made is that people are good at approximating, but not at being perfect. This cognitive trait affects the design and production of everyday things (and software). For most of human history, we created each product by hand, making each one a little better than the previous. That is, our ancestors relied on natural, continuous product evolution. But during the industrial revolution, we invented mass production, and design suddenly became vastly more important. Design mistakes could no longer be corrected in the next product; they were immediately placed into thousands of products. Then we invented software, which made things worse. Software drives the cost of production to almost zero, putting even more pressure on designers.

This led to decades of attempts to get software to be perfect the first time. We invented processes, formal methods, specifications, complicated test criteria, modeling languages, and more, all with the goal of getting the software right before producing thousands of copies. Through great efforts from academic researchers, industrial research labs, and industry practitioners, we have made progress toward getting software right the first time, and our software has gotten better. But Norman's point makes one thing crystal clear: in a very real way, software engineering's pursuit of perfection is quixotic tilting at windmills, a fight against basic human nature. Simply put, people are not naturally good at being perfect! However, we are good at approximating and improving; our ancestors used that approach for thousands of years. And in software, we are coming full circle …

Large segments of the software industry have recently seen enormous changes in the deployment process. Web applications, updates delivered through the web, and the ability to push updates onto mobile devices allow us to update software daily or even hourly. That is, we are returning to our roots, continuous evolution, just as our ancestors practiced for millennia. Creating frequent updates, where we try something, identify problems, then quickly make changes, is a natural human process, and it is being used in industry. I recently attended a talk by the Senior Engineering Director at Google, who said that Google emphasizes ‘fast fixing’ over ‘prevention’, in part by keeping a very short test-fix-update evolutionary cycle [2]. This approach will seem suspect to many academics. Most of us learned our craft in an environment where new versions came out every 4 or 5 years, and from teachers who emphasized safety-critical software, where an evolutionary test-fix-update process is far riskier (if we crash the plane, don't expect another chance!).
But for a large (and growing) part of the software industry, this process works. And yes, this has major impacts on software. A few years ago I published a paper on how quality is more important in web software [3]. Norman and Copeland have helped me add a missing piece to that article: continuous evolution to approximate and improve our software. I believe this to be a good thing. And every testing researcher should pause and ask a question: Which of my ideas are relevant to this new reality?