This issue contains two papers that differ in a number of significant ways but also share some common themes. The differences include the problem addressed (debugging versus notions of test coverage for regression testing) and the type of entity analysed (models versus source code). What they have in common is that both explore alternatives regarding what is observed or monitored. They thus fit into a significant tranche of work that investigates different notions of observation and how these affect software engineering problems.

Our first paper, Simulink fault localization: an iterative statistical debugging approach, by Bing Liu, Lucia, Shiva Nejati, Lionel C. Briand and Thomas Bruckmann, looks at debugging based on Simulink models (recommended by Tse). Simulink is widely used in the development of embedded systems, with code being auto-generated from a Simulink model. The motivation for the work described in this paper is that developers build and test Simulink models directly, so there is a need for debugging techniques that operate at the model level. The authors adapt spectrum-based debugging techniques for use with Simulink, leading to what they call the SimFL technique. This ranks the blocks of a model according to how 'suspicious' they are, based on the results of passing and failing tests (a generic suspiciousness formula of this kind is sketched below). One issue that has to be considered when reasoning about testing or debugging is what one regards as an output. A Simulink model will typically contain multiple blocks, with the outputs of some blocks acting as inputs to others. The authors observe that a developer might note that some such outputs are correct while others are faulty. They therefore propose a method that generates a dynamic slice for each output considered; the spectrum-based approach then operates on these slices. The authors also take advantage of the hierarchical nature of Simulink models to produce an iterative version of their approach (iSimFL). The results of empirical studies on three industrial systems were promising, with developers only having to inspect between 1.3% and 4.4% of the model blocks when using iSimFL.

The second paper, UCov: a user-defined coverage criterion for test case intent verification, by Rawad Abou Assi, Wes Masri and Fadi Zaraket, introduces a new notion of test coverage (recommended by Harman). Most traditional coverage criteria are based on particular syntactic features of a program or model. The authors argue that one might instead consider a notion of intent, expressed as test requirements. When the system under test is run with a test input, the result is an execution trace; test requirements refer to properties of this trace. Test requirements can capture structural aspects, such as the branches followed, and also behavioural aspects, such as properties of the values of program variables (a sketch of such a requirement is given below). The authors argue that a potential advantage of test requirements is that they may be more robust to change, and so well suited to regression testing. For example, the approach can discover situations in which a test case remains valid for software that has changed but no longer satisfies the corresponding test requirement. The authors propose that, if a test case finds a fault, the tester should add a new test requirement that captures why the test case found the fault. If this is done and a later change leads to the test requirement no longer being satisfied, the tester can be warned, and there is the potential to add a new test case that does satisfy the intent.
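To illustrate the spectrum-based ranking underlying the first paper, consider the widely used Tarantula metric from the fault localization literature (the exact scoring used by SimFL may differ). For a block $b$ executed by $\mathit{fail}(b)$ of the failing tests and $\mathit{pass}(b)$ of the passing tests,
\[
\mathrm{susp}(b) \;=\; \frac{\mathit{fail}(b)/\mathit{totalfail}}{\mathit{fail}(b)/\mathit{totalfail} \;+\; \mathit{pass}(b)/\mathit{totalpass}},
\]
so blocks that are executed mostly by failing tests receive scores close to 1 and are inspected first, while blocks executed mostly by passing tests receive scores close to 0.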
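To make the second paper's notion of a test requirement concrete, the sketch below shows how such a requirement might be expressed as a predicate over an execution trace and checked during regression runs. The names and data structures are hypothetical and are not the UCov implementation; the sketch simply assumes a trace recorded as a sequence of events carrying a location label and the values of program variables.

```python
# Illustrative sketch only: hypothetical names, not the UCov implementation.
# A test requirement is a predicate over an execution trace, combining a
# structural aspect (a branch that must be taken) with a behavioural aspect
# (a property of a program variable observed during the run).

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class TraceEvent:
    location: str                               # e.g. "withdraw:overdraft-branch"
    variables: Dict[str, float] = field(default_factory=dict)


@dataclass
class ExecutionTrace:
    events: List[TraceEvent] = field(default_factory=list)


# A test requirement maps a trace to True (intent exercised) or False.
TestRequirement = Callable[[ExecutionTrace], bool]


def overdraft_requirement(trace: ExecutionTrace) -> bool:
    """Intent: the overdraft branch is taken while the balance is negative."""
    return any(
        e.location == "withdraw:overdraft-branch"
        and e.variables.get("balance", 0.0) < 0
        for e in trace.events
    )


def check_requirements(trace: ExecutionTrace,
                       requirements: Dict[str, TestRequirement]) -> Dict[str, bool]:
    # During regression testing, an unsatisfied requirement warns the tester
    # that the test case no longer exercises its original intent, even if the
    # test case itself still passes on the changed software.
    return {name: req(trace) for name, req in requirements.items()}
```

In this reading, a test case that still passes after a change but whose requirement evaluates to False is exactly the situation the paper flags: the test remains valid, yet it no longer exercises the intent it was written to capture.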