Introduction. Model-based test case generation is a popular strategy for test automation. It reduces the time spent on developing a test suite and can improve the level of coverage. However, many reports indicate that such generated test cases often suffer from poor quality and questionable efficiency.
Purpose. The main goal of the proposed method is cost-effective validation, assessment, debugging, and concretization of generated test cases. The method helps to improve the quality and efficiency of the test cases and to make their scenarios meaningful and goal-oriented. It also provides debugging facilities and simplifies data dependency analysis and test scenario editing.
Methods. We propose an automated post-processing method that evaluates the path examined by a test case and makes safe changes to that path, eliminating its shortcomings while leaving the coverage targets of the test case intact. The method is based on visualization of the path along the control flow graph of the model, annotated with the actual evaluation history of all variables and the admissible alternative variants of behavior. For consistent substitution of specific values into signal parameters, which determine the artifacts of the test environment (such as files, databases, etc.) and check boundary cases (in condition predicates, array indexing, etc.), a method of interactive specification of symbolic traces has been developed.
Results. The user remains responsible for deciding whether to add a test case to the project test suite and what changes to make to it, but to reduce labor intensity the following processes are automated: evaluation of test scenarios according to objective characteristics (level of coverage, ability to detect defects, data cohesion, etc.); highlighting of possible alternatives for corrections; and consistent updating of computations after each correction. A prototype was developed based on the proposed methods. The empirical results demonstrated a positive impact on the overall efficiency (ability to detect defects and reduced resource consumption) and quality (meaningfulness, readability, maintainability, usefulness for debugging, etc.) of the generated test suites. The method makes automatically generated test cases trustworthy and usable.
Conclusion. The proposed toolkit significantly reduces the time spent on examining the results of test generation, validating the obtained tests, and editing them. Unlike existing simulation methods, the proposed method not only reports the values of variables but also explores the history of their computations and additionally provides information about admissible alternatives. We further plan to improve the localization of the causes of test failures at the execution phase in order to speed up the search for defects.
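To make the post-processing idea more concrete, the sketch below shows one possible way to represent a test-case path annotated with variable evaluation history and alternative branches, and to check that a proposed path edit leaves the coverage targets of the test case intact. This is a minimal illustration under our own assumptions, not the paper's implementation; all names (TraceStep, TestCase, is_safe_edit) are hypothetical.

```python
# Illustrative sketch only: a hypothetical representation of an annotated
# test-case path and a coverage-preserving edit check. Names and structure
# are assumptions, not the published method's actual API.
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class TraceStep:
    """One step of the path along the model's control flow graph."""
    node: str                                # CFG node reached at this step
    valuation: Dict[str, int]                # factual values of variables at this step
    alternatives: Set[str] = field(default_factory=set)  # other feasible successor nodes


@dataclass
class TestCase:
    steps: List[TraceStep]
    coverage_targets: Set[str]               # CFG nodes this test case must cover

    def covered_nodes(self) -> Set[str]:
        return {s.node for s in self.steps}


def is_safe_edit(original: TestCase, edited_steps: List[TraceStep]) -> bool:
    """An edit is 'safe' if the edited path still visits every coverage target."""
    visited = {s.node for s in edited_steps}
    return original.coverage_targets <= visited


# Usage example: redirect one step to an admissible alternative branch
# and verify that the coverage targets of the test case are unharmed.
tc = TestCase(
    steps=[
        TraceStep("start", {"x": 0}),
        TraceStep("check_x", {"x": 0}, alternatives={"branch_pos"}),
        TraceStep("branch_zero", {"x": 0}),
        TraceStep("end", {"x": 0}),
    ],
    coverage_targets={"check_x", "end"},
)

edited = list(tc.steps)
edited[2] = TraceStep("branch_pos", {"x": 1})   # take the alternative branch
print(is_safe_edit(tc, edited))                 # True: 'check_x' and 'end' still covered
```

In an interactive tool, such a check would let the user explore the highlighted alternatives and accept only those corrections that keep the original coverage goals.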