Abstract

Software faults are a well-known phenomenon. In most cases, they are merely annoying, as when a computer game does not work as expected, or expensive, as when once again a space project fails due to a faulty data conversion. In critical systems, however, faults can have life-threatening consequences. It is the task of software quality assurance to avoid such faults, but this is a cumbersome, expensive and itself error-prone undertaking. For this reason, research over the last years has aimed at automating this task as much as possible. In this thesis, the combination of constraint solving techniques with formal methods is investigated, with the goal of finding faults in the models and implementations of reactive systems with data, such as automatic teller machines (ATMs). To this end, we first develop a translation of formal specifications in the process algebra µCRL into a constraint logic program (CLP). In the course of this translation, we pay special attention to the fact that the CLP, together with the constraint solver, correctly simulates the underlying term rewriting system. One way to validate a system is to test whether it conforms to its specification. In this thesis, we develop a test process to automatically generate and execute test cases for the conformance test of data-oriented systems. The applicability of this process to process-oriented software systems is demonstrated in a case study with an ATM as the system under test; its applicability to document-centered applications is shown by means of the open source web browser Mozilla Firefox. The test process is partially based on the tool TGV, an enumerative test case generator that derives test cases from a system specification and a test purpose. An enumerative approach to the analysis of system specifications tries to enumerate all possible combinations of values for the system's data elements, i.e. the system's states.
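To illustrate what the CLP must faithfully simulate, the following minimal sketch applies rewrite rules of a tiny term rewriting system for Peano addition. The rule set and the tuple encoding of terms are ours for illustration; they are not taken from the µCRL specifications of the thesis.

```python
# A tiny term rewriting system of the kind the CLP translation must
# simulate correctly. Terms are nested tuples; "0" and ("s", x) encode
# Peano numerals, ("add", x, y) is addition.

def rewrite(term):
    """Apply the rewrite rules for addition until a normal form is reached.
    Rules: add(0, y) -> y  and  add(s(x), y) -> s(add(x, y))."""
    if isinstance(term, tuple) and term[0] == "add":
        _, x, y = term
        x, y = rewrite(x), rewrite(y)
        if x == "0":
            return y
        if isinstance(x, tuple) and x[0] == "s":
            return ("s", rewrite(("add", x[1], y)))
    return term

two = ("s", ("s", "0"))
one = ("s", "0")
print(rewrite(("add", two, one)))  # -> ("s", ("s", ("s", "0"))), i.e. 3
```

In the translated CLP, each rewrite rule becomes a clause, and the constraint solver takes over the role that this recursive normalization plays here.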
The states of the systems we regard here are influenced by data from possibly infinite domains. Hence, the state space of such systems grows beyond all limits (it explodes) and can no longer be handled by enumerative algorithms. For this reason, the state space is limited prior to test case generation by a data abstraction. We use a chaotic abstraction here, in which all possible input data from the system's environment are replaced by a single constant. In parallel, we generate a CLP from the system specification. With this CLP, we reintroduce the actual data at the time of test execution. This approach not only limits the state space of the system, but also leads to a separation of system behavior and data, which allows test cases to be reused by varying only their data parameters. In the developed process, tests are executed by the tool BAiT, which has also been created in the course of this thesis. Some systems do not always show identical behavior under the same circumstances; this phenomenon is known as nondeterminism. There are many reasons for nondeterminism. In most cases, input from a system's environment is processed asynchronously by several components of the system, which do not always terminate in the same order. BAiT works as follows: the tool chooses a trace through the system behavior from the set of traces in the generated test cases, parameterizes this trace with data and tries to execute it. When the nondeterministic system digresses from the selected trace, BAiT tries to adapt the trace appropriately. If this can be done in accordance with the system specification, the test can be executed further, and a possibly false positive test verdict is avoided. Testing an implementation significantly reduces the number of faults in a system. However, the system is only tested against its specification, and in many cases this specification already fails to completely fulfill the customer's expectations.
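The two halves of this idea, abstracting input data away and reintroducing it at execution time, can be sketched as follows. All names here (the constant `chi`, the trace encoding, the stub oracle) are illustrative assumptions, not BAiT's actual interface.

```python
# Hedged sketch of the chaotic data abstraction: every concrete input value
# in a trace is replaced by one constant, collapsing the (possibly infinite)
# input domain to a single abstract value.

CHAOS = "chi"  # the single abstract constant standing for any input

def abstract_trace(trace):
    """Replace the data parameter of every input action by CHAOS."""
    return [(action, CHAOS if kind == "input" else value)
            for action, kind, value in trace]

def reintroduce_data(abstract, oracle):
    """At test-execution time, fill abstract slots with concrete values.
    In the thesis this is done via the CLP; here a stub oracle stands in."""
    return [(action, oracle(action) if value == CHAOS else value)
            for action, value in abstract]

trace = [("insert_card", "input", 4711),
         ("ask_pin", "output", None),
         ("enter_pin", "input", 1234)]
abs_trace = abstract_trace(trace)          # data-independent test behavior
concrete = reintroduce_data(abs_trace, lambda action: 9999)
```

Because the abstract trace carries no concrete data, the same behavioral test case can be re-executed with different parameterizations simply by swapping the data supplied at the second step.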
In order to further reduce the risk of faults, the models of the system themselves also have to be verified. This happens during model checking, prior to testing the software. Again, the explosion of the system's state space must be avoided by a suitable abstraction of the models. A consequence of model abstractions in the context of model checking are so-called false negatives: counterexample traces which point out a fault in the abstracted model, but which do not exist in the concrete one. Usually, these false negatives are ignored. In this thesis, we also develop a methodology to reuse this knowledge of potential faults by abstracting the counterexamples further and deriving a violation pattern from them. Afterwards, we search for a concrete counterexample utilizing a constraint solver.
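The idea of generalizing a spurious counterexample into a pattern and then searching for a concrete witness can be sketched as below. The property, the predicate encoding and the bounded enumeration are illustrative stand-ins; the thesis uses a constraint solver rather than enumeration.

```python
# Hedged sketch: derive a violation pattern from an (abstract) counterexample
# and search for a concrete instance of it. A real constraint solver would
# search the data domain symbolically instead of enumerating a finite slice.

def violation_pattern(counterexample):
    """Abstract the counterexample further: keep only the condition a
    concrete data value must satisfy to reproduce the violation."""
    return lambda x: counterexample["guard"](x) and not counterexample["spec"](x)

def find_concrete(pattern, domain):
    """Look for a witness in a finite slice of the data domain."""
    for x in domain:
        if pattern(x):
            return x
    return None  # no concrete counterexample: the abstract one was spurious

# Toy safety property: a withdrawal amount must not exceed a limit of 100.
ce = {"guard": lambda x: x > 0, "spec": lambda x: x <= 100}
witness = find_concrete(violation_pattern(ce), range(0, 200))  # -> 101
```

If the search succeeds, the abstract counterexample was not a false negative after all: the witness instantiates it into a concrete trace of the unabstracted model.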
