Abstract

Testing of concurrent programs is difficult since the scheduling nondeterminism requires one to test a huge number of different thread interleavings. Moreover, repeated test executions performed in the same environment typically examine only similar interleavings. One possible way to deal with this problem is the noise injection approach, which influences the scheduling by injecting various kinds of noise (delays, context switches, etc.) into the common thread behaviour. However, for noise injection to be efficient, one has to choose suitable noise injection heuristics from among the many existing ones, as well as suitable values of their various parameters, which is not easy. In this paper, we propose a novel way to deal with the problem of choosing suitable noise injection heuristics and suitable values of their parameters (as well as suitable values of parameters of the programs under test themselves). Here, by suitable, we mean settings that maximize the chances of meeting a given testing goal (such as, e.g., maximizing coverage of rare behaviours and thus maximizing the chances of finding rarely occurring concurrency-related bugs). Our approach is, in particular, based on using data mining in the context of noise-based testing to gain more insight into the importance of the different heuristics in a particular testing context, as well as to improve fully automated noise-based testing (in combination with both random and genetically optimized noise settings).
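
To make the noise injection idea concrete, the following minimal Java sketch (our illustration, not the instrumentation or heuristics used in the paper) shows how randomized delays and context switches can be injected before accesses to shared state so that repeated runs of the same test exercise different interleavings. The injectNoise helper and the NOISE_FREQUENCY and NOISE_STRENGTH_MS parameters are assumed, illustrative names, not settings defined by the authors.

import java.util.Random;

// A minimal sketch of noise injection: before selected accesses to shared state,
// a small randomized delay or a forced context switch is injected so that repeated
// test runs explore different thread interleavings. The parameter values below are
// illustrative assumptions only.
public class NoiseInjectionSketch {
    private static final Random RNG = new Random();
    // Probability of injecting noise at an instrumented program location.
    private static final double NOISE_FREQUENCY = 0.3;
    // Maximum sleep time in milliseconds when the "sleep" heuristic fires.
    private static final int NOISE_STRENGTH_MS = 5;

    private static int sharedCounter = 0;

    // Called at instrumented points (e.g., before shared-variable accesses).
    static void injectNoise() {
        if (RNG.nextDouble() < NOISE_FREQUENCY) {
            if (RNG.nextBoolean()) {
                Thread.yield();                 // "yield" heuristic: encourage a context switch
            } else {
                try {
                    // "sleep" heuristic: delay the current thread for 1..NOISE_STRENGTH_MS ms
                    Thread.sleep(RNG.nextInt(NOISE_STRENGTH_MS) + 1);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 1000; i++) {
                injectNoise();                  // noise before the racy read-modify-write
                sharedCounter++;                // intentionally unsynchronized update
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // With noise, the lost-update race manifests far more often across runs.
        System.out.println("Expected 2000, observed " + sharedCounter);
    }
}

Choosing which noise heuristic to apply and with what frequency and strength, as sketched above, is exactly the parameter-selection problem that the paper addresses via data mining and genetically optimized noise settings.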
