Abstract

Domain-specific languages (DSLs) are widely used in practice and actively investigated in software engineering research. So far, however, language workbenches do not provide sufficient built-in decision support for language design and improvement. Controlled experiments have the potential to provide data-driven decision support that allows language engineers and researchers to compare different language features with evidence-based feedback. This paper presents an integrated end-to-end tool environment for performing controlled experiments in DSL engineering. The experiment environment is built on top of, and integrated into, the language workbench Meta Programming System (MPS). It supports not only language design but also all steps of experimentation, i.e., planning, operation, analysis & interpretation, as well as presentation & package. The tool environment is demonstrated by means of a running example experiment that compares the time taken to create system acceptance tests for web applications in two different DSLs.
