Abstract

Goal models are commonly used requirements engineering artefacts that capture stakeholder requirements and their inter-relationships in a way that supports reasoning about their satisfaction, trade-off analysis, and decision making. However, when there is uncertainty in the data used as evidence to evaluate goal models, it is crucial to understand the confidence or trust level in such evaluations, as uncertainty may increase the risk of making premature or incorrect decisions. Several approaches have been proposed to tackle uncertainty issues and risks in goal models, but none of them considers simple quality measures of the collected data as a starting point. In this paper, we propose a Data Quality Tagging and Propagation Mechanism that computes the confidence level of a goal’s satisfaction level based on the quality of its input data sources. The paper illustrates the approach with examples expressed in the Goal-oriented Requirement Language (GRL), part of the User Requirements Notation (URN) standard, and demonstrates and assesses it through an implementation of the proposed mechanism and a case study. The availability of computed confidence levels as an additional piece of information enables decision makers to (i) modulate the satisfaction information returned by goal models and (ii) make better-informed decisions, including seeking higher-quality data when needed.
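To make the idea concrete, the Python fragment below is a minimal, purely hypothetical sketch of what such a tagging-and-propagation scheme could look like: each data source feeding a leaf goal carries a quality tag in [0, 1], and the weakest tag is propagated upward as the confidence of parent goals. The `Goal` class, the averaging of satisfaction values, and the minimum-based confidence propagation are illustrative assumptions, not the mechanism actually specified in the paper.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Goal:
    name: str
    satisfaction: Optional[float] = None  # leaf goals: measured satisfaction in [0, 1]
    quality: Optional[float] = None       # leaf goals: quality tag of the data source in [0, 1]
    children: List["Goal"] = field(default_factory=list)

def evaluate(goal: Goal) -> Tuple[float, float]:
    """Return (satisfaction, confidence) for a goal.

    Leaf goals carry a measured satisfaction value and a data-quality
    tag that serves as their initial confidence. For intermediate goals,
    this sketch averages child satisfaction and propagates the weakest
    child confidence upward, so a single low-quality data source lowers
    trust in the evaluation of the whole subtree.
    """
    if not goal.children:
        return goal.satisfaction, goal.quality
    results = [evaluate(child) for child in goal.children]
    satisfaction = sum(s for s, _ in results) / len(results)
    confidence = min(c for _, c in results)
    return satisfaction, confidence

# Hypothetical example: two measured sub-goals feed a top-level goal.
top = Goal("Service is dependable", children=[
    Goal("Servers patched", satisfaction=0.9, quality=0.95),  # fresh, automated scan data
    Goal("Staff trained", satisfaction=0.7, quality=0.40),    # stale, self-reported survey
])
print(evaluate(top))  # approx. (0.8, 0.4): satisfaction looks fine, but confidence is low
```

A pessimistic, minimum-based propagation is one natural design choice, since it makes the confidence in a decision only as strong as its weakest piece of evidence, signalling to decision makers where higher-quality data is needed; the propagation rules defined in the paper may differ.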
