Abstract

Requirements Engineering (RE) is a relatively young discipline, yet many advances have been achieved during the last decades. In particular, numerous RE approaches are proposed in the literature with the aim of understanding a certain problem (e.g. information systems development) and establishing a knowledge base that is shared between domain experts and developers (i.e. a requirements specification). However, there is a growing concern for empirical validations that assess RE proposals and statements. This paper addresses the assessment of the quality of functional requirements specifications, using the Method Evaluation Model (MEM) as a theoretical framework. The MEM distinguishes the actual efficacy and the perceived efficacy of a method. In order to assess the actual efficacy of RE methods, the conceptual model quality framework by Lindland et al. can be applied; in this paper, we focus on the completeness and granularity of requirements models and extend this framework by defining four new metrics (e.g. degree of functional encapsulation completeness with respect to a reference model, number of functional fragmentation errors). In order to assess the perceived efficacy, conventional questionnaires can be used. A laboratory experiment with master's students was carried out in order to compare two RE methods, namely Use Cases and Communication Analysis, using the proposed metrics. With respect to actual efficacy, results indicate greater model quality (in terms of completeness and granularity) when Communication Analysis guidelines are followed. With respect to perceived efficacy, we found that Use Cases was perceived to be slightly easier to use than Communication Analysis; however, Communication Analysis was perceived to be more useful for determining the proper granularity of business processes. The paper discusses these results and highlights some key issues for future research in this area.
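To make the flavour of such metrics concrete, a degree-of-completeness metric can be sketched as the share of reference-model elements that an evaluated requirements model covers. This is a minimal illustrative sketch only: the paper's exact formulas are not reproduced in this excerpt, so the set-based matching criterion, function name, and element names below are assumptions, not the authors' actual operationalization.

    # Minimal sketch, assuming set-based matching of model elements
    # against a reference model; names are hypothetical, not taken
    # from the paper's metric definitions.
    def degree_of_completeness(model_elements, reference_elements):
        """Share of reference-model elements covered by the evaluated
        model (1.0 = fully complete with respect to the reference)."""
        covered = set(model_elements) & set(reference_elements)
        return len(covered) / len(reference_elements)

    # A model covering two of four reference encapsulations
    # yields a completeness degree of 0.5 (i.e. 50%).
    model = {"PLACE ORDER", "SHIP ORDER"}
    reference = {"PLACE ORDER", "SHIP ORDER", "CANCEL ORDER", "INVOICE ORDER"}
    print(degree_of_completeness(model, reference))  # 0.5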

Highlights

  • Requirements Engineering (RE), in spite of being a relatively young discipline, has accumulated an ever-growing body of knowledge

  • We focus on semantic quality; the framework is extended with four new metrics aimed at assessing model completeness and model granularity

  • A larger difference appears in the degree of linked communications completeness: up to 75% for Communication Analysis versus only 50.46% for Use Cases

Introduction

Requirements Engineering (RE), in spite of being a relatively young discipline, has accumulated an ever-growing body of knowledge. It is widely acknowledged that RE has a significant impact on software quality [2]. Most authors act as designers and propose new RE methods, while few act as researchers, validating that their (or other authors') proposals actually improve RE practice; there is a growing concern for validating RE proposals, and empirical evaluations are strongly needed in the area [3, 4]. Seldom do we see work intended to compare features of RE approaches within an experimental context. This paper contributes to bridging the gap between the RE and ESE (Empirical Software Engineering) communities.
