Abstract

Given the foundational role of system requirements in design projects, designers can benefit from classifying, comparing, and observing connections between requirements. Manually undertaking these processes, however, can be laborious and time-consuming. Previous studies have employed Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art natural language processing (NLP) deep neural network model, to automatically analyze written requirements. Yet, it remains unclear whether BERT can sufficiently capture the nuances that differentiate requirements between and within design documents. This work evaluates BERT’s performance on two requirement classification tasks (one inter-document and one intra-document) executed on a corpus of 1,303 requirements sourced from five system design projects. First, in the “parent document classification” task, a BERT model is fine-tuned to classify requirements according to their originating project. A separate BERT model is then fine-tuned on a “functional classification” task where each requirement is classified as either functional or nonfunctional. When evaluated on test sets, the former model receives a Matthews correlation coefficient (MCC) of 0.95, while the latter receives an MCC of 0.82, indicating BERT’s ability to reliably distinguish requirements. The results also include a comparison with a Word2Vec baseline, over which the fine-tuned BERT models achieve higher classification accuracy. This work then explores the application of BERT’s representations, known as embeddings, to identify similar requirements and predict requirement change.
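
As a concrete illustration of the pipeline the abstract describes, the sketch below fine-tunes a BERT sequence classifier on a functional/nonfunctional labeling task, scores predictions with the Matthews correlation coefficient, and mean-pools BERT’s hidden states into embeddings for requirement similarity. This is a minimal sketch, not the authors’ implementation: it assumes the Hugging Face transformers and scikit-learn libraries, the generic bert-base-uncased checkpoint, and two illustrative requirement strings; the paper’s actual corpus, labels, and hyperparameters are not reproduced here.

```python
# Minimal sketch (not the authors' code) of BERT-based requirement classification
# and embedding similarity. Assumes: transformers, scikit-learn, torch.
import torch
from sklearn.metrics import matthews_corrcoef
from transformers import AutoTokenizer, AutoModel, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# BERT with a binary classification head (functional = 1, nonfunctional = 0).
clf = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Illustrative requirements and labels; the real corpus comes from five design projects.
requirements = [
    "The system shall log every user transaction.",   # functional (illustrative)
    "The interface shall respond within 200 ms.",      # nonfunctional (illustrative)
]
labels = torch.tensor([1, 0])

batch = tokenizer(requirements, padding=True, truncation=True, return_tensors="pt")
outputs = clf(**batch, labels=labels)      # outputs.loss would drive fine-tuning
preds = outputs.logits.argmax(dim=-1)

# Matthews correlation coefficient, the evaluation metric reported in the abstract.
mcc = matthews_corrcoef(labels.numpy(), preds.numpy())

# Sentence-level embeddings (mean-pooled last hidden states) for similarity search.
encoder = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state        # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # average real tokens only
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
```

In practice, a pair of requirements with high cosine similarity between their embeddings would be flagged as related, which is the kind of comparison the abstract proposes for identifying similar requirements and anticipating requirement change.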
