Abstract

Over the last decade, requirements classification has emerged as a hot research topic in Requirements Engineering (RE). Early identification of software requirements helps the development team in the design of software systems, but manual identification and classification of these requirements is a time-consuming and labor-intensive task. To address this issue, machine learning and deep learning techniques have been studied for the automatic classification of requirements. Furthermore, an efficient word embedding representation of the input data is a major concern in automatic approaches. This work presents a novel requirements classification model, called BERT-BiCNN, that integrates Bidirectional Encoder Representations from Transformers (BERT) with a Bidirectional Long Short-Term Memory (BiLSTM) network and a Convolutional Neural Network (CNN) layer. In BERT-BiCNN, BERT serves as the word embedding layer and extracts the full combination of contextual semantics, the BiLSTM captures contextual information, and the CNN layer reduces the dimensionality of the feature space by selecting the important features. The effectiveness of BERT-BiCNN is evaluated on the PROMISE dataset. Comparative analysis shows that the proposed approach outperforms six recent deep-learning-based architectures in both binary and multi-class classification.
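The BiLSTM-plus-CNN pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the hidden sizes, filter count, kernel size, and pooling choice are assumptions (the abstract does not give hyperparameters), and a random tensor stands in for the BERT embedding layer's output.

```python
import torch
import torch.nn as nn

class BiCNNHead(nn.Module):
    """Hypothetical sketch of the BiLSTM + CNN stages described in the
    abstract. Input is assumed to be BERT contextual embeddings
    (hidden size 768); all other sizes are illustrative guesses."""

    def __init__(self, embed_dim=768, lstm_hidden=128, n_filters=64,
                 kernel_size=3, n_classes=2):
        super().__init__()
        # BiLSTM captures contextual information in both directions
        self.bilstm = nn.LSTM(embed_dim, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # CNN layer selects salient features and reduces the
        # dimensionality of the feature space
        self.conv = nn.Conv1d(2 * lstm_hidden, n_filters, kernel_size)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, bert_embeddings):           # (batch, seq_len, 768)
        h, _ = self.bilstm(bert_embeddings)       # (batch, seq_len, 256)
        h = torch.relu(self.conv(h.transpose(1, 2)))  # (batch, 64, L')
        h = torch.max(h, dim=2).values            # global max pooling
        return self.fc(h)                         # (batch, n_classes)

# Stand-in for BERT output: 4 sentences, 16 tokens each, dim 768
x = torch.randn(4, 16, 768)
logits = BiCNNHead()(x)
print(logits.shape)  # torch.Size([4, 2])
```

For multi-class classification over the PROMISE requirement categories, `n_classes` would simply be set to the number of labels instead of 2.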
