Abstract

Modeling natural language requirements, especially for a large system, can take significant effort and time. Many automated model-driven approaches partially address this problem. However, the application of state-of-the-art neural network architectures to automated model element identification tasks has not been studied. In this paper, we perform an empirical study on automatic identification of model elements for component state transition models from use case documents. Using six use case documents, we analyzed four neural network architectures and the trade-offs among them: a feedforward neural network, a convolutional neural network, a recurrent neural network (RNN) with long short-term memory (LSTM), and an RNN with gated recurrent units (GRU). We also analyzed how factors such as the type of splitting, prediction, design, and annotation affect the performance of the neural networks. Results on test and unseen data showed that the RNN with GRU is the most effective architecture; however, the factors that yield effective predictions depend on the type of model element.
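
To make the architecture comparison concrete, below is a minimal sketch of what a GRU-based classifier over use case sentences might look like. This is not the authors' implementation: the abstract does not specify features, label sets, or hyperparameters, so the embedding size, hidden size, class count, and all identifiers (UseCaseGRU, NUM_CLASSES) are illustrative assumptions, written here with PyTorch.

```python
# Illustrative sketch only: a GRU encoder that maps an integer-encoded
# use case sentence to model-element class logits. All sizes and names
# are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

VOCAB_SIZE = 5000    # assumed vocabulary size
EMBED_DIM = 100      # assumed embedding dimension
HIDDEN_DIM = 64      # assumed GRU hidden size
NUM_CLASSES = 4      # assumed number of model-element types

class UseCaseGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM, padding_idx=0)
        self.gru = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.fc = nn.Linear(HIDDEN_DIM, NUM_CLASSES)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded sentence
        embedded = self.embed(token_ids)
        _, last_hidden = self.gru(embedded)      # (1, batch, HIDDEN_DIM)
        return self.fc(last_hidden.squeeze(0))   # (batch, NUM_CLASSES) logits

model = UseCaseGRU()
dummy = torch.randint(1, VOCAB_SIZE, (2, 12))    # two sentences, 12 tokens each
print(model(dummy).shape)                        # torch.Size([2, 4])
```

An LSTM variant of the same sketch would swap nn.GRU for nn.LSTM; the GRU's smaller gate count is one common reason it trains faster on small corpora such as the six use case documents studied here.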
