Abstract

User feedback on software products has been shown to be useful for development and can be exceedingly abundant online. Many approaches have been developed to elicit requirements from this large volume of feedback in different ways, including unsupervised clustering underpinned by text embeddings. Text embedding methods vary significantly across the literature, reflecting a lack of consensus on which approaches best cluster user feedback into requirements-relevant groups. This work proposes a methodology for comparing text embeddings of user feedback using existing labelled datasets. Using seven diverse datasets from the literature, we apply this methodology to evaluate both established text embedding techniques from the user feedback analysis literature (including topic modelling and word embeddings) and text embeddings from state-of-the-art deep text embedding models. Results demonstrate that text embeddings produced by state-of-the-art models, most notably the Universal Sentence Encoder (USE), group feedback with similar requirements-relevant characteristics together better than the other evaluated techniques across all seven datasets. These results can help researchers select appropriate embedding techniques when developing future unsupervised clustering approaches within user feedback analysis.
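The pipeline the abstract describes (embed each feedback item, then cluster the embeddings into requirements-relevant groups) can be sketched as follows. This is an illustrative sketch only: the feedback strings are invented, and a simple TF-IDF embedding from scikit-learn stands in for the deep embedding models (such as USE) that the paper actually evaluates.

```python
# Sketch of embed-then-cluster user feedback analysis.
# TF-IDF is a stand-in embedding; the paper compares richer models (e.g. USE).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical user feedback items (two about crashes, two about theming)
feedback = [
    "App crashes when I upload a photo",
    "Crashes every time I attach an image",
    "Please add a dark mode theme",
    "Would love a dark theme option",
]

# Step 1: embed each feedback item as a vector
embeddings = TfidfVectorizer().fit_transform(feedback)

# Step 2: cluster embeddings into requirements-relevant groups
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print(labels)  # one cluster label per feedback item
```

Swapping the vectoriser for a sentence-level embedding model is the comparison the paper's methodology makes systematic: the clustering step stays fixed while the embedding technique varies.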
