Abstract

Many, if not most, software-intensive systems have their mission-critical software modified by people other than their original developers. The resulting misunderstandings often seriously compromise the missions the system supports (over 80% of the functionality in most current ground, sea, air, and space vehicles depends on software). There is a major need for models, methods, processes, and tools for identifying the sources and effects of software misunderstandings, both when preparing a system's software for use and evolution and when evaluating modified software to avoid further misunderstandings. Emphasizing high software understandability enables system maintainers to avoid these misunderstandings as they modify existing software systems. However, while many metrics for understandability have been developed, most of them source-code-based, little to no correlation has been found between these metrics and actual software understandability. In this paper, we instead focus on issue summaries as a non-source-code alternative for measuring understandability. We generate fuzzy rules and linguistic patterns using a sample of issue summaries from the Mozilla community and evaluate 1416 issue summaries from two other software systems to measure the performance of our model. Our results suggest that this approach is a viable way to measure understandability and could be extended to other software maintainability qualities.
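
As a concrete illustration, a minimal Python sketch of fuzzy-rule scoring of issue-summary understandability follows. The feature set (summary length and a vague-term ratio), the triangular membership functions, the two-rule base, and all breakpoints are hypothetical stand-ins chosen for exposition; they are not the rules or linguistic patterns derived from the Mozilla sample.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical list of vague terms; the actual linguistic patterns would be
# mined from the issue-summary corpus.
VAGUE_TERMS = {"broken", "weird", "sometimes", "issue", "thing"}

def understandability(summary):
    """Score a summary on [0, 1]; higher means easier to understand."""
    words = summary.lower().split()
    length = len(words)
    vagueness = sum(w in VAGUE_TERMS for w in words) / max(length, 1)
    # Fuzzy sets over the two features (assumed breakpoints).
    short = tri(length, 0, 3, 8)
    adequate = tri(length, 5, 12, 25)
    vague = tri(vagueness, 0.05, 0.3, 1.0)
    # Rule base: max models OR, min models AND.
    low = max(short, vague)            # short OR vague -> low understandability
    high = min(adequate, 1.0 - vague)  # adequate AND not vague -> high
    # Weighted-average defuzzification of the two output levels.
    total = low + high
    return 0.5 if total == 0 else (0.2 * low + 0.9 * high) / total

print(understandability("it's broken sometimes"))  # ~0.20
print(understandability("Crash in nsDocShell::LoadURI when the referrer header exceeds 4 KB on Windows builds"))  # ~0.90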
