Abstract

Source code analysis has been, and still is, an extensively researched topic with various applications in the modern software industry. In this paper we share our experience in applying various source code analysis techniques to assess the quality of, and detect potential defects in, a large mission-critical software system. The case study concerns the maintenance of a software system of a Bulgarian government agency. The system was developed by a third-party software vendor over a period of four years, producing over 4 million LOC using more than 20 technologies. Musala Soft won a tender for maintaining this system in 2008. Although the system was operational, its users were aware of various issues, so a decision was made to assess the system's quality with various source code analysis tools. The expectation was that the findings would reveal the causes of some of the problems, allowing us to correct the issues, improve the quality, and focus on functional enhancements.

Musala Soft had already established a special unit, the Applied Research and Development Center, dealing with research and advancements in the area of software system analysis. A natural next step was for this unit to use its know-how and in-house developed tools to perform the assessment. The team used several techniques that have been the subject of intense research: software metrics, code clone detection, defect and "code smell" detection through flow-sensitive and points-to analysis, software visualization, and graph drawing. In addition to open-source and free commercial tools, the team used internally developed ones that complement or improve on what was available. The internally developed Smart Source Analyzer platform focuses on several analysis areas: source code modeling, allowing easy navigation through code elements and relations for different programming languages; quality audit through software metrics, aggregating various metrics into a more meaningful quality characteristic (e.g. "maintainability"); and source code pattern recognition, to detect various security issues and "code smells".

The produced results presented information about both the structure of the system and its quality. As the analysis was executed at the beginning of the maintenance tenure, it was vital for the team members to quickly grasp the architecture and the business logic. It was equally important to review the detected quality problems, as this guided the team to quick solutions for the existing issues and highlighted areas that would impede future improvements. The iPlasma tool and its System Complexity View (Fig. 1) revealed where the business logic is concentrated and which elements of the system are the most important and the most complex. The analysis with our internal metrics framework (Fig. 2) pointed out places that need refactoring because the code is hard to modify on request or practically impossible to test. The code clone detection tools showed places where copy-and-paste programming had been applied. The PMD, FindBugs, and Klocwork Solo tools were used to detect various "code smells" (Fig. 3); a number of the reported occurrences were indeed bugs in the system.
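To make the metric-aggregation idea concrete, the following minimal sketch combines three raw class-level metrics into a single "maintainability" score. The metric names, normalization ranges, and weights are illustrative assumptions, not Smart Source Analyzer's actual model.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: aggregate raw metrics into a "maintainability"
// characteristic. All names, ranges, and weights are assumptions for
// illustration only.
public class MaintainabilityScore {

    // Normalize a raw metric value into [0, 1], where 1 is "good".
    // 'best' and 'worst' bound the useful range of the metric.
    static double normalize(double value, double best, double worst) {
        double score = (worst - value) / (worst - best);
        return Math.max(0.0, Math.min(1.0, score));
    }

    public static void main(String[] args) {
        // Raw metric values for one class (assumed inputs).
        double cyclomaticComplexity = 38;
        double linesOfCode = 950;
        double coupling = 14;

        // Each entry holds {normalized score, weight}; weights sum to 1.
        Map<String, double[]> parts = new LinkedHashMap<>();
        parts.put("complexity", new double[]{normalize(cyclomaticComplexity, 10, 80), 0.4});
        parts.put("size",       new double[]{normalize(linesOfCode, 200, 2000), 0.3});
        parts.put("coupling",   new double[]{normalize(coupling, 5, 30), 0.3});

        double maintainability = 0.0;
        for (double[] p : parts.values()) {
            maintainability += p[0] * p[1];
        }
        System.out.printf("maintainability = %.2f (0 = worst, 1 = best)%n", maintainability);
    }
}
```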
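Code clone detection can be illustrated with a similarly small sketch: normalize source lines, hash fixed-size windows of them, and report windows that occur more than once. Production clone detectors are token- or AST-based and far more robust; this only shows the core idea, with the window size and normalization rules chosen arbitrarily.

```java
import java.util.*;

// Minimal line-based clone detection sketch (assumed parameters).
public class CloneSketch {

    static final int WINDOW = 3; // minimum clone length in lines (assumed)

    public static void main(String[] args) {
        String[] source = {
            "int total = 0;",
            "for (int i = 0; i < n; i++) {",
            "    total += prices[i];",
            "}",
            "int sum = 0;",
            "for (int j = 0; j < n; j++) {",
            "    sum += prices[j];",
            "}",
        };

        // Map each normalized window to the line numbers where it starts.
        Map<String, List<Integer>> windows = new HashMap<>();
        for (int i = 0; i + WINDOW <= source.length; i++) {
            StringBuilder key = new StringBuilder();
            for (int j = i; j < i + WINDOW; j++) {
                // Crude normalization: drop whitespace and collapse every
                // identifier-like token (keywords included) to "ID" so that
                // renamed copies still match.
                key.append(source[j].replaceAll("\\s+", "")
                                    .replaceAll("[A-Za-z_][A-Za-z0-9_]*", "ID"))
                   .append('\n');
            }
            windows.computeIfAbsent(key.toString(), k -> new ArrayList<>()).add(i + 1);
        }

        windows.values().stream()
               .filter(lines -> lines.size() > 1)
               .forEach(lines -> System.out.println("Possible clone starting at lines " + lines));
    }
}
```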
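The "code smells" reported by tools such as PMD and FindBugs are often genuine defects. The hypothetical snippet below shows one classic example of the kind such tools flag: comparing strings with == tests reference identity rather than content. It is an illustrative example, not a finding taken from the analyzed system.

```java
// Illustrative "code smell": string comparison with == is a frequent
// source of real bugs, since it compares references, not content.
public class SmellExample {
    static boolean isAdmin(String role) {
        return role == "admin";            // smell: should be role.equals("admin")
    }

    public static void main(String[] args) {
        // Fails for a string built at run time, even though the text matches.
        String role = new String("admin");
        System.out.println(isAdmin(role));           // false (reference comparison)
        System.out.println("admin".equals(role));    // true  (content comparison)
    }
}
```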
Although these results contributed to the successful execution of the project, some challenges remain that should be addressed through more extensive research. The two aspects we consider most important are usability and integration. Because most of the tools require a very deep understanding of the underlying analysis, the whole process requires tight cooperation between the analysis team and the maintenance team. For example, most of the available metrics tools report a specific value for a given metric without any indication of what the value means or what threshold is acceptable. Our internal metrics framework aggregates the metrics into meaningful quality characteristics, which partially solves the issue. However, the user still often wonders about the justification behind a given quality characteristic. There is a need for an explanation system: one that could point out the relevant source code elements and explain why they are considered good or bad.

The integration aspect is important because such analysis should be performed continuously. In our experience, the analysis is usually performed after an important event; in this case, the beginning of the maintenance tenure. Quality assurance practices should be developed and adopted by the development teams so that implementation quality is checked continuously. This should cover various activities and instruments, such as the integrated development environment, the code review process, and automated builds.

In conclusion, we think that implementation quality audit and management is a vital activity that should be integrated into the software development process, and the tools that support it should be usable by development team members without deep knowledge of the underlying analysis. In this paper we presented a case study that demonstrates the benefits of such a process.
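As an illustration of the explanation-system idea discussed above, the following sketch classifies a metric value against thresholds and produces a human-readable justification instead of a bare number. The element names, thresholds, and wording are hypothetical, and a real system would derive them from calibrated benchmark data.

```java
// Hypothetical sketch of an "explanation system": report why an element
// is considered problematic, not just the raw metric value.
public class MetricExplainer {

    static String explain(String element, String metric, double value,
                          double warn, double fail) {
        if (value >= fail) {
            return element + ": " + metric + " = " + value + " exceeds " + fail
                 + "; the element is very hard to modify or test and should be refactored first.";
        } else if (value >= warn) {
            return element + ": " + metric + " = " + value + " exceeds " + warn
                 + "; consider splitting it before it degrades further.";
        }
        return element + ": " + metric + " = " + value + " is within the accepted range.";
    }

    public static void main(String[] args) {
        System.out.println(explain("OrderService.process()", "cyclomatic complexity", 42, 10, 25));
        System.out.println(explain("Invoice.total()", "cyclomatic complexity", 7, 10, 25));
        // In a continuous-integration build, any "fail" finding could abort
        // the build so that quality is checked on every commit rather than
        // only after major events such as the start of a maintenance tenure.
    }
}
```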
