Abstract

Testing helps assure software quality by executing programs and uncovering bugs. Scientific software developers often find systematic, automated testing challenging, owing to factors such as inherent model uncertainty and complex floating-point computations. In this paper, we report a manual analysis of the unit tests written by the developers of the Storm Water Management Model (SWMM). The results show that the 1,458 SWMM tests achieve 54.0% code coverage and 82.4% user manual coverage. We also observe a "getter-setter-getter" pattern in the SWMM unit tests. Based on these results, we offer insights for improving test development and coverage.
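To make the "getter-setter-getter" pattern concrete, the sketch below shows its three-step shape: read a parameter's initial value, overwrite it, then read it again to confirm the write took effect. The `Node` class and its accessor names are hypothetical stand-ins for illustration, not the real SWMM API.

```python
class Node:
    """Hypothetical stand-in for a SWMM drainage-network node."""

    def __init__(self, invert_elevation=0.0):
        self._invert_elevation = invert_elevation

    def get_invert_elevation(self):
        return self._invert_elevation

    def set_invert_elevation(self, value):
        self._invert_elevation = value


def test_invert_elevation_getter_setter_getter():
    node = Node(invert_elevation=1.5)
    # 1) getter: observe the initial value
    assert node.get_invert_elevation() == 1.5
    # 2) setter: update the parameter
    node.set_invert_elevation(3.0)
    # 3) getter: confirm the update took effect
    assert node.get_invert_elevation() == 3.0


test_invert_elevation_getter_setter_getter()
```

The initial getter call distinguishes "the setter worked" from "the value happened to be right all along," which is why the pattern uses two reads rather than one.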
